2017-10-23 15:56:34,510 | INFO | log.py ( 79) | setupRaet | Setting RAET log level 2
2017-10-23 15:56:34,523 | DEBUG | start_sovrin_node ( 39) | | You can find logs in /home/sovrin/.sovrin/Node1.log
2017-10-23 15:56:34,523 | DEBUG | start_sovrin_node ( 42) | | Sovrin related env vars: []
2017-10-23 15:56:35,916 | DEBUG | __init__.py ( 60) | register | Registered VCS backend: git
2017-10-23 15:56:35,956 | DEBUG | __init__.py ( 60) | register | Registered VCS backend: hg
2017-10-23 15:56:36,103 | DEBUG | __init__.py ( 60) | register | Registered VCS backend: svn
2017-10-23 15:56:36,104 | DEBUG | __init__.py ( 60) | register | Registered VCS backend: bzr
2017-10-23 15:56:36,526 | DEBUG | selector_events.py ( 53) | __init__ | Using selector: EpollSelector
2017-10-23 15:56:36,528 | DEBUG | looper.py ( 123) | __init__ | Setting handler for SIGINT
2017-10-23 15:56:36,581 | DEBUG | ledger.py ( 206) | start | Starting ledger...
2017-10-23 15:56:36,582 | DEBUG | file_store.py ( 190) | appendNewLineIfReq | new line check for file: /home/sovrin/.sovrin/data/nodes/Node1/transactions_sandbox/1
2017-10-23 15:56:36,582 | DEBUG | ledger.py ( 78) | recoverTree | Recovering tree from transaction log
2017-10-23 15:56:36,642 | DEBUG | ledger.py ( 93) | recoverTree | Recovered tree in 0.06019907898735255 seconds
2017-10-23 15:56:36,700 | DEBUG | idr_cache.py ( 25) | __init__ | Initializing identity cache Node1
2017-10-23 15:56:36,728 | INFO | node.py (2408) | initStateFromLedger | Node1 found state to be empty, recreating from ledger
2017-10-23 15:56:36,757 | DEBUG | ledger.py ( 206) | start | Starting ledger...
2017-10-23 15:56:36,758 | DEBUG | file_store.py ( 190) | appendNewLineIfReq | new line check for file: /home/sovrin/.sovrin/data/nodes/Node1/pool_transactions_sandbox/1
2017-10-23 15:56:36,758 | DEBUG | ledger.py ( 78) | recoverTree | Recovering tree from transaction log
2017-10-23 15:56:36,814 | DEBUG | ledger.py ( 93) | recoverTree | Recovered tree in 0.055594366043806076 seconds
2017-10-23 15:56:36,814 | INFO | node.py (2408) | initStateFromLedger | Node1 found state to be empty, recreating from ledger
2017-10-23 15:56:36,884 | DEBUG | plugin_loader.py ( 95) | _load | skipping plugin plugin_firebase_stats_consumer[class: typing.Dict<~KT, ~VT>] because it does not have a 'pluginType' attribute
2017-10-23 15:56:36,884 | DEBUG | plugin_loader.py ( 95) | _load | skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute
2017-10-23 15:56:36,885 | DEBUG | plugin_loader.py ( 95) | _load | skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute
2017-10-23 15:56:36,885 | DEBUG | plugin_loader.py ( 95) | _load | skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute
2017-10-23 15:56:36,885 | INFO | plugin_loader.py ( 116) | _load | plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
2017-10-23 15:56:36,885 | DEBUG | plugin_loader.py ( 95) | _load | skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute
2017-10-23 15:56:36,885 | DEBUG | has_action_queue.py ( 77) | startRepeating | checkPerformance will be repeating every 60 seconds
2017-10-23 15:56:36,885 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 1 to run in 60 seconds
2017-10-23 15:56:36,886 | INFO | replica.py ( 300) | h | Node1:0 set watermarks as 0 300
2017-10-23 15:56:36,886 | DISPLAY | node.py (1034) | addReplica | Node1 added replica Node1:0 to instance 0 (master)
2017-10-23 15:56:36,886 | INFO | replica.py ( 300) | h | Node1:1 set watermarks as 0 300
2017-10-23 15:56:36,886 | DISPLAY | node.py (1034) | addReplica | Node1 added replica Node1:1 to instance 1 (backup)
2017-10-23 15:56:36,886 | DEBUG | has_action_queue.py ( 77) | startRepeating | checkPerformance will be repeating every 10 seconds
2017-10-23 15:56:36,887 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 1 to run in 10 seconds
2017-10-23 15:56:36,887 | DEBUG | has_action_queue.py ( 77) | startRepeating | checkNodeRequestSpike will be repeating every 60 seconds
2017-10-23 15:56:36,887 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 2 to run in 60 seconds
2017-10-23 15:56:36,887 | DEBUG | plugin_helper.py ( 24) | loadPlugins | Plugin loading started to load plugins from basedir: /home/sovrin/.sovrin
2017-10-23 15:56:36,887 | DEBUG | plugin_helper.py ( 33) | loadPlugins | Plugin directory created at: /home/sovrin/.sovrin/plugins
2017-10-23 15:56:36,887 | DEBUG | plugin_helper.py ( 67) | loadPlugins | Total plugins loaded from basedir /home/sovrin/.sovrin are : 0
2017-10-23 15:56:36,887 | DEBUG | node.py ( 325) | __init__ | total plugins loaded in node: 0
2017-10-23 15:56:36,912 | DEBUG | ledger.py ( 206) | start | Starting ledger...
2017-10-23 15:56:36,913 | DEBUG | file_store.py ( 190) | appendNewLineIfReq | new line check for file: /home/sovrin/.sovrin/data/nodes/Node1/config_transactions/1
2017-10-23 15:56:36,913 | DEBUG | ledger.py ( 78) | recoverTree | Recovering tree from transaction log
2017-10-23 15:56:36,913 | DEBUG | ledger.py ( 93) | recoverTree | Recovered tree in 0.0003030860098078847 seconds
2017-10-23 15:56:36,941 | INFO | node.py (2408) | initStateFromLedger | Node1 found state to be empty, recreating from ledger
2017-10-23 15:56:36,942 | DEBUG | motor.py ( 34) | set_status | Node1 changing status from stopped to starting
2017-10-23 15:56:36,942 | DEBUG | ledger.py ( 204) | start | Ledger already started.
2017-10-23 15:56:36,942 | INFO | zstack.py ( 312) | start | Node1 starting with restricted as True and reSetupAuth as True
2017-10-23 15:56:36,942 | DEBUG | authenticator.py ( 31) | start | Starting ZAP at inproc://zeromq.zap.1
2017-10-23 15:56:36,942 | DEBUG | base.py ( 72) | allow | Allowing 0.0.0.0
2017-10-23 15:56:36,943 | DEBUG | base.py ( 112) | configure_curve | Configure curve: *[/home/sovrin/.sovrin/Node1/public_keys]
2017-10-23 15:56:36,943 | DEBUG | zstack.py ( 339) | open | Node1 will bind its listener at 9701
2017-10-23 15:56:36,944 | INFO | stacks.py ( 76) | start | Node1 listening for other nodes at 0.0.0.0:9701
2017-10-23 15:56:36,944 | INFO | zstack.py ( 312) | start | Node1C starting with restricted as False and reSetupAuth as True
2017-10-23 15:56:36,944 | DEBUG | authenticator.py ( 31) | start | Starting ZAP at inproc://zeromq.zap.2
2017-10-23 15:56:36,944 | DEBUG | base.py ( 72) | allow | Allowing 0.0.0.0
2017-10-23 15:56:36,944 | DEBUG | base.py ( 112) | configure_curve | Configure curve: *[*]
2017-10-23 15:56:36,944 | DEBUG | zstack.py ( 339) | open | Node1C will bind its listener at 9702
2017-10-23 15:56:36,945 | INFO | node.py ( 594) | start | Node1 first time running...
2017-10-23 15:56:36,948 | DEBUG | kit_zstack.py ( 96) | connectToMissing | Node1 found the following missing connections: Node4, Node2, Node3
2017-10-23 15:56:36,949 | TRACE | remote.py ( 84) | connect | connecting socket 58 55275712 to remote Node4:HA(host='10.0.0.5', port=9707)
2017-10-23 15:56:36,949 | INFO | zstack.py ( 580) | connect | Node1 looking for Node4 at 10.0.0.5:9707
2017-10-23 15:56:36,950 | DEBUG | zstack.py ( 643) | sendPingPong | Node1 will be sending in batch
2017-10-23 15:56:36,950 | TRACE | remote.py ( 84) | connect | connecting socket 59 55379776 to remote Node2:HA(host='10.0.0.3', port=9703)
2017-10-23 15:56:36,950 | INFO | zstack.py ( 580) | connect | Node1 looking for Node2 at 10.0.0.3:9703
2017-10-23 15:56:36,950 | DEBUG | zstack.py ( 643) | sendPingPong | Node1 will be sending in batch
2017-10-23 15:56:36,951 | TRACE | remote.py ( 84) | connect | connecting socket 60 55416960 to remote Node3:HA(host='10.0.0.4', port=9705)
2017-10-23 15:56:36,951 | INFO | zstack.py ( 580) | connect | Node1 looking for Node3 at 10.0.0.4:9705
2017-10-23 15:56:36,951 | DEBUG | zstack.py ( 643) | sendPingPong | Node1 will be sending in batch
2017-10-23 15:56:36,951 | DEBUG | kit_zstack.py ( 47) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 15:56:36,964 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message b'pi' to Node4
2017-10-23 15:56:36,964 | TRACE | batched.py ( 85) | flushOutBoxes | Node1 sending msg b'pi' to Node4
2017-10-23 15:56:36,964 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message b'pi' to Node3
2017-10-23 15:56:36,964 | TRACE | batched.py ( 85) | flushOutBoxes | Node1 sending msg b'pi' to Node3
2017-10-23 15:56:36,964 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message b'pi' to Node2
2017-10-23 15:56:36,964 | TRACE | batched.py ( 85) | flushOutBoxes | Node1 sending msg b'pi' to Node2
2017-10-23 15:56:42,669 | TRACE | zstack.py ( 472) | _receiveFromListener | Node1 got 1 messages through listener
2017-10-23 15:56:42,669 | DEBUG | zstack.py ( 652) | handlePingPong | Node1 got ping from Node3
2017-10-23 15:56:42,670 | DEBUG | zstack.py ( 643) | sendPingPong | Node1 will be sending in batch
2017-10-23 15:56:42,670 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message b'po' to Node3
2017-10-23 15:56:42,670 | TRACE | batched.py ( 85) | flushOutBoxes | Node1 sending msg b'po' to Node3
2017-10-23 15:56:42,697 | TRACE | zstack.py ( 472) | _receiveFromListener | Node1 got 2 messages through listener
2017-10-23 15:56:42,698 | DEBUG | zstack.py ( 658) | handlePingPong | Node1 got pong from Node3
2017-10-23 15:56:42,698 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node3: LEDGER_STATUS{'viewNo': None, 'merkleRoot': '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA', 'txnSeqNo': 4, 'ppSeqNo': None, 'ledgerId': 0}
2017-10-23 15:56:42,698 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'op': 'LEDGER_STATUS', 'ppSeqNo': None, 'viewNo': None, 'merkleRoot': '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA', 'txnSeqNo': 4, 'ledgerId': 0}, 'Node3')
2017-10-23 15:56:42,698 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox LEDGER_STATUS{'viewNo': None, 'merkleRoot': '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA', 'txnSeqNo': 4, 'ppSeqNo': None, 'ledgerId': 0}
2017-10-23 15:56:42,699 | DEBUG | ledger_manager.py ( 244) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'viewNo': None, 'merkleRoot': '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA', 'txnSeqNo': 4, 'ppSeqNo': None, 'ledgerId': 0} from Node3
2017-10-23 15:56:42,699 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 0 of size 4 with 4
2017-10-23 15:56:42,699 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 0 of size 4 with 4
2017-10-23 15:56:42,718 | DEBUG | keep_in_touch.py ( 68) | conns | Node1's connections changed from set() to {'Node3'}
2017-10-23 15:56:42,718 | INFO | keep_in_touch.py ( 96) | _connsChanged | Node1 now connected to Node3
2017-10-23 15:56:42,719 | DEBUG | node.py (2593) | send | Node1 sending message LEDGER_STATUS{'viewNo': None, 'merkleRoot': '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA', 'txnSeqNo': 4, 'ppSeqNo': None, 'ledgerId': 0} to 1 recipients: ['Node3']
2017-10-23 15:56:42,721 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message b'{"op":"LEDGER_STATUS","ppSeqNo":null,"viewNo":null,"merkleRoot":"6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA","txnSeqNo":4,"ledgerId":0}' to Node3
2017-10-23 15:56:42,721 | TRACE | batched.py ( 85) | flushOutBoxes | Node1 sending msg b'{"op":"LEDGER_STATUS","ppSeqNo":null,"viewNo":null,"merkleRoot":"6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA","txnSeqNo":4,"ledgerId":0}' to Node3
2017-10-23 15:56:44,941 | TRACE | zstack.py ( 472) | _receiveFromListener | Node1 got 1 messages through listener
2017-10-23 15:56:44,942 | DEBUG | zstack.py ( 652) | handlePingPong | Node1 got ping from Node2
2017-10-23 15:56:44,942 | DEBUG | zstack.py ( 643) | sendPingPong | Node1 will be sending in batch
2017-10-23 15:56:44,943 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message b'po' to Node2
2017-10-23 15:56:44,943 | TRACE | batched.py ( 85) | flushOutBoxes | Node1 sending msg b'po' to Node2
2017-10-23 15:56:44,958 | TRACE | zstack.py ( 472) | _receiveFromListener | Node1 got 1 messages through listener
2017-10-23 15:56:44,958 | DEBUG | zstack.py ( 658) | handlePingPong | Node1 got pong from Node2
2017-10-23 15:56:44,959 | DEBUG | keep_in_touch.py ( 68) | conns | Node1's connections changed from {'Node3'} to {'Node2', 'Node3'}
2017-10-23 15:56:44,959 | INFO | keep_in_touch.py ( 96) | _connsChanged | Node1 now connected to Node2
2017-10-23 15:56:44,959 | DEBUG | motor.py ( 34) | set_status | Node1 changing status from starting to started_hungry
2017-10-23 15:56:44,959 | DEBUG | node.py ( 918) | checkInstances | Node1 choosing to start election on the basis of count 3 and nodes {'Node2', 'Node3'}
2017-10-23 15:56:44,960 | DEBUG | primary_selector.py ( 74) | get_msgs_for_lagged_nodes | Node1 has no ViewChangeDone message to send for view 0
2017-10-23 15:56:44,960 | DEBUG | node.py ( 879) | send_current_state_to_lagging_node | Node1 sending current state CURRENT_STATE{'primary': [], 'viewNo': 0} to lagged node Node2
2017-10-23 15:56:44,960 | DEBUG | node.py (2593) | send | Node1 sending message CURRENT_STATE{'primary': [], 'viewNo': 0} to 1 recipients: ['Node2']
2017-10-23 15:56:44,960 | DEBUG | node.py (2593) | send | Node1 sending message LEDGER_STATUS{'viewNo': None, 'merkleRoot': '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA', 'txnSeqNo': 4, 'ppSeqNo': None, 'ledgerId': 0} to 1 recipients: ['Node2']
2017-10-23 15:56:44,963 | DEBUG | batched.py ( 89) | flushOutBoxes | Node1 batching 2 msgs to Node2 into one transmission
2017-10-23 15:56:44,963 | TRACE | batched.py ( 90) | flushOutBoxes | messages: deque([b'{"primary":[],"op":"CURRENT_STATE","viewNo":0}', b'{"op":"LEDGER_STATUS","ppSeqNo":null,"viewNo":null,"merkleRoot":"6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA","txnSeqNo":4,"ledgerId":0}'])
2017-10-23 15:56:44,963 | TRACE | batched.py ( 98) | flushOutBoxes | Node1 sending payload to Node2: b'{"op":"BATCH","messages":["{\\"primary\\":[],\\"op\\":\\"CURRENT_STATE\\",\\"viewNo\\":0}","{\\"op\\":\\"LEDGER_STATUS\\",\\"ppSeqNo\\":null,\\"viewNo\\":null,\\"merkleRoot\\":\\"6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA\\",\\"txnSeqNo\\":4,\\"ledgerId\\":0}"],"signature":null}'
2017-10-23 15:56:44,964 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message b'{"op":"BATCH","messages":["{\\"primary\\":[],\\"op\\":\\"CURRENT_STATE\\",\\"viewNo\\":0}","{\\"op\\":\\"LEDGER_STATUS\\",\\"ppSeqNo\\":null,\\"viewNo\\":null,\\"merkleRoot\\":\\"6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA\\",\\"txnSeqNo\\":4,\\"ledgerId\\":0}"],"signature":null}' to Node2
2017-10-23 15:56:45,004 | TRACE | zstack.py ( 472) | _receiveFromListener | Node1 got 1 messages through listener
2017-10-23 15:56:45,005 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node2: LEDGER_STATUS{'viewNo': None, 'merkleRoot': '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA', 'txnSeqNo': 4, 'ppSeqNo': None, 'ledgerId': 0}
2017-10-23 15:56:45,005 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'op': 'LEDGER_STATUS', 'ppSeqNo': None, 'viewNo': None, 'merkleRoot': '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA', 'txnSeqNo': 4, 'ledgerId': 0}, 'Node2')
2017-10-23 15:56:45,005 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox LEDGER_STATUS{'viewNo': None, 'merkleRoot': '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA', 'txnSeqNo': 4, 'ppSeqNo': None, 'ledgerId': 0}
2017-10-23 15:56:45,006 | DEBUG | ledger_manager.py ( 244) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'viewNo': None, 'merkleRoot': '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA', 'txnSeqNo': 4, 'ppSeqNo': None, 'ledgerId': 0} from Node2
2017-10-23 15:56:45,006 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 0 of size 4 with 4
2017-10-23 15:56:45,006 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 0 of size 4 with 4
2017-10-23 15:56:45,006 | DEBUG | ledger_manager.py ( 309) | processLedgerStatus | Node1 found out from {'Node2', 'Node3'} that its ledger of type 0 is latest
2017-10-23 15:56:45,006 | DEBUG | ledger_manager.py ( 312) | processLedgerStatus | Node1 found from ledger status LEDGER_STATUS{'viewNo': None, 'merkleRoot': '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA', 'txnSeqNo': 4, 'ppSeqNo': None, 'ledgerId': 0} that it does not need catchup
2017-10-23 15:56:45,006 | DEBUG | node.py (1537) | preLedgerCatchUp | Node1 going to process any ordered requests before starting catchup.
2017-10-23 15:56:45,006 | DEBUG | replica.py (2130) | _remove_ordered_from_queue | Node1:0 going to remove 0 Ordered messages from outbox
2017-10-23 15:56:45,007 | DEBUG | node.py (1918) | force_process_ordered | Node1 processed 0 Ordered batches for instance 0 before starting catch up
2017-10-23 15:56:45,007 | DEBUG | replica.py (2130) | _remove_ordered_from_queue | Node1:1 going to remove 0 Ordered messages from outbox
2017-10-23 15:56:45,007 | DEBUG | node.py (1918) | force_process_ordered | Node1 processed 0 Ordered batches for instance 1 before starting catch up
2017-10-23 15:56:45,007 | DEBUG | node.py (2451) | processStashedOrderedReqs | Node1 processed 0 stashed ordered requests
2017-10-23 15:56:45,007 | DEBUG | monitor.py ( 183) | reset | Monitor being reset
2017-10-23 15:56:45,007 | DEBUG | node.py (1547) | preLedgerCatchUp | Node1 reverted 0 batches before starting catch up for ledger 0
2017-10-23 15:56:45,007 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 0 of size 4 with 4
2017-10-23 15:56:45,007 | DEBUG | node.py ( 918) | checkInstances | Node1 choosing to start election on the basis of count 3 and nodes {'Node2', 'Node3'}
2017-10-23 15:56:45,007 | DEBUG | node.py (2593) | send | Node1 sending message MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 2}} to all recipients: ['Node4', 'Node3', 'Node2']
2017-10-23 15:56:45,007 | DEBUG | node.py ( 832) | _ask_for_ledger_status | Node1 asking Node1 for ledger status of ledger 2
2017-10-23 15:56:45,008 | DEBUG | node.py (2593) | send | Node1 sending message MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 2}} to 1 recipients: ['Node2']
2017-10-23 15:56:45,008 | DEBUG | node.py ( 832) | _ask_for_ledger_status | Node1 asking Node2 for ledger status of ledger 2
2017-10-23 15:56:45,008 | DEBUG | node.py (2593) | send | Node1 sending message MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 2}} to 1 recipients: ['Node3']
2017-10-23 15:56:45,008 | DEBUG | node.py ( 832) | _ask_for_ledger_status | Node1 asking Node3 for ledger status of ledger 2
2017-10-23 15:56:45,008 | DEBUG | node.py (2593) | send | Node1 sending message MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 2}} to 1 recipients: ['Node4']
2017-10-23 15:56:45,008 | DEBUG | node.py ( 832) | _ask_for_ledger_status | Node1 asking Node4 for ledger status of ledger 2
2017-10-23 15:56:45,008 | DEBUG | ledger_manager.py (1006) | processStashedLedgerStatuses | Node1 going to process 0 stashed ledger statuses for ledger 2
2017-10-23 15:56:45,008 | INFO | ledger_manager.py ( 831) | catchupCompleted | Node1 completed catching up ledger 0, caught up 0 in total
2017-10-23 15:56:45,009 | DEBUG | batched.py ( 89) | flushOutBoxes | Node1 batching 2 msgs to Node4 into one transmission
2017-10-23 15:56:45,009 | TRACE | batched.py ( 90) | flushOutBoxes | messages: deque([b'{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":2}}', b'{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":2}}'])
2017-10-23 15:56:45,010 | TRACE | batched.py ( 98) | flushOutBoxes | Node1 sending payload to Node4: b'{"op":"BATCH","messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":2}}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":2}}"],"signature":null}'
2017-10-23 15:56:45,010 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message b'{"op":"BATCH","messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":2}}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":2}}"],"signature":null}' to Node4
2017-10-23 15:56:45,010 | WARNING | zstack.py ( 704) | transmit | Remote Node4 is not connected - message will not be sent immediately.If this problem does not resolve itself - check your firewall settings
2017-10-23 15:56:45,010 | DEBUG | batched.py ( 89) | flushOutBoxes | Node1 batching 2 msgs to Node3 into one transmission
2017-10-23 15:56:45,010 | TRACE | batched.py ( 90) | flushOutBoxes | messages: deque([b'{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":2}}', b'{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":2}}'])
2017-10-23 15:56:45,010 | TRACE | batched.py ( 98) | flushOutBoxes | Node1 sending payload to Node3: b'{"op":"BATCH","messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":2}}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":2}}"],"signature":null}'
2017-10-23 15:56:45,010 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message b'{"op":"BATCH","messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":2}}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":2}}"],"signature":null}' to Node3
2017-10-23 15:56:45,010 | DEBUG | batched.py ( 89) | flushOutBoxes | Node1 batching 2 msgs to Node2 into one transmission
2017-10-23 15:56:45,010 | TRACE | batched.py ( 90) | flushOutBoxes | messages: deque([b'{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":2}}', b'{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":2}}'])
2017-10-23 15:56:45,010 | TRACE | batched.py ( 98) | flushOutBoxes | Node1 sending payload to Node2: b'{"op":"BATCH","messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":2}}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":2}}"],"signature":null}'
2017-10-23 15:56:45,011 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message b'{"op":"BATCH","messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":2}}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":2}}"],"signature":null}' to Node2
2017-10-23 15:56:45,023 | TRACE | zstack.py ( 472) | _receiveFromListener | Node1 got 1 messages through listener
2017-10-23 15:56:45,023 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node3: BATCH{'messages': ['{"msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"msg_type":"LEDGER_STATUS","params":{"ledgerId":2},"op":"MESSAGE_RESPONSE"}', '{"msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"msg_type":"LEDGER_STATUS","params":{"ledgerId":2},"op":"MESSAGE_RESPONSE"}'], 'signature': None}
2017-10-23 15:56:45,024 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'op': 'BATCH', 'messages':
['{"msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"msg_type":"LEDGER_STATUS","params":{"ledgerId":2},"op":"MESSAGE_RESPONSE"}', '{"msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"msg_type":"LEDGER_STATUS","params":{"ledgerId":2},"op":"MESSAGE_RESPONSE"}'], 'signature': None}, 'Node3') 2017-10-23 15:56:45,024 | DEBUG | node.py (1328) | unpackNodeMsg | Node1 processing a batch BATCH{'messages': ['{"msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"msg_type":"LEDGER_STATUS","params":{"ledgerId":2},"op":"MESSAGE_RESPONSE"}', '{"msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"msg_type":"LEDGER_STATUS","params":{"ledgerId":2},"op":"MESSAGE_RESPONSE"}'], 'signature': None} 2017-10-23 15:56:45,024 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node3: MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,024 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg': {'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 2}, 'op': 'MESSAGE_RESPONSE'}, 'Node3') 2017-10-23 15:56:45,024 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,024 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node3: MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,024 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg': {'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 2}, 'op': 'MESSAGE_RESPONSE'}, 'Node3') 2017-10-23 15:56:45,025 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,025 | DEBUG | ledger_manager.py ( 244) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2} from Node3 2017-10-23 15:56:45,025 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 2 of size 0 with 0 2017-10-23 15:56:45,025 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 2 of size 0 with 0 2017-10-23 15:56:45,026 | DEBUG | ledger_manager.py ( 244) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'viewNo': None, 'merkleRoot': 
'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2} from Node3 2017-10-23 15:56:45,026 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 2 of size 0 with 0 2017-10-23 15:56:45,026 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 2 of size 0 with 0 2017-10-23 15:56:45,053 | TRACE | zstack.py ( 472) | _receiveFromListener | Node1 got 1 messages through listener 2017-10-23 15:56:45,053 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node2: BATCH{'messages': ['{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"op":"MESSAGE_RESPONSE","params":{"ledgerId":2}}', '{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"op":"MESSAGE_RESPONSE","params":{"ledgerId":2}}'], 'signature': None} 2017-10-23 15:56:45,053 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'op': 'BATCH', 'messages': ['{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"op":"MESSAGE_RESPONSE","params":{"ledgerId":2}}', '{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"op":"MESSAGE_RESPONSE","params":{"ledgerId":2}}'], 'signature': None}, 'Node2') 2017-10-23 15:56:45,053 | DEBUG | node.py (1328) | unpackNodeMsg | Node1 processing a batch BATCH{'messages': ['{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"op":"MESSAGE_RESPONSE","params":{"ledgerId":2}}', '{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"op":"MESSAGE_RESPONSE","params":{"ledgerId":2}}'], 'signature': None} 2017-10-23 15:56:45,054 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node2: MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,054 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'params': {'ledgerId': 2}, 'op': 'MESSAGE_RESPONSE'}, 'Node2') 2017-10-23 15:56:45,054 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,054 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node2: MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,054 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 
'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'params': {'ledgerId': 2}, 'op': 'MESSAGE_RESPONSE'}, 'Node2') 2017-10-23 15:56:45,054 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,055 | DEBUG | ledger_manager.py ( 244) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2} from Node2 2017-10-23 15:56:45,055 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 2 of size 0 with 0 2017-10-23 15:56:45,055 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 2 of size 0 with 0 2017-10-23 15:56:45,055 | DEBUG | ledger_manager.py ( 309) | processLedgerStatus | Node1 found out from {'Node2', 'Node3'} that its ledger of type 2 is latest 2017-10-23 15:56:45,055 | DEBUG | ledger_manager.py ( 312) | processLedgerStatus | Node1 found from ledger status LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2} that it does not need catchup 2017-10-23 15:56:45,055 | DEBUG | node.py (1537) | preLedgerCatchUp | Node1 going to process any ordered requests before starting catchup. 2017-10-23 15:56:45,055 | DEBUG | replica.py (2130) | _remove_ordered_from_queue | Node1:0 going to remove 0 Ordered messages from outbox 2017-10-23 15:56:45,055 | DEBUG | node.py (1918) | force_process_ordered | Node1 processed 0 Ordered batches for instance 0 before starting catch up 2017-10-23 15:56:45,056 | DEBUG | replica.py (2130) | _remove_ordered_from_queue | Node1:1 going to remove 0 Ordered messages from outbox 2017-10-23 15:56:45,056 | DEBUG | node.py (1918) | force_process_ordered | Node1 processed 0 Ordered batches for instance 1 before starting catch up 2017-10-23 15:56:45,056 | DEBUG | node.py (2451) | processStashedOrderedReqs | Node1 processed 0 stashed ordered requests 2017-10-23 15:56:45,056 | DEBUG | monitor.py ( 183) | reset | Monitor being reset 2017-10-23 15:56:45,056 | DEBUG | node.py (1547) | preLedgerCatchUp | Node1 reverted 0 batches before starting catch up for ledger 2 2017-10-23 15:56:45,056 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 2 of size 0 with 0 2017-10-23 15:56:45,056 | INFO | pool_config.py ( 33) | processLedger | processing config ledger for any POOL_CONFIGs 2017-10-23 15:56:45,056 | INFO | upgrader.py ( 145) | processLedger | Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv processing config ledger for any upgrades 2017-10-23 15:56:45,057 | DEBUG | node.py (2593) | send | Node1 sending message MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 1}} to all recipients: ['Node4', 'Node3', 'Node2'] 2017-10-23 15:56:45,057 | DEBUG | node.py ( 832) | _ask_for_ledger_status | Node1 asking Node1 for ledger status of ledger 1 2017-10-23 15:56:45,057 | DEBUG | node.py (2593) | send | Node1 sending message MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 1}} to 1 recipients: ['Node2'] 2017-10-23 15:56:45,057 | DEBUG | node.py ( 832) | _ask_for_ledger_status | Node1 asking Node2 for ledger status of ledger 1 2017-10-23 15:56:45,057 | DEBUG | node.py (2593) | send | 
Node1 sending message MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 1}} to 1 recipients: ['Node3'] 2017-10-23 15:56:45,057 | DEBUG | node.py ( 832) | _ask_for_ledger_status | Node1 asking Node3 for ledger status of ledger 1 2017-10-23 15:56:45,057 | DEBUG | node.py (2593) | send | Node1 sending message MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 1}} to 1 recipients: ['Node4'] 2017-10-23 15:56:45,057 | DEBUG | node.py ( 832) | _ask_for_ledger_status | Node1 asking Node4 for ledger status of ledger 1 2017-10-23 15:56:45,057 | DEBUG | ledger_manager.py (1006) | processStashedLedgerStatuses | Node1 going to process 0 stashed ledger statuses for ledger 1 2017-10-23 15:56:45,057 | INFO | ledger_manager.py ( 831) | catchupCompleted | Node1 completed catching up ledger 2, caught up 0 in total 2017-10-23 15:56:45,058 | DEBUG | ledger_manager.py ( 244) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2} from Node2 2017-10-23 15:56:45,058 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 2 of size 0 with 0 2017-10-23 15:56:45,058 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 2 of size 0 with 0 2017-10-23 15:56:45,059 | DEBUG | batched.py ( 89) | flushOutBoxes | Node1 batching 2 msgs to Node4 into one transmission 2017-10-23 15:56:45,060 | TRACE | batched.py ( 90) | flushOutBoxes | messages: deque([b'{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}', b'{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}']) 2017-10-23 15:56:45,060 | TRACE | batched.py ( 98) | flushOutBoxes | Node1 sending payload to Node4: b'{"op":"BATCH","messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":1}}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":1}}"],"signature":null}' 2017-10-23 15:56:45,071 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message b'{"op":"BATCH","messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":1}}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":1}}"],"signature":null}' to Node4 2017-10-23 15:56:45,071 | WARNING | zstack.py ( 704) | transmit | Remote Node4 is not connected - message will not be sent immediately.If this problem does not resolve itself - check your firewall settings 2017-10-23 15:56:45,071 | DEBUG | batched.py ( 89) | flushOutBoxes | Node1 batching 2 msgs to Node3 into one transmission 2017-10-23 15:56:45,071 | TRACE | batched.py ( 90) | flushOutBoxes | messages: deque([b'{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}', b'{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}']) 2017-10-23 15:56:45,071 | TRACE | batched.py ( 98) | flushOutBoxes | Node1 sending payload to Node3: b'{"op":"BATCH","messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":1}}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":1}}"],"signature":null}' 2017-10-23 15:56:45,071 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message 
b'{"op":"BATCH","messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":1}}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":1}}"],"signature":null}' to Node3 2017-10-23 15:56:45,071 | DEBUG | batched.py ( 89) | flushOutBoxes | Node1 batching 2 msgs to Node2 into one transmission 2017-10-23 15:56:45,071 | TRACE | batched.py ( 90) | flushOutBoxes | messages: deque([b'{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}', b'{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}']) 2017-10-23 15:56:45,072 | TRACE | batched.py ( 98) | flushOutBoxes | Node1 sending payload to Node2: b'{"op":"BATCH","messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":1}}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":1}}"],"signature":null}' 2017-10-23 15:56:45,072 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message b'{"op":"BATCH","messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":1}}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"op\\":\\"MESSAGE_REQUEST\\",\\"params\\":{\\"ledgerId\\":1}}"],"signature":null}' to Node2 2017-10-23 15:56:45,100 | TRACE | zstack.py ( 472) | _receiveFromListener | Node1 got 1 messages through listener 2017-10-23 15:56:45,103 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node2: BATCH{'messages': ['{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"op":"MESSAGE_RESPONSE","params":{"ledgerId":1}}', '{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"op":"MESSAGE_RESPONSE","params":{"ledgerId":1}}'], 'signature': None} 2017-10-23 15:56:45,103 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'op': 'BATCH', 'messages': ['{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"op":"MESSAGE_RESPONSE","params":{"ledgerId":1}}', '{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"op":"MESSAGE_RESPONSE","params":{"ledgerId":1}}'], 'signature': None}, 'Node2') 2017-10-23 15:56:45,103 | DEBUG | node.py (1328) | unpackNodeMsg | Node1 processing a batch BATCH{'messages': ['{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"op":"MESSAGE_RESPONSE","params":{"ledgerId":1}}', '{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"op":"MESSAGE_RESPONSE","params":{"ledgerId":1}}'], 'signature': None} 2017-10-23 15:56:45,103 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node2: MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,103 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 
'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'params': {'ledgerId': 1}, 'op': 'MESSAGE_RESPONSE'}, 'Node2') 2017-10-23 15:56:45,103 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,103 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node2: MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,104 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'params': {'ledgerId': 1}, 'op': 'MESSAGE_RESPONSE'}, 'Node2') 2017-10-23 15:56:45,104 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,104 | DEBUG | ledger_manager.py ( 244) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1} from Node2 2017-10-23 15:56:45,104 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 1 of size 6 with 6 2017-10-23 15:56:45,104 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 1 of size 6 with 6 2017-10-23 15:56:45,105 | DEBUG | ledger_manager.py ( 244) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1} from Node2 2017-10-23 15:56:45,105 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 1 of size 6 with 6 2017-10-23 15:56:45,105 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 1 of size 6 with 6 2017-10-23 15:56:45,107 | TRACE | zstack.py ( 472) | _receiveFromListener | Node1 got 1 messages through listener 2017-10-23 15:56:45,108 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node3: BATCH{'messages': ['{"msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"msg_type":"LEDGER_STATUS","params":{"ledgerId":1},"op":"MESSAGE_RESPONSE"}', '{"msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"msg_type":"LEDGER_STATUS","params":{"ledgerId":1},"op":"MESSAGE_RESPONSE"}', '{"msg_type":"LEDGER_STATUS","params":{"ledgerId":2},"op":"MESSAGE_REQUEST"}', '{"msg_type":"LEDGER_STATUS","params":{"ledgerId":2},"op":"MESSAGE_REQUEST"}'], 'signature': None} 2017-10-23 15:56:45,108 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'op': 'BATCH', 'messages': 
['{"msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"msg_type":"LEDGER_STATUS","params":{"ledgerId":1},"op":"MESSAGE_RESPONSE"}', '{"msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"msg_type":"LEDGER_STATUS","params":{"ledgerId":1},"op":"MESSAGE_RESPONSE"}', '{"msg_type":"LEDGER_STATUS","params":{"ledgerId":2},"op":"MESSAGE_REQUEST"}', '{"msg_type":"LEDGER_STATUS","params":{"ledgerId":2},"op":"MESSAGE_REQUEST"}'], 'signature': None}, 'Node3') 2017-10-23 15:56:45,108 | DEBUG | node.py (1328) | unpackNodeMsg | Node1 processing a batch BATCH{'messages': ['{"msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"msg_type":"LEDGER_STATUS","params":{"ledgerId":1},"op":"MESSAGE_RESPONSE"}', '{"msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"msg_type":"LEDGER_STATUS","params":{"ledgerId":1},"op":"MESSAGE_RESPONSE"}', '{"msg_type":"LEDGER_STATUS","params":{"ledgerId":2},"op":"MESSAGE_REQUEST"}', '{"msg_type":"LEDGER_STATUS","params":{"ledgerId":2},"op":"MESSAGE_REQUEST"}'], 'signature': None} 2017-10-23 15:56:45,108 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node3: MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,108 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg': {'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 1}, 'op': 'MESSAGE_RESPONSE'}, 'Node3') 2017-10-23 15:56:45,108 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,109 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node3: MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,109 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg': {'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 1}, 'op': 'MESSAGE_RESPONSE'}, 'Node3') 2017-10-23 15:56:45,109 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,109 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node3: MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,109 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'op': 'MESSAGE_REQUEST', 'params': {'ledgerId': 2}}, 'Node3') 2017-10-23 15:56:45,109 | DEBUG | 
node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,109 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node3: MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,109 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'op': 'MESSAGE_REQUEST', 'params': {'ledgerId': 2}}, 'Node3') 2017-10-23 15:56:45,109 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,110 | DEBUG | ledger_manager.py ( 244) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1} from Node3 2017-10-23 15:56:45,110 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 1 of size 6 with 6 2017-10-23 15:56:45,110 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 1 of size 6 with 6 2017-10-23 15:56:45,110 | DEBUG | ledger_manager.py ( 309) | processLedgerStatus | Node1 found out from {'Node3', 'Node2'} that its ledger of type 1 is latest 2017-10-23 15:56:45,110 | DEBUG | ledger_manager.py ( 312) | processLedgerStatus | Node1 found from ledger status LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1} that it does not need catchup 2017-10-23 15:56:45,110 | DEBUG | node.py (1537) | preLedgerCatchUp | Node1 going to process any ordered requests before starting catchup. 2017-10-23 15:56:45,111 | DEBUG | replica.py (2130) | _remove_ordered_from_queue | Node1:0 going to remove 0 Ordered messages from outbox 2017-10-23 15:56:45,111 | DEBUG | node.py (1918) | force_process_ordered | Node1 processed 0 Ordered batches for instance 0 before starting catch up 2017-10-23 15:56:45,111 | DEBUG | replica.py (2130) | _remove_ordered_from_queue | Node1:1 going to remove 0 Ordered messages from outbox 2017-10-23 15:56:45,111 | DEBUG | node.py (1918) | force_process_ordered | Node1 processed 0 Ordered batches for instance 1 before starting catch up 2017-10-23 15:56:45,111 | DEBUG | node.py (2451) | processStashedOrderedReqs | Node1 processed 0 stashed ordered requests 2017-10-23 15:56:45,111 | DEBUG | monitor.py ( 183) | reset | Monitor being reset 2017-10-23 15:56:45,111 | DEBUG | node.py (1547) | preLedgerCatchUp | Node1 reverted 0 batches before starting catch up for ledger 1 2017-10-23 15:56:45,111 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 1 of size 6 with 6 2017-10-23 15:56:45,111 | INFO | ledger_manager.py ( 831) | catchupCompleted | Node1 completed catching up ledger 1, caught up 0 in total 2017-10-23 15:56:45,111 | DEBUG | node.py (1653) | num_txns_caught_up_in_last_catchup | Node1 caught up to 0 txns in the last catchup 2017-10-23 15:56:45,111 | DEBUG | node.py (2451) | processStashedOrderedReqs | Node1 processed 0 stashed ordered requests 2017-10-23 15:56:45,111 | DEBUG | monitor.py ( 183) | reset | Monitor being reset 2017-10-23 15:56:45,112 | DEBUG | primary_selector.py ( 186) | _hasViewChangeQuorum | Node1 needs 2 ViewChangeDone messages 2017-10-23 15:56:45,112 | DEBUG | node.py (1625) | caught_up_for_current_view | Node1 does not have view change quorum for view 0 2017-10-23 15:56:45,112 | 
DEBUG | node.py (1608) | is_catchup_needed | Node1 is not caught up for the current view 0 2017-10-23 15:56:45,112 | DEBUG | node.py (1653) | num_txns_caught_up_in_last_catchup | Node1 caught up to 0 txns in the last catchup 2017-10-23 15:56:45,112 | DEBUG | node.py (1611) | is_catchup_needed | Node1 ordered till last prepared certificate 2017-10-23 15:56:45,112 | INFO | node.py (1593) | allLedgersCaughtUp | Node1 does not need any more catchups 2017-10-23 15:56:45,113 | DEBUG | primary_decider.py ( 131) | send | Node1's elector sending VIEW_CHANGE_DONE{'name': 'Node1', 'ledgerInfo': [(0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'), (1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'), (2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn')], 'viewNo': 0} 2017-10-23 15:56:45,113 | DEBUG | primary_selector.py ( 186) | _hasViewChangeQuorum | Node1 needs 1 ViewChangeDone messages 2017-10-23 15:56:45,113 | DEBUG | primary_selector.py ( 258) | _startSelection | Node1 cannot start primary selection found failure in primary verification. This can happen due to lack of appropriate ViewChangeDone messages 2017-10-23 15:56:45,113 | DEBUG | ledger_manager.py ( 244) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1} from Node3 2017-10-23 15:56:45,114 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 1 of size 6 with 6 2017-10-23 15:56:45,114 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 1 of size 6 with 6 2017-10-23 15:56:45,114 | DEBUG | node.py (2593) | send | Node1 sending message MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'params': {'ledgerId': 2}} to 1 recipients: ['Node3'] 2017-10-23 15:56:45,115 | DEBUG | node.py (2593) | send | Node1 sending message MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'params': {'ledgerId': 2}} to 1 recipients: ['Node3'] 2017-10-23 15:56:45,116 | DEBUG | node.py (2593) | send | Node1 sending message VIEW_CHANGE_DONE{'name': 'Node1', 'ledgerInfo': [(0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'), (1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'), (2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn')], 'viewNo': 0} to all recipients: ['Node4', 'Node3', 'Node2'] 2017-10-23 15:56:45,116 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message b'{"name":"Node1","ledgerInfo":[[0,4,"6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA"],[1,6,"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4"],[2,0,"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn"]],"viewNo":0,"op":"VIEW_CHANGE_DONE"}' to Node4 2017-10-23 15:56:45,116 | WARNING | zstack.py ( 704) | transmit | Remote Node4 is not connected - message will not be sent immediately.If this problem does not resolve itself - check your firewall settings 2017-10-23 15:56:45,116 | TRACE | batched.py ( 85) | flushOutBoxes | Node1 sending msg b'{"name":"Node1","ledgerInfo":[[0,4,"6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA"],[1,6,"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4"],[2,0,"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn"]],"viewNo":0,"op":"VIEW_CHANGE_DONE"}' to Node4 2017-10-23 15:56:45,116 | DEBUG | batched.py ( 89) | 
flushOutBoxes | Node1 batching 3 msgs to Node3 into one transmission 2017-10-23 15:56:45,117 | TRACE | batched.py ( 90) | flushOutBoxes | messages: deque([b'{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"params":{"ledgerId":2},"op":"MESSAGE_RESPONSE"}', b'{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"params":{"ledgerId":2},"op":"MESSAGE_RESPONSE"}', b'{"name":"Node1","ledgerInfo":[[0,4,"6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA"],[1,6,"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4"],[2,0,"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn"]],"viewNo":0,"op":"VIEW_CHANGE_DONE"}']) 2017-10-23 15:56:45,117 | TRACE | batched.py ( 98) | flushOutBoxes | Node1 sending payload to Node3: b'{"op":"BATCH","messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":2,\\"merkleRoot\\":\\"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":0,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":2},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":2,\\"merkleRoot\\":\\"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":0,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":2},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"name\\":\\"Node1\\",\\"ledgerInfo\\":[[0,4,\\"6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA\\"],[1,6,\\"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4\\"],[2,0,\\"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn\\"]],\\"viewNo\\":0,\\"op\\":\\"VIEW_CHANGE_DONE\\"}"],"signature":null}' 2017-10-23 15:56:45,117 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message b'{"op":"BATCH","messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":2,\\"merkleRoot\\":\\"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":0,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":2},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":2,\\"merkleRoot\\":\\"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":0,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":2},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"name\\":\\"Node1\\",\\"ledgerInfo\\":[[0,4,\\"6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA\\"],[1,6,\\"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4\\"],[2,0,\\"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn\\"]],\\"viewNo\\":0,\\"op\\":\\"VIEW_CHANGE_DONE\\"}"],"signature":null}' to Node3 2017-10-23 15:56:45,117 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message b'{"name":"Node1","ledgerInfo":[[0,4,"6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA"],[1,6,"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4"],[2,0,"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn"]],"viewNo":0,"op":"VIEW_CHANGE_DONE"}' to Node2 2017-10-23 15:56:45,117 | TRACE | batched.py ( 85) | flushOutBoxes | Node1 sending msg b'{"name":"Node1","ledgerInfo":[[0,4,"6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA"],[1,6,"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4"],[2,0,"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn"]],"viewNo":0,"op":"VIEW_CHANGE_DONE"}' to Node2 2017-10-23 15:56:45,139 | TRACE | zstack.py ( 472) | _receiveFromListener | Node1 got 1 messages through listener 2017-10-23 15:56:45,140 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node2: BATCH{'messages': 
['{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":2}}', '{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":2}}'], 'signature': None} 2017-10-23 15:56:45,140 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'op': 'BATCH', 'messages': ['{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":2}}', '{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":2}}'], 'signature': None}, 'Node2') 2017-10-23 15:56:45,140 | DEBUG | node.py (1328) | unpackNodeMsg | Node1 processing a batch BATCH{'messages': ['{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":2}}', '{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":2}}'], 'signature': None} 2017-10-23 15:56:45,140 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node2: MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,140 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'op': 'MESSAGE_REQUEST', 'params': {'ledgerId': 2}}, 'Node2') 2017-10-23 15:56:45,140 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,140 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node2: MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,140 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'op': 'MESSAGE_REQUEST', 'params': {'ledgerId': 2}}, 'Node2') 2017-10-23 15:56:45,140 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,141 | DEBUG | node.py (2593) | send | Node1 sending message MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'params': {'ledgerId': 2}} to 1 recipients: ['Node2'] 2017-10-23 15:56:45,142 | DEBUG | node.py (2593) | send | Node1 sending message MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'params': {'ledgerId': 2}} to 1 recipients: ['Node2'] 2017-10-23 15:56:45,145 | DEBUG | batched.py ( 89) | flushOutBoxes | Node1 batching 2 msgs to Node2 into one transmission 2017-10-23 15:56:45,145 | TRACE | batched.py ( 90) | flushOutBoxes | messages: deque([b'{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"params":{"ledgerId":2},"op":"MESSAGE_RESPONSE"}', b'{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"params":{"ledgerId":2},"op":"MESSAGE_RESPONSE"}']) 2017-10-23 15:56:45,145 | TRACE | batched.py ( 98) | flushOutBoxes | Node1 sending payload to Node2: 
b'{"op":"BATCH","messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":2,\\"merkleRoot\\":\\"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":0,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":2},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":2,\\"merkleRoot\\":\\"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":0,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":2},\\"op\\":\\"MESSAGE_RESPONSE\\"}"],"signature":null}' 2017-10-23 15:56:45,145 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message b'{"op":"BATCH","messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":2,\\"merkleRoot\\":\\"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":0,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":2},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":2,\\"merkleRoot\\":\\"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":0,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":2},\\"op\\":\\"MESSAGE_RESPONSE\\"}"],"signature":null}' to Node2 2017-10-23 15:56:45,147 | TRACE | zstack.py ( 472) | _receiveFromListener | Node1 got 1 messages through listener 2017-10-23 15:56:45,147 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node3: BATCH{'messages': ['{"msg_type":"LEDGER_STATUS","params":{"ledgerId":1},"op":"MESSAGE_REQUEST"}', '{"msg_type":"LEDGER_STATUS","params":{"ledgerId":1},"op":"MESSAGE_REQUEST"}'], 'signature': None} 2017-10-23 15:56:45,147 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'op': 'BATCH', 'messages': ['{"msg_type":"LEDGER_STATUS","params":{"ledgerId":1},"op":"MESSAGE_REQUEST"}', '{"msg_type":"LEDGER_STATUS","params":{"ledgerId":1},"op":"MESSAGE_REQUEST"}'], 'signature': None}, 'Node3') 2017-10-23 15:56:45,147 | DEBUG | node.py (1328) | unpackNodeMsg | Node1 processing a batch BATCH{'messages': ['{"msg_type":"LEDGER_STATUS","params":{"ledgerId":1},"op":"MESSAGE_REQUEST"}', '{"msg_type":"LEDGER_STATUS","params":{"ledgerId":1},"op":"MESSAGE_REQUEST"}'], 'signature': None} 2017-10-23 15:56:45,148 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node3: MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,148 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'op': 'MESSAGE_REQUEST', 'params': {'ledgerId': 1}}, 'Node3') 2017-10-23 15:56:45,148 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,148 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node3: MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,148 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'op': 'MESSAGE_REQUEST', 'params': {'ledgerId': 1}}, 'Node3') 2017-10-23 15:56:45,148 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,148 | DEBUG | node.py (2593) | send | Node1 sending message MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 
'ledgerId': 1}, 'params': {'ledgerId': 1}} to 1 recipients: ['Node3'] 2017-10-23 15:56:45,149 | DEBUG | node.py (2593) | send | Node1 sending message MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'params': {'ledgerId': 1}} to 1 recipients: ['Node3'] 2017-10-23 15:56:45,150 | DEBUG | batched.py ( 89) | flushOutBoxes | Node1 batching 2 msgs to Node3 into one transmission 2017-10-23 15:56:45,150 | TRACE | batched.py ( 90) | flushOutBoxes | messages: deque([b'{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"params":{"ledgerId":1},"op":"MESSAGE_RESPONSE"}', b'{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"params":{"ledgerId":1},"op":"MESSAGE_RESPONSE"}']) 2017-10-23 15:56:45,150 | TRACE | batched.py ( 98) | flushOutBoxes | Node1 sending payload to Node3: b'{"op":"BATCH","messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":1,\\"merkleRoot\\":\\"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":6,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":1,\\"merkleRoot\\":\\"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":6,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_RESPONSE\\"}"],"signature":null}' 2017-10-23 15:56:45,151 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message b'{"op":"BATCH","messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":1,\\"merkleRoot\\":\\"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":6,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":1,\\"merkleRoot\\":\\"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":6,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_RESPONSE\\"}"],"signature":null}' to Node3 2017-10-23 15:56:45,185 | TRACE | zstack.py ( 472) | _receiveFromListener | Node1 got 1 messages through listener 2017-10-23 15:56:45,185 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node2: BATCH{'messages': ['{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}', '{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}'], 'signature': None} 2017-10-23 15:56:45,185 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'op': 'BATCH', 'messages': ['{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}', '{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}'], 'signature': None}, 'Node2') 2017-10-23 15:56:45,185 | DEBUG | node.py (1328) | unpackNodeMsg | Node1 processing a batch BATCH{'messages': ['{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}', '{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}'], 'signature': None} 2017-10-23 15:56:45,186 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node2: MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,186 | INFO | 
node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'op': 'MESSAGE_REQUEST', 'params': {'ledgerId': 1}}, 'Node2') 2017-10-23 15:56:45,186 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,186 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node2: MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,186 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'op': 'MESSAGE_REQUEST', 'params': {'ledgerId': 1}}, 'Node2') 2017-10-23 15:56:45,186 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,187 | DEBUG | node.py (2593) | send | Node1 sending message MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'params': {'ledgerId': 1}} to 1 recipients: ['Node2'] 2017-10-23 15:56:45,187 | DEBUG | node.py (2593) | send | Node1 sending message MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'params': {'ledgerId': 1}} to 1 recipients: ['Node2'] 2017-10-23 15:56:45,188 | DEBUG | batched.py ( 89) | flushOutBoxes | Node1 batching 2 msgs to Node2 into one transmission 2017-10-23 15:56:45,188 | TRACE | batched.py ( 90) | flushOutBoxes | messages: deque([b'{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"params":{"ledgerId":1},"op":"MESSAGE_RESPONSE"}', b'{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"params":{"ledgerId":1},"op":"MESSAGE_RESPONSE"}']) 2017-10-23 15:56:45,188 | TRACE | batched.py ( 98) | flushOutBoxes | Node1 sending payload to Node2: b'{"op":"BATCH","messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":1,\\"merkleRoot\\":\\"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":6,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":1,\\"merkleRoot\\":\\"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":6,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_RESPONSE\\"}"],"signature":null}' 2017-10-23 15:56:45,189 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message b'{"op":"BATCH","messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":1,\\"merkleRoot\\":\\"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":6,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":1,\\"merkleRoot\\":\\"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":6,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_RESPONSE\\"}"],"signature":null}' to Node2 2017-10-23 15:56:45,204 | TRACE | zstack.py ( 472) | _receiveFromListener | Node1 got 1 messages through listener 2017-10-23 
15:56:45,204 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node3: VIEW_CHANGE_DONE{'name': 'Node1', 'ledgerInfo': [[0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'], [1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'], [2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn']], 'viewNo': 0} 2017-10-23 15:56:45,204 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'name': 'Node1', 'ledgerInfo': [[0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'], [1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'], [2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn']], 'op': 'VIEW_CHANGE_DONE', 'viewNo': 0}, 'Node3') 2017-10-23 15:56:45,204 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox VIEW_CHANGE_DONE{'name': 'Node1', 'ledgerInfo': [[0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'], [1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'], [2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn']], 'viewNo': 0} 2017-10-23 15:56:45,205 | DEBUG | node.py (1261) | sendToElector | Node1 sending message to elector: (VIEW_CHANGE_DONE{'name': 'Node1', 'ledgerInfo': [[0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'], [1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'], [2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn']], 'viewNo': 0}, 'Node3') 2017-10-23 15:56:45,206 | DEBUG | primary_selector.py ( 103) | _processViewChangeDoneMessage | Node1's primary selector started processing of ViewChangeDone msg from Node3 : VIEW_CHANGE_DONE{'name': 'Node1', 'ledgerInfo': [[0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'], [1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'], [2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn']], 'viewNo': 0} 2017-10-23 15:56:45,207 | INFO | primary_selector.py ( 192) | _hasViewChangeQuorum | Node1 got view change quorum (2 >= 2) 2017-10-23 15:56:45,208 | DEBUG | primary_selector.py ( 209) | has_view_change_from_primary | Node1 received ViewChangeDone from primary Node1 2017-10-23 15:56:45,208 | DEBUG | primary_selector.py ( 231) | has_sufficient_same_view_change_done_messages | Node1 found acceptable primary Node1 and ledger info ((0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'), (1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'), (2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn')) 2017-10-23 15:56:45,209 | DEBUG | primary_selector.py ( 272) | _startSelection | Node1 starting selection 2017-10-23 15:56:45,210 | DISPLAY | primary_selector.py ( 284) | _startSelection | Node1:0 selected primary Node1:0 for instance 0 (view 0) 2017-10-23 15:56:45,210 | INFO | node.py ( 482) | start_participating | Node1 started participating 2017-10-23 15:56:45,210 | INFO | replica.py ( 393) | primaryName | Node1:0 setting primaryName for view no 0 to: Node1:0 2017-10-23 15:56:45,211 | DEBUG | replica.py (1625) | _gc | Node1:0 cleaning up till (0, 0) 2017-10-23 15:56:45,211 | DEBUG | replica.py (1640) | _gc | Node1:0 found 0 3-phase keys to clean 2017-10-23 15:56:45,211 | DEBUG | replica.py (1642) | _gc | Node1:0 found 0 request keys to clean 2017-10-23 15:56:45,211 | INFO | replica.py ( 300) | h | Node1:0 set watermarks as 0 300 2017-10-23 15:56:45,211 | DISPLAY | primary_selector.py ( 307) | _startSelection | Node1:0 declares view change 0 as completed for instance 0, new primary is Node1:0, ledger info is [(0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'), (1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'), (2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn')] 2017-10-23 
15:56:45,213 | DISPLAY | primary_selector.py ( 284) | _startSelection | Node1:1 selected primary Node2:1 for instance 1 (view 0) 2017-10-23 15:56:45,213 | INFO | replica.py ( 393) | primaryName | Node1:1 setting primaryName for view no 0 to: Node2:1 2017-10-23 15:56:45,213 | DEBUG | replica.py (1625) | _gc | Node1:1 cleaning up till (0, 0) 2017-10-23 15:56:45,213 | DEBUG | replica.py (1640) | _gc | Node1:1 found 0 3-phase keys to clean 2017-10-23 15:56:45,213 | DEBUG | replica.py (1642) | _gc | Node1:1 found 0 request keys to clean 2017-10-23 15:56:45,213 | INFO | replica.py ( 300) | h | Node1:1 set watermarks as 0 300 2017-10-23 15:56:45,213 | DEBUG | replica.py ( 494) | _setup_for_non_master | Node1:1 Setting last ordered for non-master as (0, 0) 2017-10-23 15:56:45,213 | DEBUG | replica.py ( 310) | last_ordered_3pc | Node1:1 set last ordered as (0, 0) 2017-10-23 15:56:45,214 | DISPLAY | primary_selector.py ( 307) | _startSelection | Node1:1 declares view change 0 as completed for instance 1, new primary is Node2:1, ledger info is [(0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'), (1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'), (2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn')] 2017-10-23 15:56:45,230 | TRACE | zstack.py ( 472) | _receiveFromListener | Node1 got 1 messages through listener 2017-10-23 15:56:45,230 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node2: VIEW_CHANGE_DONE{'name': 'Node1', 'ledgerInfo': [[0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'], [1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'], [2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn']], 'viewNo': 0} 2017-10-23 15:56:45,230 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'name': 'Node1', 'ledgerInfo': [[0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'], [1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'], [2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn']], 'op': 'VIEW_CHANGE_DONE', 'viewNo': 0}, 'Node2') 2017-10-23 15:56:45,230 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox VIEW_CHANGE_DONE{'name': 'Node1', 'ledgerInfo': [[0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'], [1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'], [2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn']], 'viewNo': 0} 2017-10-23 15:56:45,231 | DEBUG | node.py (1261) | sendToElector | Node1 sending message to elector: (VIEW_CHANGE_DONE{'name': 'Node1', 'ledgerInfo': [[0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'], [1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'], [2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn']], 'viewNo': 0}, 'Node2') 2017-10-23 15:56:45,232 | DEBUG | primary_selector.py ( 103) | _processViewChangeDoneMessage | Node1's primary selector started processing of ViewChangeDone msg from Node2 : VIEW_CHANGE_DONE{'name': 'Node1', 'ledgerInfo': [[0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'], [1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'], [2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn']], 'viewNo': 0} 2017-10-23 15:56:45,233 | DEBUG | message_processor.py ( 28) | discard | Node1 discarding message VIEW_CHANGE_DONE{'name': 'Node1', 'ledgerInfo': [[0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'], [1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'], [2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn']], 'viewNo': 0} because it already decided primary which is Node1:0 2017-10-23 15:56:45,562 | TRACE | zstack.py ( 472) | _receiveFromListener | Node1 got 1 
messages through listener 2017-10-23 15:56:45,562 | DEBUG | zstack.py ( 652) | handlePingPong | Node1 got ping from Node4 2017-10-23 15:56:45,563 | DEBUG | zstack.py ( 643) | sendPingPong | Node1 will be sending in batch 2017-10-23 15:56:45,564 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message b'po' to Node4 2017-10-23 15:56:45,564 | TRACE | batched.py ( 85) | flushOutBoxes | Node1 sending msg b'po' to Node4 2017-10-23 15:56:45,611 | TRACE | zstack.py ( 472) | _receiveFromListener | Node1 got 1 messages through listener 2017-10-23 15:56:45,611 | DEBUG | zstack.py ( 658) | handlePingPong | Node1 got pong from Node4 2017-10-23 15:56:45,611 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node4: BATCH{'messages': ['{"msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"msg_type":"LEDGER_STATUS","op":"MESSAGE_RESPONSE","params":{"ledgerId":2}}', '{"msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"msg_type":"LEDGER_STATUS","op":"MESSAGE_RESPONSE","params":{"ledgerId":2}}', '{"msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"msg_type":"LEDGER_STATUS","op":"MESSAGE_RESPONSE","params":{"ledgerId":1}}', '{"msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"msg_type":"LEDGER_STATUS","op":"MESSAGE_RESPONSE","params":{"ledgerId":1}}'], 'signature': None} 2017-10-23 15:56:45,612 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'op': 'BATCH', 'messages': ['{"msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"msg_type":"LEDGER_STATUS","op":"MESSAGE_RESPONSE","params":{"ledgerId":2}}', '{"msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"msg_type":"LEDGER_STATUS","op":"MESSAGE_RESPONSE","params":{"ledgerId":2}}', '{"msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"msg_type":"LEDGER_STATUS","op":"MESSAGE_RESPONSE","params":{"ledgerId":1}}', '{"msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"msg_type":"LEDGER_STATUS","op":"MESSAGE_RESPONSE","params":{"ledgerId":1}}'], 'signature': None}, 'Node4') 2017-10-23 15:56:45,612 | DEBUG | node.py (1328) | unpackNodeMsg | Node1 processing a batch BATCH{'messages': ['{"msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"msg_type":"LEDGER_STATUS","op":"MESSAGE_RESPONSE","params":{"ledgerId":2}}', '{"msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"msg_type":"LEDGER_STATUS","op":"MESSAGE_RESPONSE","params":{"ledgerId":2}}', '{"msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"msg_type":"LEDGER_STATUS","op":"MESSAGE_RESPONSE","params":{"ledgerId":1}}', '{"msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"msg_type":"LEDGER_STATUS","op":"MESSAGE_RESPONSE","params":{"ledgerId":1}}'], 'signature': None} 2017-10-23 15:56:45,612 | DEBUG | node.py (1313) | 
validateNodeMsg | Node1 received node message from Node4: MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,612 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg': {'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 2}, 'op': 'MESSAGE_RESPONSE'}, 'Node4') 2017-10-23 15:56:45,612 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,612 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node4: MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,612 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg': {'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 2}, 'op': 'MESSAGE_RESPONSE'}, 'Node4') 2017-10-23 15:56:45,612 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,613 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node4: MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,613 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg': {'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 1}, 'op': 'MESSAGE_RESPONSE'}, 'Node4') 2017-10-23 15:56:45,613 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,613 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node4: MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,613 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg': {'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 1}, 'op': 'MESSAGE_RESPONSE'}, 'Node4') 2017-10-23 15:56:45,613 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': {'viewNo': 
None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,613 | DEBUG | ledger_manager.py ( 244) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2} from Node4 2017-10-23 15:56:45,614 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 2 of size 0 with 0 2017-10-23 15:56:45,614 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 2 of size 0 with 0 2017-10-23 15:56:45,614 | DEBUG | ledger_manager.py ( 309) | processLedgerStatus | Node1 found out from {'Node4', 'Node2'} that its ledger of type 2 is latest 2017-10-23 15:56:45,614 | DEBUG | ledger_manager.py ( 244) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2} from Node4 2017-10-23 15:56:45,614 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 2 of size 0 with 0 2017-10-23 15:56:45,614 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 2 of size 0 with 0 2017-10-23 15:56:45,615 | DEBUG | ledger_manager.py ( 309) | processLedgerStatus | Node1 found out from {'Node4', 'Node2'} that its ledger of type 2 is latest 2017-10-23 15:56:45,615 | DEBUG | ledger_manager.py ( 244) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1} from Node4 2017-10-23 15:56:45,615 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 1 of size 6 with 6 2017-10-23 15:56:45,615 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 1 of size 6 with 6 2017-10-23 15:56:45,615 | DEBUG | ledger_manager.py ( 309) | processLedgerStatus | Node1 found out from {'Node4', 'Node3'} that its ledger of type 1 is latest 2017-10-23 15:56:45,616 | DEBUG | ledger_manager.py ( 244) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1} from Node4 2017-10-23 15:56:45,616 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 1 of size 6 with 6 2017-10-23 15:56:45,616 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 1 of size 6 with 6 2017-10-23 15:56:45,616 | DEBUG | ledger_manager.py ( 309) | processLedgerStatus | Node1 found out from {'Node4', 'Node3'} that its ledger of type 1 is latest 2017-10-23 15:56:45,622 | DEBUG | keep_in_touch.py ( 68) | conns | Node1's connections changed from {'Node2', 'Node3'} to {'Node4', 'Node2', 'Node3'} 2017-10-23 15:56:45,622 | INFO | keep_in_touch.py ( 96) | _connsChanged | Node1 now connected to Node4 2017-10-23 15:56:45,622 | DEBUG | motor.py ( 34) | set_status | Node1 changing status from started_hungry to started 2017-10-23 15:56:45,622 | DEBUG | node.py ( 918) | checkInstances | Node1 choosing to start election on the basis of count 4 and nodes {'Node4', 'Node2', 'Node3'} 2017-10-23 15:56:45,622 | DEBUG | node.py ( 879) | send_current_state_to_lagging_node | Node1 sending current state CURRENT_STATE{'primary': [VIEW_CHANGE_DONE{'name': 'Node1', 'ledgerInfo': ((0, 4, 
'6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'), (1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'), (2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn')), 'viewNo': 0}], 'viewNo': 0} to lagged node Node4 2017-10-23 15:56:45,622 | DEBUG | node.py (2593) | send | Node1 sending message CURRENT_STATE{'primary': [VIEW_CHANGE_DONE{'name': 'Node1', 'ledgerInfo': ((0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'), (1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'), (2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn')), 'viewNo': 0}], 'viewNo': 0} to 1 recipients: ['Node4'] 2017-10-23 15:56:45,623 | DEBUG | node.py (2593) | send | Node1 sending message LEDGER_STATUS{'viewNo': None, 'merkleRoot': '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA', 'txnSeqNo': 4, 'ppSeqNo': None, 'ledgerId': 0} to 1 recipients: ['Node4'] 2017-10-23 15:56:45,623 | DEBUG | node.py (2593) | send | Node1 sending message LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1} to 1 recipients: ['Node4'] 2017-10-23 15:56:45,623 | DEBUG | node.py (2593) | send | Node1 sending message LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2} to 1 recipients: ['Node4'] 2017-10-23 15:56:45,626 | DEBUG | batched.py ( 89) | flushOutBoxes | Node1 batching 4 msgs to Node4 into one transmission 2017-10-23 15:56:45,626 | TRACE | batched.py ( 90) | flushOutBoxes | messages: deque([b'{"primary":[{"ledgerInfo":[[0,4,"6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA"],[1,6,"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4"],[2,0,"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn"]],"name":"Node1","viewNo":0}],"op":"CURRENT_STATE","viewNo":0}', b'{"op":"LEDGER_STATUS","ppSeqNo":null,"viewNo":null,"merkleRoot":"6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA","txnSeqNo":4,"ledgerId":0}', b'{"op":"LEDGER_STATUS","ppSeqNo":null,"viewNo":null,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","txnSeqNo":6,"ledgerId":1}', b'{"op":"LEDGER_STATUS","ppSeqNo":null,"viewNo":null,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","txnSeqNo":0,"ledgerId":2}']) 2017-10-23 15:56:45,626 | TRACE | batched.py ( 98) | flushOutBoxes | Node1 sending payload to Node4: b'{"op":"BATCH","messages":["{\\"primary\\":[{\\"ledgerInfo\\":[[0,4,\\"6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA\\"],[1,6,\\"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4\\"],[2,0,\\"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn\\"]],\\"name\\":\\"Node1\\",\\"viewNo\\":0}],\\"op\\":\\"CURRENT_STATE\\",\\"viewNo\\":0}","{\\"op\\":\\"LEDGER_STATUS\\",\\"ppSeqNo\\":null,\\"viewNo\\":null,\\"merkleRoot\\":\\"6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA\\",\\"txnSeqNo\\":4,\\"ledgerId\\":0}","{\\"op\\":\\"LEDGER_STATUS\\",\\"ppSeqNo\\":null,\\"viewNo\\":null,\\"merkleRoot\\":\\"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4\\",\\"txnSeqNo\\":6,\\"ledgerId\\":1}","{\\"op\\":\\"LEDGER_STATUS\\",\\"ppSeqNo\\":null,\\"viewNo\\":null,\\"merkleRoot\\":\\"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn\\",\\"txnSeqNo\\":0,\\"ledgerId\\":2}"],"signature":null}' 2017-10-23 15:56:45,629 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message 
b'{"op":"BATCH","messages":["{\\"primary\\":[{\\"ledgerInfo\\":[[0,4,\\"6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA\\"],[1,6,\\"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4\\"],[2,0,\\"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn\\"]],\\"name\\":\\"Node1\\",\\"viewNo\\":0}],\\"op\\":\\"CURRENT_STATE\\",\\"viewNo\\":0}","{\\"op\\":\\"LEDGER_STATUS\\",\\"ppSeqNo\\":null,\\"viewNo\\":null,\\"merkleRoot\\":\\"6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA\\",\\"txnSeqNo\\":4,\\"ledgerId\\":0}","{\\"op\\":\\"LEDGER_STATUS\\",\\"ppSeqNo\\":null,\\"viewNo\\":null,\\"merkleRoot\\":\\"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4\\",\\"txnSeqNo\\":6,\\"ledgerId\\":1}","{\\"op\\":\\"LEDGER_STATUS\\",\\"ppSeqNo\\":null,\\"viewNo\\":null,\\"merkleRoot\\":\\"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn\\",\\"txnSeqNo\\":0,\\"ledgerId\\":2}"],"signature":null}' to Node4 2017-10-23 15:56:45,643 | TRACE | zstack.py ( 472) | _receiveFromListener | Node1 got 1 messages through listener 2017-10-23 15:56:45,644 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node4: LEDGER_STATUS{'viewNo': None, 'merkleRoot': '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA', 'txnSeqNo': 4, 'ppSeqNo': None, 'ledgerId': 0} 2017-10-23 15:56:45,644 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'op': 'LEDGER_STATUS', 'ppSeqNo': None, 'viewNo': None, 'merkleRoot': '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA', 'txnSeqNo': 4, 'ledgerId': 0}, 'Node4') 2017-10-23 15:56:45,644 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox LEDGER_STATUS{'viewNo': None, 'merkleRoot': '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA', 'txnSeqNo': 4, 'ppSeqNo': None, 'ledgerId': 0} 2017-10-23 15:56:45,644 | DEBUG | ledger_manager.py ( 244) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'viewNo': None, 'merkleRoot': '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA', 'txnSeqNo': 4, 'ppSeqNo': None, 'ledgerId': 0} from Node4 2017-10-23 15:56:45,645 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 0 of size 4 with 4 2017-10-23 15:56:45,645 | DEBUG | ledger_manager.py ( 965) | _compareLedger | Node1 comparing its ledger 0 of size 4 with 4 2017-10-23 15:56:45,702 | TRACE | zstack.py ( 472) | _receiveFromListener | Node1 got 1 messages through listener 2017-10-23 15:56:45,702 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node4: BATCH{'messages': ['{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":2}}', '{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":2}}', '{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}', '{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}', '{"op":"VIEW_CHANGE_DONE","ledgerInfo":[[0,4,"6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA"],[1,6,"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4"],[2,0,"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn"]],"viewNo":0,"name":"Node1"}'], 'signature': None} 2017-10-23 15:56:45,703 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'op': 'BATCH', 'messages': ['{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":2}}', '{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":2}}', '{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}', '{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}', 
'{"op":"VIEW_CHANGE_DONE","ledgerInfo":[[0,4,"6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA"],[1,6,"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4"],[2,0,"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn"]],"viewNo":0,"name":"Node1"}'], 'signature': None}, 'Node4') 2017-10-23 15:56:45,703 | DEBUG | node.py (1328) | unpackNodeMsg | Node1 processing a batch BATCH{'messages': ['{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":2}}', '{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":2}}', '{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}', '{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}', '{"op":"VIEW_CHANGE_DONE","ledgerInfo":[[0,4,"6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA"],[1,6,"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4"],[2,0,"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn"]],"viewNo":0,"name":"Node1"}'], 'signature': None} 2017-10-23 15:56:45,703 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node4: MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,703 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'op': 'MESSAGE_REQUEST', 'params': {'ledgerId': 2}}, 'Node4') 2017-10-23 15:56:45,703 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,703 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node4: MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,703 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'op': 'MESSAGE_REQUEST', 'params': {'ledgerId': 2}}, 'Node4') 2017-10-23 15:56:45,703 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 2}} 2017-10-23 15:56:45,704 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node4: MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,704 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'op': 'MESSAGE_REQUEST', 'params': {'ledgerId': 1}}, 'Node4') 2017-10-23 15:56:45,704 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,704 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node4: MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,704 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'op': 'MESSAGE_REQUEST', 'params': {'ledgerId': 1}}, 'Node4') 2017-10-23 15:56:45,704 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_REQUEST{'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 1}} 2017-10-23 15:56:45,704 | DEBUG | node.py (1313) | validateNodeMsg | Node1 received node message from Node4: VIEW_CHANGE_DONE{'name': 'Node1', 'ledgerInfo': [[0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'], [1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'], [2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn']], 'viewNo': 0} 2017-10-23 15:56:45,704 | INFO | node.py (1275) | handleOneNodeMsg | Node1 msg validated ({'name': 'Node1', 'op': 'VIEW_CHANGE_DONE', 'ledgerInfo': 
[[0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'], [1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'], [2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn']], 'viewNo': 0}, 'Node4') 2017-10-23 15:56:45,704 | DEBUG | node.py (1342) | postToNodeInBox | Node1 appending to nodeInbox VIEW_CHANGE_DONE{'name': 'Node1', 'ledgerInfo': [[0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'], [1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'], [2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn']], 'viewNo': 0} 2017-10-23 15:56:45,705 | DEBUG | node.py (2593) | send | Node1 sending message MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'params': {'ledgerId': 2}} to 1 recipients: ['Node4'] 2017-10-23 15:56:45,705 | DEBUG | node.py (2593) | send | Node1 sending message MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'txnSeqNo': 0, 'ppSeqNo': None, 'ledgerId': 2}, 'params': {'ledgerId': 2}} to 1 recipients: ['Node4'] 2017-10-23 15:56:45,706 | DEBUG | node.py (2593) | send | Node1 sending message MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'params': {'ledgerId': 1}} to 1 recipients: ['Node4'] 2017-10-23 15:56:45,706 | DEBUG | node.py (2593) | send | Node1 sending message MESSAGE_RESPONSE{'msg_type': 'LEDGER_STATUS', 'msg': LEDGER_STATUS{'viewNo': None, 'merkleRoot': 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4', 'txnSeqNo': 6, 'ppSeqNo': None, 'ledgerId': 1}, 'params': {'ledgerId': 1}} to 1 recipients: ['Node4'] 2017-10-23 15:56:45,707 | DEBUG | node.py (1261) | sendToElector | Node1 sending message to elector: (VIEW_CHANGE_DONE{'name': 'Node1', 'ledgerInfo': [[0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'], [1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'], [2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn']], 'viewNo': 0}, 'Node4') 2017-10-23 15:56:45,708 | DEBUG | primary_selector.py ( 103) | _processViewChangeDoneMessage | Node1's primary selector started processing of ViewChangeDone msg from Node4 : VIEW_CHANGE_DONE{'name': 'Node1', 'ledgerInfo': [[0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'], [1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'], [2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn']], 'viewNo': 0} 2017-10-23 15:56:45,708 | DEBUG | message_processor.py ( 28) | discard | Node1 discarding message VIEW_CHANGE_DONE{'name': 'Node1', 'ledgerInfo': [[0, 4, '6mQmSGzvyAeSpp5E7rBcYyAwgim9pTdggDXwL4quw8HA'], [1, 6, 'HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4'], [2, 0, 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn']], 'viewNo': 0} because it already decided primary which is Node1:0 2017-10-23 15:56:45,708 | DEBUG | batched.py ( 89) | flushOutBoxes | Node1 batching 4 msgs to Node4 into one transmission 2017-10-23 15:56:45,708 | TRACE | batched.py ( 90) | flushOutBoxes | messages: deque([b'{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"params":{"ledgerId":2},"op":"MESSAGE_RESPONSE"}', 
b'{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":2,"merkleRoot":"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn","ppSeqNo":null,"txnSeqNo":0,"viewNo":null},"params":{"ledgerId":2},"op":"MESSAGE_RESPONSE"}', b'{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"params":{"ledgerId":1},"op":"MESSAGE_RESPONSE"}', b'{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":1,"merkleRoot":"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4","ppSeqNo":null,"txnSeqNo":6,"viewNo":null},"params":{"ledgerId":1},"op":"MESSAGE_RESPONSE"}']) 2017-10-23 15:56:45,708 | TRACE | batched.py ( 98) | flushOutBoxes | Node1 sending payload to Node4: b'{"op":"BATCH","messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":2,\\"merkleRoot\\":\\"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":0,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":2},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":2,\\"merkleRoot\\":\\"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":0,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":2},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":1,\\"merkleRoot\\":\\"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":6,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":1,\\"merkleRoot\\":\\"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":6,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_RESPONSE\\"}"],"signature":null}' 2017-10-23 15:56:45,709 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message b'{"op":"BATCH","messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":2,\\"merkleRoot\\":\\"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":0,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":2},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":2,\\"merkleRoot\\":\\"GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":0,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":2},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":1,\\"merkleRoot\\":\\"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":6,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":1,\\"merkleRoot\\":\\"HPjbpQSVy894XiscZWtYK2vYyZishn1idyFLSCAEwr4\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":6,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_RESPONSE\\"}"],"signature":null}' to Node4 2017-10-23 15:56:46,891 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 1 2017-10-23 15:56:46,891 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 15:56:46,892 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 15:56:46,892 | DEBUG | monitor.py ( 290) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 
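Editor's note: the batched.py and zstack.py entries above show Node1 queueing several already-serialized messages for a peer and flushing them as a single BATCH envelope of the form {"op": "BATCH", "messages": [...], "signature": null}. The following is a minimal illustrative sketch of that envelope pattern using only the standard json module; the helper names are made up for illustration and this is not the indy-plenum implementation.

    import json

    def make_batch(messages):
        # Wrap already-serialized message strings in one BATCH envelope, mirroring
        # the {"op": "BATCH", "messages": [...], "signature": null} payloads above.
        return json.dumps({"op": "BATCH", "messages": messages, "signature": None})

    def unpack_batch(payload):
        # Parse a BATCH payload back into the individual message dicts it carries.
        envelope = json.loads(payload)
        assert envelope["op"] == "BATCH"
        return [json.loads(m) for m in envelope["messages"]]

    # Example: two LEDGER_STATUS responses batched into one transmission.
    msgs = [
        json.dumps({"op": "MESSAGE_RESPONSE", "msg_type": "LEDGER_STATUS",
                    "params": {"ledgerId": 2}}),
        json.dumps({"op": "MESSAGE_RESPONSE", "msg_type": "LEDGER_STATUS",
                    "params": {"ledgerId": 1}}),
    ]
    print(len(unpack_batch(make_batch(msgs))))  # -> 2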
2017-10-23 15:56:46,892 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 15:56:46,892 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 15:56:46,892 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 15:56:46,892 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 3 to run in 10 seconds 2017-10-23 15:56:51,953 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 15:56:51,953 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 15:56:51,953 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 15:56:51,954 | DEBUG | kit_zstack.py ( 47) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 15:56:56,895 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 3 2017-10-23 15:56:56,895 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 15:56:56,895 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 15:56:56,895 | DEBUG | monitor.py ( 290) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 15:56:56,895 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 15:56:56,895 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 15:56:56,895 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 15:56:56,895 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 4 to run in 10 seconds 2017-10-23 15:57:06,908 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 4 2017-10-23 15:57:06,908 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 15:57:06,908 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 15:57:06,908 | DEBUG | monitor.py ( 290) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 15:57:06,908 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
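Editor's note: the kit_zstack.py entries (reconcileNodeReg, maintainConnections) show each expected remote being matched against its known HA(host, port) before the next retry check is scheduled. A simplified, hypothetical version of that reconciliation step follows, assuming the node registry is just a name-to-(host, port) mapping; it is a sketch, not the real kit_zstack code.

    def reconcile_node_reg(registry, connected_remotes):
        # registry and connected_remotes both map node name -> (host, port).
        matched, missing, mismatched = [], [], []
        for name, ha in registry.items():
            if name not in connected_remotes:
                missing.append(name)        # not connected yet: retry on the next check
            elif connected_remotes[name] != ha:
                mismatched.append(name)     # address changed: needs a reconnect
            else:
                matched.append(name)        # "Node1 matched remote NodeX HA(...)"
        return matched, missing, mismatched

    registry = {"Node2": ("10.0.0.3", 9703),
                "Node3": ("10.0.0.4", 9705),
                "Node4": ("10.0.0.5", 9707)}
    print(reconcile_node_reg(registry, dict(registry)))  # everything matched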
2017-10-23 15:57:06,908 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 15:57:06,908 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 15:57:06,908 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 5 to run in 10 seconds 2017-10-23 15:57:06,970 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 15:57:06,970 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 15:57:06,970 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 15:57:06,971 | DEBUG | kit_zstack.py ( 47) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 15:57:16,921 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 5 2017-10-23 15:57:16,921 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 15:57:16,922 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 15:57:16,922 | DEBUG | monitor.py ( 290) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 15:57:16,922 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 15:57:16,922 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 15:57:16,922 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 15:57:16,922 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 6 to run in 10 seconds 2017-10-23 15:57:21,979 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 15:57:21,979 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 15:57:21,979 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 15:57:21,979 | DEBUG | kit_zstack.py ( 47) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 15:57:26,935 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 6 2017-10-23 15:57:26,935 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 15:57:26,935 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 15:57:26,935 | DEBUG | monitor.py ( 290) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 15:57:26,935 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
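Editor's note: the has_action_queue.py entries repeat the same pattern: checkPerformance runs, then is rescheduled under a fresh id to run again in 10 seconds. Below is a bare-bones illustration of such a repeat-and-reschedule queue; it is not the real HasActionQueue mixin, and the names and structure are assumptions.

    import itertools
    import time

    class RepeatingActionQueue:
        def __init__(self):
            self._ids = itertools.count(1)
            self._pending = []  # list of (due_time, action_id, callable, repeat_interval)

        def schedule(self, action, delay, interval=None):
            # Give every scheduled run its own id, like the increasing ids in the log.
            action_id = next(self._ids)
            self._pending.append((time.monotonic() + delay, action_id, action, interval))
            return action_id

        def service(self):
            # Run everything that is due; repeating actions get rescheduled.
            now = time.monotonic()
            due = [entry for entry in self._pending if entry[0] <= now]
            self._pending = [entry for entry in self._pending if entry[0] > now]
            for _, _, action, interval in due:
                action()
                if interval is not None:
                    self.schedule(action, interval, interval)

    q = RepeatingActionQueue()
    q.schedule(lambda: print("checking performance"), delay=0, interval=10)
    q.service()  # runs once now and queues the next run 10 seconds out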
2017-10-23 15:57:26,935 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 15:57:26,935 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 15:57:26,935 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 7 to run in 10 seconds 2017-10-23 15:57:36,888 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkNodeRequestSpike with id 2 2017-10-23 15:57:36,888 | DEBUG | node.py (2026) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 15:57:36,888 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 15:57:36,888 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 8 to run in 60 seconds 2017-10-23 15:57:36,888 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 1 2017-10-23 15:57:36,888 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 15:57:36,888 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 2 to run in 60 seconds 2017-10-23 15:57:36,936 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 7 2017-10-23 15:57:36,936 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 15:57:36,936 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 15:57:36,936 | DEBUG | monitor.py ( 290) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 15:57:36,936 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
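Editor's note: checkNodeRequestSpike and the notifier plugin keep logging "Not enough data to detect a ... spike", i.e. the spike check declines to fire until some history has accumulated. A rough sketch of that guard follows, with arbitrary illustrative thresholds that are not the notifier_plugin_manager defaults.

    def suspicious_spike(history, latest, min_samples=5, factor=3.0):
        # Only alert when there is enough history AND the latest value exceeds
        # the historical average by the given factor.
        if len(history) < min_samples:
            return False  # "Not enough data to detect a ... spike"
        average = sum(history) / len(history)
        return average > 0 and latest > factor * average

    print(suspicious_spike([], 100))                  # False: no data yet
    print(suspicious_spike([10, 12, 9, 11, 10], 95))  # True: roughly 9x the average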
2017-10-23 15:57:36,936 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 15:57:36,936 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 15:57:36,936 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 9 to run in 10 seconds 2017-10-23 15:57:36,981 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 15:57:36,982 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 15:57:36,982 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 15:57:36,982 | DEBUG | kit_zstack.py ( 47) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 15:57:46,948 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 9 2017-10-23 15:57:46,948 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 15:57:46,948 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 15:57:46,948 | DEBUG | monitor.py ( 290) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 15:57:46,948 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 15:57:46,948 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 15:57:46,948 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 15:57:46,948 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 10 to run in 10 seconds 2017-10-23 15:57:51,989 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 15:57:51,989 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 15:57:51,990 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 15:57:51,991 | DEBUG | kit_zstack.py ( 47) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 15:57:56,950 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 10 2017-10-23 15:57:56,950 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 15:57:56,950 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 15:57:56,950 | DEBUG | monitor.py ( 290) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 15:57:56,950 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 15:57:56,950 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 15:57:56,950 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 15:57:56,951 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 11 to run in 10 seconds 2017-10-23 15:58:06,956 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 11 2017-10-23 15:58:06,956 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 15:58:06,956 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 15:58:06,956 | DEBUG | monitor.py ( 290) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 15:58:06,956 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 15:58:06,956 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 15:58:06,956 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 15:58:06,957 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 12 to run in 10 seconds 2017-10-23 15:58:06,990 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 15:58:06,990 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 15:58:06,990 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 15:58:06,991 | DEBUG | kit_zstack.py ( 47) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 15:58:16,958 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 12 2017-10-23 15:58:16,958 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 15:58:16,958 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 15:58:16,958 | DEBUG | monitor.py ( 290) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 15:58:16,958 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 15:58:16,958 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 15:58:16,958 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 15:58:16,958 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 13 to run in 10 seconds 2017-10-23 15:58:21,997 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 15:58:21,997 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 15:58:21,997 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 15:58:21,997 | DEBUG | kit_zstack.py ( 47) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 15:58:26,971 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 13 2017-10-23 15:58:26,971 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 15:58:26,971 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 15:58:26,971 | DEBUG | monitor.py ( 290) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 15:58:26,971 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 15:58:26,971 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 15:58:26,971 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 15:58:26,971 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 14 to run in 10 seconds 2017-10-23 15:58:36,889 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkNodeRequestSpike with id 8 2017-10-23 15:58:36,890 | DEBUG | node.py (2026) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 15:58:36,890 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 15:58:36,890 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 15 to run in 60 seconds 2017-10-23 15:58:36,890 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 2 2017-10-23 15:58:36,890 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 15:58:36,890 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 3 to run in 60 seconds 2017-10-23 15:58:36,981 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 14 2017-10-23 15:58:36,981 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 15:58:36,981 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 15:58:36,981 | DEBUG | monitor.py ( 290) | isMasterThroughputTooLow | Node1 master 
throughput is not measurable. 2017-10-23 15:58:36,981 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 15:58:36,981 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 15:58:36,981 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 15:58:36,982 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 16 to run in 10 seconds 2017-10-23 15:58:36,999 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 15:58:36,999 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 15:58:36,999 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 15:58:36,999 | DEBUG | kit_zstack.py ( 47) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 15:58:46,985 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 16 2017-10-23 15:58:46,985 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 15:58:46,985 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 15:58:46,985 | DEBUG | monitor.py ( 290) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 15:58:46,986 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 15:58:46,986 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 15:58:46,986 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 15:58:46,986 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 17 to run in 10 seconds 2017-10-23 15:58:52,003 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 15:58:52,003 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 15:58:52,003 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 15:58:52,005 | DEBUG | kit_zstack.py ( 47) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 15:58:56,996 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 17 2017-10-23 15:58:56,996 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 15:58:56,996 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 15:58:56,996 | DEBUG | monitor.py ( 290) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 15:58:56,996 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 15:58:56,997 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 15:58:56,997 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 15:58:56,997 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 18 to run in 10 seconds 2017-10-23 15:59:07,006 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 18 2017-10-23 15:59:07,006 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 15:59:07,006 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 15:59:07,006 | DEBUG | monitor.py ( 290) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 15:59:07,006 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 15:59:07,006 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 15:59:07,006 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 15:59:07,006 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 19 to run in 10 seconds 2017-10-23 15:59:07,011 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 15:59:07,011 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 15:59:07,011 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 15:59:07,011 | DEBUG | kit_zstack.py ( 47) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 15:59:17,009 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 19 2017-10-23 15:59:17,009 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 15:59:17,010 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 15:59:17,010 | DEBUG | monitor.py ( 290) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 15:59:17,010 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 15:59:17,010 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 15:59:17,010 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 15:59:17,010 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 20 to run in 10 seconds 2017-10-23 15:59:22,022 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 15:59:22,022 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 15:59:22,022 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 15:59:22,023 | DEBUG | kit_zstack.py ( 47) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 15:59:27,011 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 20 2017-10-23 15:59:27,011 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 15:59:27,011 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 15:59:27,011 | DEBUG | monitor.py ( 290) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 15:59:27,011 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 15:59:27,011 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 15:59:27,011 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 15:59:27,011 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 21 to run in 10 seconds 2017-10-23 15:59:36,894 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkNodeRequestSpike with id 15 2017-10-23 15:59:36,895 | DEBUG | node.py (2026) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 15:59:36,895 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 15:59:36,895 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 22 to run in 60 seconds 2017-10-23 15:59:36,895 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 3 2017-10-23 15:59:36,895 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 15:59:36,895 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 4 to run in 60 seconds 2017-10-23 15:59:37,023 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 21 2017-10-23 15:59:37,024 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 15:59:37,024 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 15:59:37,024 | DEBUG | monitor.py ( 290) | isMasterThroughputTooLow | Node1 
master throughput is not measurable. 2017-10-23 15:59:37,024 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 15:59:37,024 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 15:59:37,024 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 15:59:37,024 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 23 to run in 10 seconds 2017-10-23 15:59:37,025 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 15:59:37,025 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 15:59:37,025 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 15:59:37,025 | DEBUG | kit_zstack.py ( 47) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 15:59:41,606 | TRACE | zstack.py ( 472) | _receiveFromListener | Node1C got 1 messages through listener 2017-10-23 15:59:41,606 | DEBUG | zstack.py ( 652) | handlePingPong | Node1C got ping from b'xo5JUY.S$6PWbpmz5XzA wait_for= cb=[_run_until_complete_cb() at /usr/lib/python3.5/asyncio/base_events.py:164]> took 0.318 seconds 2017-10-23 16:05:54,573 | DEBUG | base_events.py ( 681) | create_connection | connect to ('127.0.0.1', 30003) 2017-10-23 16:05:54,574 | DEBUG | base_events.py ( 681) | create_connection | connect to ('127.0.0.1', 30003) 2017-10-23 16:05:54,575 | DEBUG | base_events.py (1270) | _run_once | poll 6.345 ms took 0.010 ms: 2 events 2017-10-23 16:05:54,578 | DEBUG | base_events.py ( 719) | create_connection | connected to 127.0.0.1:'30003': (<_SelectorSocketTransport fd=75 read=polling write=>, ) 2017-10-23 16:05:54,581 | TRACE | has_action_queue.py ( 34) | _schedule | Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv scheduling action partial(_declareTimeoutExceeded) with id 1 to run in 600 seconds 2017-10-23 16:05:54,582 | DEBUG | base_events.py ( 719) | create_connection | connected to 127.0.0.1:'30003': (<_SelectorSocketTransport fd=76 read=polling write=>, ) 2017-10-23 16:05:54,582 | TRACE | has_action_queue.py ( 34) | _schedule | Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv scheduling action partial(_declareTimeoutExceeded) with id 2 to run in 600 seconds 2017-10-23 16:05:57,298 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 66 2017-10-23 16:05:57,299 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 16:05:57,299 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:05:57,299 | TRACE | monitor.py ( 299) | isMasterThroughputTooLow | Node1 master throughput ratio 0.9467634400402085 is acceptable. 2017-10-23 16:05:57,299 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:05:57,299 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:05:57,299 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:05:57,299 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 67 to run in 10 seconds 2017-10-23 16:06:07,234 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:06:07,234 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:06:07,234 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:06:07,235 | DEBUG | kit_zstack.py ( 47) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:06:07,306 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 67 2017-10-23 16:06:07,306 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 16:06:07,306 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:06:07,306 | TRACE | monitor.py ( 299) | isMasterThroughputTooLow | Node1 master throughput ratio 0.9467634400402085 is acceptable. 2017-10-23 16:06:07,306 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:06:07,306 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:06:07,306 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:06:07,306 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 68 to run in 10 seconds 2017-10-23 16:06:17,350 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 68 2017-10-23 16:06:17,350 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 16:06:17,350 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:06:17,350 | TRACE | monitor.py ( 299) | isMasterThroughputTooLow | Node1 master throughput ratio 0.9467634400402085 is acceptable. 2017-10-23 16:06:17,350 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:06:17,350 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:06:17,361 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:06:17,361 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 69 to run in 10 seconds 2017-10-23 16:06:22,248 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:06:22,248 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:06:22,248 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:06:22,248 | DEBUG | kit_zstack.py ( 47) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:06:27,368 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 69 2017-10-23 16:06:27,369 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 16:06:27,369 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:06:27,369 | TRACE | monitor.py ( 299) | isMasterThroughputTooLow | Node1 master throughput ratio 0.9467634400402085 is acceptable. 2017-10-23 16:06:27,369 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:06:27,369 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:06:27,369 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:06:27,369 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 70 to run in 10 seconds 2017-10-23 16:06:35,620 | TRACE | remote.py ( 117) | hasLostConnection | Remote Node3:HA(host='10.0.0.4', port=9705) has monitor events: [512, 4] 2017-10-23 16:06:35,620 | DEBUG | remote.py ( 121) | hasLostConnection | Node3:HA(host='10.0.0.4', port=9705) found disconnected event on monitor 2017-10-23 16:06:35,620 | DEBUG | keep_in_touch.py ( 68) | conns | Node1's connections changed from {'Node4', 'Node2', 'Node3'} to {'Node4', 'Node2'} 2017-10-23 16:06:35,620 | INFO | keep_in_touch.py ( 92) | _connsChanged | Node1 disconnected from Node3 2017-10-23 16:06:35,621 | DEBUG | motor.py ( 34) | set_status | Node1 changing status from started to started_hungry 2017-10-23 16:06:35,621 | DEBUG | node.py ( 918) | checkInstances | Node1 choosing to start election on the basis of count 3 and nodes {'Node4', 'Node2'} 2017-10-23 16:06:36,462 | TRACE | remote.py ( 117) | hasLostConnection | Remote Node2:HA(host='10.0.0.3', port=9703) has monitor events: [512, 4] 2017-10-23 16:06:36,466 | DEBUG | remote.py ( 121) | hasLostConnection | Node2:HA(host='10.0.0.3', port=9703) found disconnected event on monitor 2017-10-23 16:06:36,466 | DEBUG | keep_in_touch.py ( 68) | conns | Node1's connections changed from {'Node4', 'Node2'} to {'Node4'} 2017-10-23 16:06:36,466 | INFO | keep_in_touch.py ( 92) | _connsChanged | Node1 disconnected from Node2 2017-10-23 16:06:36,467 | DEBUG | motor.py ( 34) | set_status | Node1 changing status from started_hungry to starting 
2017-10-23 16:06:36,963 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkNodeRequestSpike with id 64 2017-10-23 16:06:36,963 | DEBUG | node.py (2026) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:06:36,963 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:06:36,963 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 71 to run in 60 seconds 2017-10-23 16:06:36,963 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 10 2017-10-23 16:06:36,963 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:06:36,963 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 11 to run in 60 seconds 2017-10-23 16:06:37,256 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:06:37,257 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:06:37,257 | DEBUG | kit_zstack.py ( 65) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:06:37,257 | DEBUG | zstack.py ( 643) | sendPingPong | Node1 will be sending in batch 2017-10-23 16:06:37,257 | DEBUG | zstack.py ( 643) | sendPingPong | Node1 will be sending in batch 2017-10-23 16:06:37,257 | DEBUG | kit_zstack.py ( 47) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:06:37,271 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message b'pi' to Node3 2017-10-23 16:06:37,271 | TRACE | batched.py ( 85) | flushOutBoxes | Node1 sending msg b'pi' to Node3 2017-10-23 16:06:37,271 | DEBUG | zstack.py ( 699) | transmit | Node1 transmitting message b'pi' to Node2 2017-10-23 16:06:37,272 | TRACE | batched.py ( 85) | flushOutBoxes | Node1 sending msg b'pi' to Node2 2017-10-23 16:06:37,379 | TRACE | has_action_queue.py ( 64) | _serviceActions | Node1 running action checkPerformance with id 70 2017-10-23 16:06:37,379 | TRACE | node.py (2005) | checkPerformance | Node1 checking its performance 2017-10-23 16:06:37,379 | DEBUG | notifier_plugin_manager.py ( 65) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:06:37,380 | TRACE | monitor.py ( 299) | isMasterThroughputTooLow | Node1 master throughput ratio 0.9467634400402085 is acceptable. 2017-10-23 16:06:37,380 | TRACE | monitor.py ( 315) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:06:37,380 | TRACE | monitor.py ( 345) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:06:37,380 | DEBUG | node.py (2022) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:06:37,380 | TRACE | has_action_queue.py ( 34) | _schedule | Node1 scheduling action checkPerformance with id 72 to run in 10 seconds 2017-10-23 16:06:46,821 | DEBUG | node_runner.py ( 18) | run_node | You can find logs in /home/sovrin/.sovrin/Node1.log 2017-10-23 16:06:46,822 | DEBUG | node_runner.py ( 21) | run_node | Sovrin related env vars: [] 2017-10-23 16:06:50,520 | DEBUG | __init__.py ( 60) | register | Registered VCS backend: git 2017-10-23 16:06:50,567 | DEBUG | __init__.py ( 60) | register | Registered VCS backend: hg 2017-10-23 16:06:50,669 | DEBUG | __init__.py ( 60) | register | Registered VCS backend: svn 2017-10-23 16:06:50,669 | DEBUG | __init__.py ( 60) | register | Registered VCS backend: bzr 2017-10-23 16:06:51,259 | DEBUG | selector_events.py ( 53) | __init__ | Using selector: EpollSelector 2017-10-23 16:06:51,259 | DEBUG | looper.py ( 125) | __init__ | Setting handler for SIGINT 2017-10-23 16:06:51,393 | DEBUG | ledger.py ( 200) | start | Starting ledger... 2017-10-23 16:06:51,433 | DEBUG | ledger.py ( 72) | recoverTree | Recovering tree from hash store of size 7 2017-10-23 16:06:51,434 | DEBUG | ledger.py ( 82) | recoverTree | Recovered tree in 0.0008222420001402497 seconds 2017-10-23 16:06:51,774 | DEBUG | idr_cache.py ( 25) | __init__ | Initializing identity cache Node1 2017-10-23 16:06:51,818 | INFO | node.py (2420) | initStateFromLedger | Node1 found state to be empty, recreating from ledger 2017-10-23 16:06:51,910 | DEBUG | ledger.py ( 200) | start | Starting ledger... 
2017-10-23 16:06:51,934 | DEBUG | ledger.py ( 72) | recoverTree | Recovering tree from hash store of size 4 2017-10-23 16:06:51,934 | DEBUG | ledger.py ( 82) | recoverTree | Recovered tree in 0.00047500990331172943 seconds 2017-10-23 16:06:51,934 | INFO | node.py (2420) | initStateFromLedger | Node1 found state to be empty, recreating from ledger 2017-10-23 16:06:52,069 | DEBUG | plugin_loader.py ( 96) | _load | skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute 2017-10-23 16:06:52,069 | DEBUG | plugin_loader.py ( 96) | _load | skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute 2017-10-23 16:06:52,069 | DEBUG | plugin_loader.py ( 96) | _load | skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute 2017-10-23 16:06:52,069 | DEBUG | plugin_loader.py ( 96) | _load | skipping plugin plugin_firebase_stats_consumer[class: typing.Dict<~KT, ~VT>] because it does not have a 'pluginType' attribute 2017-10-23 16:06:52,070 | DEBUG | plugin_loader.py ( 96) | _load | skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute 2017-10-23 16:06:52,070 | INFO | plugin_loader.py ( 117) | _load | plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer 2017-10-23 16:06:52,078 | DEBUG | has_action_queue.py ( 79) | startRepeating | checkPerformance will be repeating every 60 seconds 2017-10-23 16:06:52,078 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 1 to run in 60 seconds 2017-10-23 16:06:52,079 | DEBUG | replica.py ( 313) | h | Node1:0 set watermarks as 0 300 2017-10-23 16:06:52,079 | DISPLAY | replicas.py ( 36) | grow | Node1 added replica Node1:0 to instance 0 (master) 2017-10-23 16:06:52,079 | DEBUG | replica.py ( 313) | h | Node1:1 set watermarks as 0 300 2017-10-23 16:06:52,079 | DISPLAY | replicas.py ( 36) | grow | Node1 added replica Node1:1 to instance 1 (backup) 2017-10-23 16:06:52,079 | DEBUG | has_action_queue.py ( 79) | startRepeating | checkPerformance will be repeating every 10 seconds 2017-10-23 16:06:52,079 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 1 to run in 10 seconds 2017-10-23 16:06:52,079 | DEBUG | has_action_queue.py ( 79) | startRepeating | checkNodeRequestSpike will be repeating every 60 seconds 2017-10-23 16:06:52,080 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 2 to run in 60 seconds 2017-10-23 16:06:52,080 | DEBUG | plugin_helper.py ( 24) | loadPlugins | Plugin loading started to load plugins from basedir: /home/sovrin/.sovrin 2017-10-23 16:06:52,080 | DEBUG | plugin_helper.py ( 68) | loadPlugins | Total plugins loaded from basedir /home/sovrin/.sovrin are : 0 2017-10-23 16:06:52,080 | DEBUG | node.py ( 340) | __init__ | total plugins loaded in node: 0 2017-10-23 16:06:52,144 | DEBUG | ledger.py ( 200) | start | Starting ledger... 2017-10-23 16:06:52,165 | DEBUG | ledger.py ( 72) | recoverTree | Recovering tree from hash store of size 1 2017-10-23 16:06:52,166 | DEBUG | ledger.py ( 82) | recoverTree | Recovered tree in 0.0004597869701683521 seconds 2017-10-23 16:06:52,195 | WARNING | upgrader.py ( 126) | check_upgrade_succeeded | Upgrade for node Node1 was not scheduled. 
Last event is scheduled:2017-10-17 11:20:00+00:00:1.1.37:None 2017-10-23 16:06:52,195 | INFO | node.py (2420) | initStateFromLedger | Node1 found state to be empty, recreating from ledger 2017-10-23 16:06:52,195 | DEBUG | motor.py ( 34) | set_status | Node1 changing status from stopped to starting 2017-10-23 16:06:52,196 | DEBUG | ledger.py ( 198) | start | Ledger already started. 2017-10-23 16:06:52,196 | DEBUG | ledger.py ( 198) | start | Ledger already started. 2017-10-23 16:06:52,196 | DEBUG | ledger.py ( 198) | start | Ledger already started. 2017-10-23 16:06:52,196 | DEBUG | zstack.py ( 319) | start | Node1 starting with restricted as True and reSetupAuth as True 2017-10-23 16:06:52,196 | DEBUG | authenticator.py ( 31) | start | Starting ZAP at inproc://zeromq.zap.1 2017-10-23 16:06:52,196 | DEBUG | base.py ( 72) | allow | Allowing 0.0.0.0 2017-10-23 16:06:52,197 | DEBUG | base.py ( 112) | configure_curve | Configure curve: *[/home/sovrin/.sovrin/Node1/public_keys] 2017-10-23 16:06:52,197 | DEBUG | zstack.py ( 347) | open | Node1 will bind its listener at 9701 2017-10-23 16:06:52,197 | INFO | stacks.py ( 84) | start | CONNECTION: Node1 listening for other nodes at 0.0.0.0:9701 2017-10-23 16:06:52,198 | DEBUG | zstack.py ( 319) | start | Node1C starting with restricted as False and reSetupAuth as True 2017-10-23 16:06:52,198 | DEBUG | authenticator.py ( 31) | start | Starting ZAP at inproc://zeromq.zap.2 2017-10-23 16:06:52,198 | DEBUG | base.py ( 72) | allow | Allowing 0.0.0.0 2017-10-23 16:06:52,198 | DEBUG | base.py ( 112) | configure_curve | Configure curve: *[*] 2017-10-23 16:06:52,198 | DEBUG | zstack.py ( 347) | open | Node1C will bind its listener at 9702 2017-10-23 16:06:52,198 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action propose_view_change with id 3 to run in 60 seconds 2017-10-23 16:06:52,198 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 4 to run in 3 seconds 2017-10-23 16:06:52,199 | DEBUG | has_action_queue.py ( 79) | startRepeating | dump_json_file will be repeating every 60 seconds 2017-10-23 16:06:52,199 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 5 to run in 60 seconds 2017-10-23 16:06:52,199 | INFO | node.py ( 621) | start | Node1 first time running... 
2017-10-23 16:06:52,199 | DEBUG | kit_zstack.py ( 97) | connectToMissing | CONNECTION: Node1 found the following missing connections: Node3, Node4, Node2 2017-10-23 16:06:52,200 | TRACE | remote.py ( 86) | connect | connecting socket 78 45796256 to remote Node3:HA(host='10.0.0.4', port=9705) 2017-10-23 16:06:52,200 | INFO | zstack.py ( 590) | connect | CONNECTION: Node1 looking for Node3 at 10.0.0.4:9705 2017-10-23 16:06:52,212 | DEBUG | zstack.py ( 645) | sendPingPong | Node1 pinged Node3 2017-10-23 16:06:52,212 | TRACE | remote.py ( 86) | connect | connecting socket 81 45900784 to remote Node4:HA(host='10.0.0.5', port=9707) 2017-10-23 16:06:52,213 | INFO | zstack.py ( 590) | connect | CONNECTION: Node1 looking for Node4 at 10.0.0.5:9707 2017-10-23 16:06:52,213 | DEBUG | zstack.py ( 645) | sendPingPong | Node1 pinged Node4 2017-10-23 16:06:52,213 | TRACE | remote.py ( 86) | connect | connecting socket 82 45938064 to remote Node2:HA(host='10.0.0.3', port=9703) 2017-10-23 16:06:52,213 | INFO | zstack.py ( 590) | connect | CONNECTION: Node1 looking for Node2 at 10.0.0.3:9703 2017-10-23 16:06:52,213 | DEBUG | zstack.py ( 645) | sendPingPong | Node1 pinged Node2 2017-10-23 16:06:52,213 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:06:52,225 | DEBUG | zstack.py ( 723) | transmit | Node1 transmitting message b'pi' to Node3 2017-10-23 16:06:52,225 | TRACE | batched.py ( 96) | flushOutBoxes | Node1 sending msg b'pi' to Node3 2017-10-23 16:06:52,225 | DEBUG | zstack.py ( 723) | transmit | Node1 transmitting message b'pi' to Node2 2017-10-23 16:06:52,225 | TRACE | batched.py ( 96) | flushOutBoxes | Node1 sending msg b'pi' to Node2 2017-10-23 16:06:52,225 | DEBUG | zstack.py ( 723) | transmit | Node1 transmitting message b'pi' to Node4 2017-10-23 16:06:52,225 | TRACE | batched.py ( 96) | flushOutBoxes | Node1 sending msg b'pi' to Node4 2017-10-23 16:06:52,308 | TRACE | zstack.py ( 479) | _receiveFromListener | Node1C got 1 messages through listener 2017-10-23 16:06:52,308 | DEBUG | zstack.py ( 663) | handlePingPong | Node1C got ping from b'xo5JUY.S$6PWbpmz5XzA processing config ledger for any POOL_CONFIGs 2017-10-23 16:06:52,482 | DEBUG | upgrader.py ( 208) | processLedger | Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv processing config ledger for any upgrades 2017-10-23 16:06:52,482 | INFO | upgrader.py ( 214) | processLedger | Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv found upgrade START txn OrderedDict([('action', 'start'), ('force', True), ('identifier', 'V4SGRU86Z58d6TV7PBUe6f'), ('justification', None), ('name', 'upgrade-1137'), ('reqId', 1508774754234674), ('schedule', OrderedDict([('4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA', '2017-10-17T11:35:00.000000+00:00'), ('4Tn3wZMNCvhSTXPcLinQDnHyj56DTLQtL61ki4jo2Loc', '2017-10-17T11:40:00.000000+00:00'), ('8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb', '2017-10-17T11:25:00.000000+00:00'), ('DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya', '2017-10-17T11:30:00.000000+00:00'), ('Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv', '2017-10-17T11:20:00.000000+00:00')])), ('sha256', 'f6f2ea8f45d8a057c9566a33f99474da2e5c6a6604d736121650e2730c6fb0a3'), ('signature', '3R1H8sKSqCKxj3VJxoU8RUbFt9RhEzjYniG5afzcxkzxdogseFQ866SgmtmZBvVRzvV6G88JcVmQzf6dMH4XmJzQ'), ('timeout', 10), ('txnTime', 1508774754), ('type', '109'), ('version', '1.1.37'), ('seqNo', 1)]) 2017-10-23 16:06:52,482 | INFO | upgrader.py ( 292) | handleUpgradeTxn | Node 'Node1' handles upgrade txn OrderedDict([('action', 'start'), 
('force', True), ('identifier', 'V4SGRU86Z58d6TV7PBUe6f'), ('justification', None), ('name', 'upgrade-1137'), ('reqId', 1508774754234674), ('schedule', OrderedDict([('4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA', '2017-10-17T11:35:00.000000+00:00'), ('4Tn3wZMNCvhSTXPcLinQDnHyj56DTLQtL61ki4jo2Loc', '2017-10-17T11:40:00.000000+00:00'), ('8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb', '2017-10-17T11:25:00.000000+00:00'), ('DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya', '2017-10-17T11:30:00.000000+00:00'), ('Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv', '2017-10-17T11:20:00.000000+00:00')])), ('sha256', 'f6f2ea8f45d8a057c9566a33f99474da2e5c6a6604d736121650e2730c6fb0a3'), ('signature', '3R1H8sKSqCKxj3VJxoU8RUbFt9RhEzjYniG5afzcxkzxdogseFQ866SgmtmZBvVRzvV6G88JcVmQzf6dMH4XmJzQ'), ('timeout', 10), ('txnTime', 1508774754), ('type', '109'), ('version', '1.1.37'), ('seqNo', 1)]) 2017-10-23 16:06:52,482 | DEBUG | node.py (2608) | send | Node1 sending message MESSAGE_REQUEST{'params': {'ledgerId': 1}, 'msg_type': 'LEDGER_STATUS'} to all recipients: ['Node3', 'Node2', 'Node4'] 2017-10-23 16:06:52,482 | DEBUG | node.py ( 855) | _ask_for_ledger_status | Node1 asking Node1 for ledger status of ledger 1 2017-10-23 16:06:52,483 | DEBUG | node.py (2608) | send | Node1 sending message MESSAGE_REQUEST{'params': {'ledgerId': 1}, 'msg_type': 'LEDGER_STATUS'} to 1 recipients: ['Node2'] 2017-10-23 16:06:52,483 | DEBUG | node.py ( 855) | _ask_for_ledger_status | Node1 asking Node2 for ledger status of ledger 1 2017-10-23 16:06:52,483 | DEBUG | node.py (2608) | send | Node1 sending message MESSAGE_REQUEST{'params': {'ledgerId': 1}, 'msg_type': 'LEDGER_STATUS'} to 1 recipients: ['Node3'] 2017-10-23 16:06:52,483 | DEBUG | node.py ( 855) | _ask_for_ledger_status | Node1 asking Node3 for ledger status of ledger 1 2017-10-23 16:06:52,483 | DEBUG | node.py (2608) | send | Node1 sending message MESSAGE_REQUEST{'params': {'ledgerId': 1}, 'msg_type': 'LEDGER_STATUS'} to 1 recipients: ['Node4'] 2017-10-23 16:06:52,483 | DEBUG | node.py ( 855) | _ask_for_ledger_status | Node1 asking Node4 for ledger status of ledger 1 2017-10-23 16:06:52,483 | DEBUG | ledger_manager.py (1027) | processStashedLedgerStatuses | Node1 going to process 2 stashed ledger statuses for ledger 1 2017-10-23 16:06:52,483 | DEBUG | ledger_manager.py ( 246) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None} from Node2 2017-10-23 16:06:52,483 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 1 of size 7 with 7 2017-10-23 16:06:52,484 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 1 of size 7 with 7 2017-10-23 16:06:52,484 | DEBUG | ledger_manager.py ( 246) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None} from Node3 2017-10-23 16:06:52,484 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 1 of size 7 with 7 2017-10-23 16:06:52,484 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 1 of size 7 with 7 2017-10-23 16:06:52,484 | DEBUG | ledger_manager.py ( 312) | processLedgerStatus | Node1 found out from {'Node3', 'Node2'} that its ledger of type 1 is latest 2017-10-23 16:06:52,484 | DEBUG | ledger_manager.py ( 315) | processLedgerStatus | Node1 found from 
ledger status LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None} that it does not need catchup 2017-10-23 16:06:52,484 | DEBUG | node.py (1479) | preLedgerCatchUp | Node1 going to process any ordered requests before starting catchup. 2017-10-23 16:06:52,484 | DEBUG | replica.py (2259) | _remove_ordered_from_queue | Node1:0 going to remove 0 Ordered messages from outbox 2017-10-23 16:06:52,484 | DEBUG | node.py (1888) | force_process_ordered | Node1 processed 0 Ordered batches for instance 0 before starting catch up 2017-10-23 16:06:52,484 | DEBUG | replica.py (2259) | _remove_ordered_from_queue | Node1:1 going to remove 0 Ordered messages from outbox 2017-10-23 16:06:52,484 | DEBUG | node.py (1888) | force_process_ordered | Node1 processed 0 Ordered batches for instance 1 before starting catch up 2017-10-23 16:06:52,484 | DEBUG | node.py (2469) | processStashedOrderedReqs | Node1 processed 0 stashed ordered requests 2017-10-23 16:06:52,485 | DEBUG | monitor.py ( 192) | reset | Node1's Monitor being reset 2017-10-23 16:06:52,485 | INFO | node.py (1489) | preLedgerCatchUp | Node1 reverted 0 batches before starting catch up for ledger 1 2017-10-23 16:06:52,485 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 1 of size 7 with 7 2017-10-23 16:06:52,485 | INFO | ledger_manager.py ( 848) | catchupCompleted | CATCH-UP: Node1 completed catching up ledger 1, caught up 0 in total 2017-10-23 16:06:52,485 | DEBUG | node.py (1609) | num_txns_caught_up_in_last_catchup | Node1 caught up to 0 txns in the last catchup 2017-10-23 16:06:52,485 | DEBUG | node.py (2469) | processStashedOrderedReqs | Node1 processed 0 stashed ordered requests 2017-10-23 16:06:52,485 | DEBUG | monitor.py ( 192) | reset | Node1's Monitor being reset 2017-10-23 16:06:52,485 | DEBUG | primary_selector.py ( 190) | _hasViewChangeQuorum | Node1 needs 2 ViewChangeDone messages 2017-10-23 16:06:52,485 | DEBUG | node.py (1568) | caught_up_for_current_view | Node1 does not have view change quorum for view 0 2017-10-23 16:06:52,485 | DEBUG | node.py (1552) | is_catchup_needed | Node1 is not caught up for the current view 0 2017-10-23 16:06:52,485 | DEBUG | node.py (1609) | num_txns_caught_up_in_last_catchup | Node1 caught up to 0 txns in the last catchup 2017-10-23 16:06:52,486 | DEBUG | node.py (1557) | is_catchup_needed | Node1 ordered till last prepared certificate 2017-10-23 16:06:52,486 | INFO | node.py (1537) | allLedgersCaughtUp | CATCH-UP: Node1 does not need any more catchups 2017-10-23 16:06:52,486 | DEBUG | primary_decider.py ( 134) | send | Node1's elector sending VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': [(0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'), (1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'), (2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF')], 'name': 'Node1'} 2017-10-23 16:06:52,486 | DEBUG | primary_selector.py ( 190) | _hasViewChangeQuorum | Node1 needs 1 ViewChangeDone messages 2017-10-23 16:06:52,486 | DEBUG | primary_selector.py ( 266) | _startSelection | Node1 cannot start primary selection found failure in primary verification. 
This can happen due to lack of appropriate ViewChangeDone messages 2017-10-23 16:06:52,490 | INFO | upgrader.py ( 150) | should_notify_about_upgrade_result | Node's 'Node1' last upgrade txn is None 2017-10-23 16:06:52,491 | INFO | ledger_manager.py ( 848) | catchupCompleted | CATCH-UP: Node1 completed catching up ledger 2, caught up 0 in total 2017-10-23 16:06:52,491 | DEBUG | node.py (1609) | num_txns_caught_up_in_last_catchup | Node1 caught up to 0 txns in the last catchup 2017-10-23 16:06:52,491 | DEBUG | node.py (2469) | processStashedOrderedReqs | Node1 processed 0 stashed ordered requests 2017-10-23 16:06:52,491 | DEBUG | monitor.py ( 192) | reset | Node1's Monitor being reset 2017-10-23 16:06:52,491 | DEBUG | primary_selector.py ( 190) | _hasViewChangeQuorum | Node1 needs 1 ViewChangeDone messages 2017-10-23 16:06:52,491 | DEBUG | node.py (1568) | caught_up_for_current_view | Node1 does not have view change quorum for view 0 2017-10-23 16:06:52,491 | DEBUG | node.py (1552) | is_catchup_needed | Node1 is not caught up for the current view 0 2017-10-23 16:06:52,491 | DEBUG | node.py (1609) | num_txns_caught_up_in_last_catchup | Node1 caught up to 0 txns in the last catchup 2017-10-23 16:06:52,492 | DEBUG | node.py (1557) | is_catchup_needed | Node1 ordered till last prepared certificate 2017-10-23 16:06:52,492 | INFO | node.py (1537) | allLedgersCaughtUp | CATCH-UP: Node1 does not need any more catchups 2017-10-23 16:06:52,492 | DEBUG | primary_decider.py ( 134) | send | Node1's elector sending VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': [(0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'), (1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'), (2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF')], 'name': 'Node1'} 2017-10-23 16:06:52,492 | DEBUG | primary_selector.py ( 190) | _hasViewChangeQuorum | Node1 needs 1 ViewChangeDone messages 2017-10-23 16:06:52,492 | DEBUG | primary_selector.py ( 266) | _startSelection | Node1 cannot start primary selection found failure in primary verification. 
This can happen due to lack of appropriate ViewChangeDone messages 2017-10-23 16:06:52,493 | DEBUG | ledger_manager.py ( 246) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 1, 'ledgerId': 2, 'merkleRoot': 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF', 'viewNo': None} from Node2 2017-10-23 16:06:52,493 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 2 of size 1 with 1 2017-10-23 16:06:52,493 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 2 of size 1 with 1 2017-10-23 16:06:52,493 | DEBUG | node.py (2608) | send | Node1 sending message VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': [(0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'), (1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'), (2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF')], 'name': 'Node1'} to all recipients: ['Node3', 'Node2', 'Node4'] 2017-10-23 16:06:52,494 | DEBUG | node.py (2608) | send | Node1 sending message VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': [(0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'), (1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'), (2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF')], 'name': 'Node1'} to all recipients: ['Node3', 'Node2', 'Node4'] 2017-10-23 16:06:52,494 | DEBUG | batched.py ( 100) | flushOutBoxes | Node1 batching 4 msgs to Node3 into one transmission 2017-10-23 16:06:52,494 | TRACE | batched.py ( 101) | flushOutBoxes | messages: deque([b'{"msg_type":"LEDGER_STATUS","params":{"ledgerId":1},"op":"MESSAGE_REQUEST"}', b'{"msg_type":"LEDGER_STATUS","params":{"ledgerId":1},"op":"MESSAGE_REQUEST"}', b'{"viewNo":0,"ledgerInfo":[[0,4,"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3"],[1,7,"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o"],[2,1,"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF"]],"name":"Node1","op":"VIEW_CHANGE_DONE"}', b'{"viewNo":0,"ledgerInfo":[[0,4,"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3"],[1,7,"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o"],[2,1,"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF"]],"name":"Node1","op":"VIEW_CHANGE_DONE"}']) 2017-10-23 16:06:52,494 | TRACE | batched.py ( 110) | flushOutBoxes | Node1 sending payload to Node3: b'{"messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_REQUEST\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_REQUEST\\"}","{\\"viewNo\\":0,\\"ledgerInfo\\":[[0,4,\\"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3\\"],[1,7,\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\"],[2,1,\\"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF\\"]],\\"name\\":\\"Node1\\",\\"op\\":\\"VIEW_CHANGE_DONE\\"}","{\\"viewNo\\":0,\\"ledgerInfo\\":[[0,4,\\"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3\\"],[1,7,\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\"],[2,1,\\"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF\\"]],\\"name\\":\\"Node1\\",\\"op\\":\\"VIEW_CHANGE_DONE\\"}"],"signature":null,"op":"BATCH"}' 2017-10-23 16:06:52,494 | DEBUG | zstack.py ( 723) | transmit | Node1 transmitting message 
b'{"messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_REQUEST\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_REQUEST\\"}","{\\"viewNo\\":0,\\"ledgerInfo\\":[[0,4,\\"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3\\"],[1,7,\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\"],[2,1,\\"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF\\"]],\\"name\\":\\"Node1\\",\\"op\\":\\"VIEW_CHANGE_DONE\\"}","{\\"viewNo\\":0,\\"ledgerInfo\\":[[0,4,\\"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3\\"],[1,7,\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\"],[2,1,\\"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF\\"]],\\"name\\":\\"Node1\\",\\"op\\":\\"VIEW_CHANGE_DONE\\"}"],"signature":null,"op":"BATCH"}' to Node3 2017-10-23 16:06:52,495 | DEBUG | batched.py ( 100) | flushOutBoxes | Node1 batching 4 msgs to Node2 into one transmission 2017-10-23 16:06:52,495 | TRACE | batched.py ( 101) | flushOutBoxes | messages: deque([b'{"msg_type":"LEDGER_STATUS","params":{"ledgerId":1},"op":"MESSAGE_REQUEST"}', b'{"msg_type":"LEDGER_STATUS","params":{"ledgerId":1},"op":"MESSAGE_REQUEST"}', b'{"viewNo":0,"ledgerInfo":[[0,4,"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3"],[1,7,"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o"],[2,1,"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF"]],"name":"Node1","op":"VIEW_CHANGE_DONE"}', b'{"viewNo":0,"ledgerInfo":[[0,4,"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3"],[1,7,"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o"],[2,1,"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF"]],"name":"Node1","op":"VIEW_CHANGE_DONE"}']) 2017-10-23 16:06:52,495 | TRACE | batched.py ( 110) | flushOutBoxes | Node1 sending payload to Node2: b'{"messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_REQUEST\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_REQUEST\\"}","{\\"viewNo\\":0,\\"ledgerInfo\\":[[0,4,\\"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3\\"],[1,7,\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\"],[2,1,\\"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF\\"]],\\"name\\":\\"Node1\\",\\"op\\":\\"VIEW_CHANGE_DONE\\"}","{\\"viewNo\\":0,\\"ledgerInfo\\":[[0,4,\\"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3\\"],[1,7,\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\"],[2,1,\\"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF\\"]],\\"name\\":\\"Node1\\",\\"op\\":\\"VIEW_CHANGE_DONE\\"}"],"signature":null,"op":"BATCH"}' 2017-10-23 16:06:52,495 | DEBUG | zstack.py ( 723) | transmit | Node1 transmitting message b'{"messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_REQUEST\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_REQUEST\\"}","{\\"viewNo\\":0,\\"ledgerInfo\\":[[0,4,\\"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3\\"],[1,7,\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\"],[2,1,\\"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF\\"]],\\"name\\":\\"Node1\\",\\"op\\":\\"VIEW_CHANGE_DONE\\"}","{\\"viewNo\\":0,\\"ledgerInfo\\":[[0,4,\\"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3\\"],[1,7,\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\"],[2,1,\\"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF\\"]],\\"name\\":\\"Node1\\",\\"op\\":\\"VIEW_CHANGE_DONE\\"}"],"signature":null,"op":"BATCH"}' to Node2 2017-10-23 16:06:52,495 | DEBUG | batched.py ( 100) | flushOutBoxes | Node1 batching 4 msgs to Node4 into one transmission 2017-10-23 16:06:52,495 | TRACE | batched.py ( 101) | 
flushOutBoxes | messages: deque([b'{"msg_type":"LEDGER_STATUS","params":{"ledgerId":1},"op":"MESSAGE_REQUEST"}', b'{"msg_type":"LEDGER_STATUS","params":{"ledgerId":1},"op":"MESSAGE_REQUEST"}', b'{"viewNo":0,"ledgerInfo":[[0,4,"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3"],[1,7,"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o"],[2,1,"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF"]],"name":"Node1","op":"VIEW_CHANGE_DONE"}', b'{"viewNo":0,"ledgerInfo":[[0,4,"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3"],[1,7,"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o"],[2,1,"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF"]],"name":"Node1","op":"VIEW_CHANGE_DONE"}']) 2017-10-23 16:06:52,496 | TRACE | batched.py ( 110) | flushOutBoxes | Node1 sending payload to Node4: b'{"messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_REQUEST\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_REQUEST\\"}","{\\"viewNo\\":0,\\"ledgerInfo\\":[[0,4,\\"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3\\"],[1,7,\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\"],[2,1,\\"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF\\"]],\\"name\\":\\"Node1\\",\\"op\\":\\"VIEW_CHANGE_DONE\\"}","{\\"viewNo\\":0,\\"ledgerInfo\\":[[0,4,\\"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3\\"],[1,7,\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\"],[2,1,\\"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF\\"]],\\"name\\":\\"Node1\\",\\"op\\":\\"VIEW_CHANGE_DONE\\"}"],"signature":null,"op":"BATCH"}' 2017-10-23 16:06:52,496 | DEBUG | zstack.py ( 723) | transmit | Node1 transmitting message b'{"messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_REQUEST\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_REQUEST\\"}","{\\"viewNo\\":0,\\"ledgerInfo\\":[[0,4,\\"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3\\"],[1,7,\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\"],[2,1,\\"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF\\"]],\\"name\\":\\"Node1\\",\\"op\\":\\"VIEW_CHANGE_DONE\\"}","{\\"viewNo\\":0,\\"ledgerInfo\\":[[0,4,\\"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3\\"],[1,7,\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\"],[2,1,\\"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF\\"]],\\"name\\":\\"Node1\\",\\"op\\":\\"VIEW_CHANGE_DONE\\"}"],"signature":null,"op":"BATCH"}' to Node4 2017-10-23 16:06:52,496 | DEBUG | zstack.py ( 728) | transmit | Remote Node4 is not connected - message will not be sent immediately.If this problem does not resolve itself - check your firewall settings 2017-10-23 16:06:52,496 | TRACE | zstack.py ( 479) | _receiveFromListener | Node1 got 1 messages through listener 2017-10-23 16:06:52,497 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node3: BATCH{'messages': ['{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}', '{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}'], 'signature': None} 2017-10-23 16:06:52,497 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'messages': ['{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}', '{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}'], 'signature': None, 'op': 'BATCH'}, 'Node3') 2017-10-23 16:06:52,497 | DEBUG | node.py (1272) | unpackNodeMsg | Node1 processing a batch BATCH{'messages': ['{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}', 
'{"msg_type":"LEDGER_STATUS","op":"MESSAGE_REQUEST","params":{"ledgerId":1}}'], 'signature': None} 2017-10-23 16:06:52,497 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node3: MESSAGE_REQUEST{'params': {'ledgerId': 1}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:52,497 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'params': {'ledgerId': 1}, 'msg_type': 'LEDGER_STATUS', 'op': 'MESSAGE_REQUEST'}, 'Node3') 2017-10-23 16:06:52,497 | DEBUG | node.py (1286) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_REQUEST{'params': {'ledgerId': 1}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:52,497 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node3: MESSAGE_REQUEST{'params': {'ledgerId': 1}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:52,497 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'params': {'ledgerId': 1}, 'msg_type': 'LEDGER_STATUS', 'op': 'MESSAGE_REQUEST'}, 'Node3') 2017-10-23 16:06:52,497 | DEBUG | node.py (1286) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_REQUEST{'params': {'ledgerId': 1}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:52,498 | DEBUG | node.py (2608) | send | Node1 sending message MESSAGE_RESPONSE{'params': {'ledgerId': 1}, 'msg': LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS'} to 1 recipients: ['Node3'] 2017-10-23 16:06:52,498 | DEBUG | node.py (2608) | send | Node1 sending message MESSAGE_RESPONSE{'params': {'ledgerId': 1}, 'msg': LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS'} to 1 recipients: ['Node3'] 2017-10-23 16:06:52,498 | DEBUG | batched.py ( 100) | flushOutBoxes | Node1 batching 2 msgs to Node3 into one transmission 2017-10-23 16:06:52,498 | TRACE | batched.py ( 101) | flushOutBoxes | messages: deque([b'{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o","ppSeqNo":null,"txnSeqNo":7,"viewNo":null},"params":{"ledgerId":1},"op":"MESSAGE_RESPONSE"}', b'{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o","ppSeqNo":null,"txnSeqNo":7,"viewNo":null},"params":{"ledgerId":1},"op":"MESSAGE_RESPONSE"}']) 2017-10-23 16:06:52,499 | TRACE | batched.py ( 110) | flushOutBoxes | Node1 sending payload to Node3: b'{"messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":1,\\"merkleRoot\\":\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":7,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":1,\\"merkleRoot\\":\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":7,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_RESPONSE\\"}"],"signature":null,"op":"BATCH"}' 2017-10-23 16:06:52,503 | DEBUG | zstack.py ( 723) | transmit | Node1 transmitting message 
b'{"messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":1,\\"merkleRoot\\":\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":7,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":1,\\"merkleRoot\\":\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":7,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_RESPONSE\\"}"],"signature":null,"op":"BATCH"}' to Node3 2017-10-23 16:06:52,503 | TRACE | zstack.py ( 479) | _receiveFromListener | Node1 got 1 messages through listener 2017-10-23 16:06:52,504 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node2: VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1'} 2017-10-23 16:06:52,504 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1', 'op': 'VIEW_CHANGE_DONE'}, 'Node2') 2017-10-23 16:06:52,504 | DEBUG | node.py (1286) | postToNodeInBox | Node1 appending to nodeInbox VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1'} 2017-10-23 16:06:52,505 | DEBUG | node.py (1205) | sendToElector | Node1 sending message to elector: (VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1'}, 'Node2') 2017-10-23 16:06:52,505 | DEBUG | primary_selector.py ( 106) | _processViewChangeDoneMessage | Node1's primary selector started processing of ViewChangeDone msg from Node2 : VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1'} 2017-10-23 16:06:52,505 | DEBUG | primary_selector.py ( 196) | _hasViewChangeQuorum | Node1 got view change quorum (2 >= 2) 2017-10-23 16:06:52,505 | DEBUG | primary_selector.py ( 215) | has_view_change_from_primary | Node1 received ViewChangeDone from primary Node1 2017-10-23 16:06:52,505 | DEBUG | primary_selector.py ( 238) | has_sufficient_same_view_change_done_messages | Node1 found acceptable primary Node1 and ledger info ((0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'), (1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'), (2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF')) 2017-10-23 16:06:52,506 | DEBUG | primary_selector.py ( 281) | _startSelection | Node1 starting selection 2017-10-23 16:06:52,506 | DISPLAY | primary_selector.py ( 291) | _startSelection | PRIMARY SELECTION: Node1:0 selected primary Node1:0 for instance 0 (view 0) 2017-10-23 16:06:52,506 | INFO | node.py ( 510) | start_participating | Node1 started participating 2017-10-23 16:06:52,506 | DEBUG | replica.py ( 408) | primaryName | Node1:0 setting primaryName for view no 0 to: Node1:0 2017-10-23 16:06:52,506 | DEBUG | 
replica.py (1694) | _gc | Node1:0 cleaning up till (0, 0) 2017-10-23 16:06:52,506 | DEBUG | replica.py (1709) | _gc | Node1:0 found 0 3-phase keys to clean 2017-10-23 16:06:52,506 | DEBUG | replica.py (1711) | _gc | Node1:0 found 0 request keys to clean 2017-10-23 16:06:52,506 | DEBUG | replica.py ( 313) | h | Node1:0 set watermarks as 0 300 2017-10-23 16:06:52,507 | DISPLAY | primary_selector.py ( 315) | _startSelection | VIEW CHANGE: Node1:0 declares view change 0 as completed for instance 0, new primary is Node1:0, ledger info is [(0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'), (1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'), (2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF')] 2017-10-23 16:06:52,507 | DISPLAY | primary_selector.py ( 291) | _startSelection | PRIMARY SELECTION: Node1:1 selected primary Node2:1 for instance 1 (view 0) 2017-10-23 16:06:52,507 | DEBUG | replica.py ( 408) | primaryName | Node1:1 setting primaryName for view no 0 to: Node2:1 2017-10-23 16:06:52,507 | DEBUG | replica.py (1694) | _gc | Node1:1 cleaning up till (0, 0) 2017-10-23 16:06:52,507 | DEBUG | replica.py (1709) | _gc | Node1:1 found 0 3-phase keys to clean 2017-10-23 16:06:52,507 | DEBUG | replica.py (1711) | _gc | Node1:1 found 0 request keys to clean 2017-10-23 16:06:52,507 | DEBUG | replica.py ( 313) | h | Node1:1 set watermarks as 0 300 2017-10-23 16:06:52,507 | DEBUG | replica.py ( 514) | _setup_for_non_master | Node1:1 Setting last ordered for non-master as (0, 0) 2017-10-23 16:06:52,507 | DEBUG | replica.py ( 323) | last_ordered_3pc | Node1:1 set last ordered as (0, 0) 2017-10-23 16:06:52,508 | DISPLAY | primary_selector.py ( 315) | _startSelection | VIEW CHANGE: Node1:1 declares view change 0 as completed for instance 1, new primary is Node2:1, ledger info is [(0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'), (1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'), (2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF')] 2017-10-23 16:06:52,519 | TRACE | zstack.py ( 479) | _receiveFromListener | Node1 got 1 messages through listener 2017-10-23 16:06:52,520 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node3: BATCH{'messages': ['{"msg_type":"LEDGER_STATUS","op":"MESSAGE_RESPONSE","params":{"ledgerId":1},"msg":{"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o","ppSeqNo":null,"txnSeqNo":7,"viewNo":null}}', '{"msg_type":"LEDGER_STATUS","op":"MESSAGE_RESPONSE","params":{"ledgerId":1},"msg":{"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o","ppSeqNo":null,"txnSeqNo":7,"viewNo":null}}'], 'signature': None} 2017-10-23 16:06:52,520 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'messages': ['{"msg_type":"LEDGER_STATUS","op":"MESSAGE_RESPONSE","params":{"ledgerId":1},"msg":{"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o","ppSeqNo":null,"txnSeqNo":7,"viewNo":null}}', '{"msg_type":"LEDGER_STATUS","op":"MESSAGE_RESPONSE","params":{"ledgerId":1},"msg":{"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o","ppSeqNo":null,"txnSeqNo":7,"viewNo":null}}'], 'signature': None, 'op': 'BATCH'}, 'Node3') 2017-10-23 16:06:52,520 | DEBUG | node.py (1272) | unpackNodeMsg | Node1 processing a batch BATCH{'messages': ['{"msg_type":"LEDGER_STATUS","op":"MESSAGE_RESPONSE","params":{"ledgerId":1},"msg":{"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o","ppSeqNo":null,"txnSeqNo":7,"viewNo":null}}', 
'{"msg_type":"LEDGER_STATUS","op":"MESSAGE_RESPONSE","params":{"ledgerId":1},"msg":{"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o","ppSeqNo":null,"txnSeqNo":7,"viewNo":null}}'], 'signature': None} 2017-10-23 16:06:52,520 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node3: MESSAGE_RESPONSE{'params': {'ledgerId': 1}, 'msg': {'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:52,520 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'params': {'ledgerId': 1}, 'msg': {'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS', 'op': 'MESSAGE_RESPONSE'}, 'Node3') 2017-10-23 16:06:52,520 | DEBUG | node.py (1286) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_RESPONSE{'params': {'ledgerId': 1}, 'msg': {'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:52,520 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node3: MESSAGE_RESPONSE{'params': {'ledgerId': 1}, 'msg': {'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:52,520 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'params': {'ledgerId': 1}, 'msg': {'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS', 'op': 'MESSAGE_RESPONSE'}, 'Node3') 2017-10-23 16:06:52,521 | DEBUG | node.py (1286) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_RESPONSE{'params': {'ledgerId': 1}, 'msg': {'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:52,521 | DEBUG | ledger_manager.py ( 246) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None} from Node3 2017-10-23 16:06:52,521 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 1 of size 7 with 7 2017-10-23 16:06:52,521 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 1 of size 7 with 7 2017-10-23 16:06:52,521 | DEBUG | ledger_manager.py ( 246) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None} from Node3 2017-10-23 16:06:52,521 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 1 of size 7 with 7 2017-10-23 16:06:52,522 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 1 of size 7 with 7 2017-10-23 16:06:52,533 | TRACE | zstack.py ( 479) | _receiveFromListener | Node1 got 1 messages through listener 2017-10-23 16:06:52,534 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node2: BATCH{'messages': 
['{"params":{"ledgerId":1},"msg":{"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o","ppSeqNo":null,"txnSeqNo":7,"viewNo":null},"op":"MESSAGE_RESPONSE","msg_type":"LEDGER_STATUS"}', '{"params":{"ledgerId":1},"msg":{"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o","ppSeqNo":null,"txnSeqNo":7,"viewNo":null},"op":"MESSAGE_RESPONSE","msg_type":"LEDGER_STATUS"}'], 'signature': None} 2017-10-23 16:06:52,534 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'messages': ['{"params":{"ledgerId":1},"msg":{"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o","ppSeqNo":null,"txnSeqNo":7,"viewNo":null},"op":"MESSAGE_RESPONSE","msg_type":"LEDGER_STATUS"}', '{"params":{"ledgerId":1},"msg":{"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o","ppSeqNo":null,"txnSeqNo":7,"viewNo":null},"op":"MESSAGE_RESPONSE","msg_type":"LEDGER_STATUS"}'], 'signature': None, 'op': 'BATCH'}, 'Node2') 2017-10-23 16:06:52,534 | DEBUG | node.py (1272) | unpackNodeMsg | Node1 processing a batch BATCH{'messages': ['{"params":{"ledgerId":1},"msg":{"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o","ppSeqNo":null,"txnSeqNo":7,"viewNo":null},"op":"MESSAGE_RESPONSE","msg_type":"LEDGER_STATUS"}', '{"params":{"ledgerId":1},"msg":{"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o","ppSeqNo":null,"txnSeqNo":7,"viewNo":null},"op":"MESSAGE_RESPONSE","msg_type":"LEDGER_STATUS"}'], 'signature': None} 2017-10-23 16:06:52,534 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node2: MESSAGE_RESPONSE{'params': {'ledgerId': 1}, 'msg': {'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:52,534 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'msg': {'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None}, 'params': {'ledgerId': 1}, 'op': 'MESSAGE_RESPONSE'}, 'Node2') 2017-10-23 16:06:52,534 | DEBUG | node.py (1286) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_RESPONSE{'params': {'ledgerId': 1}, 'msg': {'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:52,535 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node2: MESSAGE_RESPONSE{'params': {'ledgerId': 1}, 'msg': {'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:52,535 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'msg': {'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None}, 'params': {'ledgerId': 1}, 'op': 'MESSAGE_RESPONSE'}, 'Node2') 2017-10-23 16:06:52,535 | DEBUG | node.py (1286) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_RESPONSE{'params': {'ledgerId': 1}, 'msg': {'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:52,535 | DEBUG | ledger_manager.py ( 246) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 
7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None} from Node2 2017-10-23 16:06:52,535 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 1 of size 7 with 7 2017-10-23 16:06:52,535 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 1 of size 7 with 7 2017-10-23 16:06:52,536 | DEBUG | ledger_manager.py ( 312) | processLedgerStatus | Node1 found out from {'Node3', 'Node2'} that its ledger of type 1 is latest 2017-10-23 16:06:52,536 | DEBUG | ledger_manager.py ( 246) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None} from Node2 2017-10-23 16:06:52,536 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 1 of size 7 with 7 2017-10-23 16:06:52,536 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 1 of size 7 with 7 2017-10-23 16:06:52,536 | DEBUG | ledger_manager.py ( 312) | processLedgerStatus | Node1 found out from {'Node3', 'Node2'} that its ledger of type 1 is latest 2017-10-23 16:06:52,559 | TRACE | zstack.py ( 479) | _receiveFromListener | Node1 got 1 messages through listener 2017-10-23 16:06:52,560 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node3: VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1'} 2017-10-23 16:06:52,560 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'viewNo': 0, 'op': 'VIEW_CHANGE_DONE', 'name': 'Node1', 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']]}, 'Node3') 2017-10-23 16:06:52,560 | DEBUG | node.py (1286) | postToNodeInBox | Node1 appending to nodeInbox VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1'} 2017-10-23 16:06:52,560 | DEBUG | node.py (1205) | sendToElector | Node1 sending message to elector: (VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1'}, 'Node3') 2017-10-23 16:06:52,560 | DEBUG | primary_selector.py ( 106) | _processViewChangeDoneMessage | Node1's primary selector started processing of ViewChangeDone msg from Node3 : VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1'} 2017-10-23 16:06:52,561 | DEBUG | message_processor.py ( 29) | discard | Node1 discarding message VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1'} because it already decided primary which is Node1:0 2017-10-23 16:06:53,769 | TRACE | zstack.py ( 479) | _receiveFromListener | Node1 got 1 messages through listener 
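The entries above show the view change converging on this node: Node1 collects VIEW_CHANGE_DONE messages, reports "got view change quorum (2 >= 2)", confirms it holds a ViewChangeDone from the proposed primary, selects Node1:0 as primary for instance 0 and Node2:1 for instance 1, and afterwards discards any further VIEW_CHANGE_DONE copies "because it already decided primary". The sketch below is a minimal illustration of that bookkeeping only, not the indy-plenum implementation; the class name, field names and the externally supplied quorum threshold are assumptions (the log shows only that the threshold evaluated to 2 for this 4-node pool).

    # Illustrative sketch of the VIEW_CHANGE_DONE bookkeeping seen above
    # (hypothetical names; not plenum's PrimarySelector).
    from collections import Counter

    class ViewChangeDoneTracker:
        def __init__(self, quorum):
            self.quorum = quorum        # agreement threshold; evaluates to 2 in this log
            self.votes = {}             # sender -> (proposed primary, ledger info)
            self.decided_primary = None

        def on_view_change_done(self, sender, primary_name, ledger_info):
            if self.decided_primary is not None:
                # mirrors the "discarding message ... because it already decided primary" entries
                return "discarded: primary already decided ({})".format(self.decided_primary)
            self.votes[sender] = (primary_name, tuple(map(tuple, ledger_info)))
            (primary, ledgers), count = Counter(self.votes.values()).most_common(1)[0]
            # the proposed primary itself must have sent a matching ViewChangeDone
            # before selection starts ("received ViewChangeDone from primary ...")
            has_primary_vote = self.votes.get(primary, (None, None))[0] == primary
            if count >= self.quorum and has_primary_vote:
                self.decided_primary = primary
                return "selected primary {}".format(primary)
            return "waiting for quorum"

    tracker = ViewChangeDoneTracker(quorum=2)
    info = [(0, 4, "root0"), (1, 7, "root1"), (2, 1, "root2")]  # (ledgerId, size, merkle root)
    tracker.on_view_change_done("Node1", "Node1", info)  # own message
    tracker.on_view_change_done("Node2", "Node1", info)  # -> "selected primary Node1"
    tracker.on_view_change_done("Node3", "Node1", info)  # -> "discarded: primary already decided (Node1)"

Read this way, the quorum of 2 in the log is apparently Node1's own message plus Node2's, which is why the later copies relayed by Node3 and Node4 are dropped.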
2017-10-23 16:06:53,770 | DEBUG | zstack.py ( 663) | handlePingPong | Node1 got ping from Node4 2017-10-23 16:06:53,770 | DEBUG | zstack.py ( 645) | sendPingPong | Node1 ponged Node4 2017-10-23 16:06:53,770 | DEBUG | zstack.py ( 723) | transmit | Node1 transmitting message b'po' to Node4 2017-10-23 16:06:53,770 | TRACE | batched.py ( 96) | flushOutBoxes | Node1 sending msg b'po' to Node4 2017-10-23 16:06:53,829 | TRACE | zstack.py ( 479) | _receiveFromListener | Node1 got 1 messages through listener 2017-10-23 16:06:53,829 | DEBUG | zstack.py ( 669) | handlePingPong | Node1 got pong from Node4 2017-10-23 16:06:53,830 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node4: BATCH{'messages': ['{"op":"MESSAGE_RESPONSE","params":{"ledgerId":2},"msg_type":"LEDGER_STATUS","msg":{"ledgerId":2,"merkleRoot":"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF","ppSeqNo":null,"txnSeqNo":1,"viewNo":null}}', '{"op":"MESSAGE_RESPONSE","params":{"ledgerId":2},"msg_type":"LEDGER_STATUS","msg":{"ledgerId":2,"merkleRoot":"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF","ppSeqNo":null,"txnSeqNo":1,"viewNo":null}}', '{"op":"MESSAGE_RESPONSE","params":{"ledgerId":1},"msg_type":"LEDGER_STATUS","msg":{"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o","ppSeqNo":null,"txnSeqNo":7,"viewNo":null}}', '{"op":"MESSAGE_RESPONSE","params":{"ledgerId":1},"msg_type":"LEDGER_STATUS","msg":{"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o","ppSeqNo":null,"txnSeqNo":7,"viewNo":null}}'], 'signature': None} 2017-10-23 16:06:53,830 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'messages': ['{"op":"MESSAGE_RESPONSE","params":{"ledgerId":2},"msg_type":"LEDGER_STATUS","msg":{"ledgerId":2,"merkleRoot":"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF","ppSeqNo":null,"txnSeqNo":1,"viewNo":null}}', '{"op":"MESSAGE_RESPONSE","params":{"ledgerId":2},"msg_type":"LEDGER_STATUS","msg":{"ledgerId":2,"merkleRoot":"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF","ppSeqNo":null,"txnSeqNo":1,"viewNo":null}}', '{"op":"MESSAGE_RESPONSE","params":{"ledgerId":1},"msg_type":"LEDGER_STATUS","msg":{"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o","ppSeqNo":null,"txnSeqNo":7,"viewNo":null}}', '{"op":"MESSAGE_RESPONSE","params":{"ledgerId":1},"msg_type":"LEDGER_STATUS","msg":{"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o","ppSeqNo":null,"txnSeqNo":7,"viewNo":null}}'], 'signature': None, 'op': 'BATCH'}, 'Node4') 2017-10-23 16:06:53,830 | DEBUG | node.py (1272) | unpackNodeMsg | Node1 processing a batch BATCH{'messages': ['{"op":"MESSAGE_RESPONSE","params":{"ledgerId":2},"msg_type":"LEDGER_STATUS","msg":{"ledgerId":2,"merkleRoot":"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF","ppSeqNo":null,"txnSeqNo":1,"viewNo":null}}', '{"op":"MESSAGE_RESPONSE","params":{"ledgerId":2},"msg_type":"LEDGER_STATUS","msg":{"ledgerId":2,"merkleRoot":"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF","ppSeqNo":null,"txnSeqNo":1,"viewNo":null}}', '{"op":"MESSAGE_RESPONSE","params":{"ledgerId":1},"msg_type":"LEDGER_STATUS","msg":{"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o","ppSeqNo":null,"txnSeqNo":7,"viewNo":null}}', '{"op":"MESSAGE_RESPONSE","params":{"ledgerId":1},"msg_type":"LEDGER_STATUS","msg":{"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o","ppSeqNo":null,"txnSeqNo":7,"viewNo":null}}'], 'signature': None} 2017-10-23 16:06:53,830 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node 
message from Node4: MESSAGE_RESPONSE{'params': {'ledgerId': 2}, 'msg': {'ppSeqNo': None, 'txnSeqNo': 1, 'ledgerId': 2, 'merkleRoot': 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:53,830 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'msg': {'ppSeqNo': None, 'txnSeqNo': 1, 'ledgerId': 2, 'merkleRoot': 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF', 'viewNo': None}, 'params': {'ledgerId': 2}, 'op': 'MESSAGE_RESPONSE'}, 'Node4') 2017-10-23 16:06:53,830 | DEBUG | node.py (1286) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_RESPONSE{'params': {'ledgerId': 2}, 'msg': {'ppSeqNo': None, 'txnSeqNo': 1, 'ledgerId': 2, 'merkleRoot': 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:53,830 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node4: MESSAGE_RESPONSE{'params': {'ledgerId': 2}, 'msg': {'ppSeqNo': None, 'txnSeqNo': 1, 'ledgerId': 2, 'merkleRoot': 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:53,831 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'msg': {'ppSeqNo': None, 'txnSeqNo': 1, 'ledgerId': 2, 'merkleRoot': 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF', 'viewNo': None}, 'params': {'ledgerId': 2}, 'op': 'MESSAGE_RESPONSE'}, 'Node4') 2017-10-23 16:06:53,831 | DEBUG | node.py (1286) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_RESPONSE{'params': {'ledgerId': 2}, 'msg': {'ppSeqNo': None, 'txnSeqNo': 1, 'ledgerId': 2, 'merkleRoot': 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:53,831 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node4: MESSAGE_RESPONSE{'params': {'ledgerId': 1}, 'msg': {'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:53,831 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'msg': {'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None}, 'params': {'ledgerId': 1}, 'op': 'MESSAGE_RESPONSE'}, 'Node4') 2017-10-23 16:06:53,831 | DEBUG | node.py (1286) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_RESPONSE{'params': {'ledgerId': 1}, 'msg': {'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:53,831 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node4: MESSAGE_RESPONSE{'params': {'ledgerId': 1}, 'msg': {'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:53,831 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'msg': {'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None}, 'params': {'ledgerId': 1}, 'op': 'MESSAGE_RESPONSE'}, 'Node4') 2017-10-23 16:06:53,831 | DEBUG | node.py (1286) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_RESPONSE{'params': {'ledgerId': 1}, 'msg': {'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 
'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:53,831 | DEBUG | ledger_manager.py ( 246) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 1, 'ledgerId': 2, 'merkleRoot': 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF', 'viewNo': None} from Node4 2017-10-23 16:06:53,832 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 2 of size 1 with 1 2017-10-23 16:06:53,832 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 2 of size 1 with 1 2017-10-23 16:06:53,832 | DEBUG | ledger_manager.py ( 312) | processLedgerStatus | Node1 found out from {'Node4', 'Node2'} that its ledger of type 2 is latest 2017-10-23 16:06:53,832 | DEBUG | ledger_manager.py ( 246) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 1, 'ledgerId': 2, 'merkleRoot': 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF', 'viewNo': None} from Node4 2017-10-23 16:06:53,832 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 2 of size 1 with 1 2017-10-23 16:06:53,832 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 2 of size 1 with 1 2017-10-23 16:06:53,832 | DEBUG | ledger_manager.py ( 312) | processLedgerStatus | Node1 found out from {'Node4', 'Node2'} that its ledger of type 2 is latest 2017-10-23 16:06:53,833 | DEBUG | ledger_manager.py ( 246) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None} from Node4 2017-10-23 16:06:53,833 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 1 of size 7 with 7 2017-10-23 16:06:53,833 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 1 of size 7 with 7 2017-10-23 16:06:53,833 | DEBUG | ledger_manager.py ( 312) | processLedgerStatus | Node1 found out from {'Node3', 'Node4', 'Node2'} that its ledger of type 1 is latest 2017-10-23 16:06:53,833 | DEBUG | ledger_manager.py ( 246) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None} from Node4 2017-10-23 16:06:53,833 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 1 of size 7 with 7 2017-10-23 16:06:53,833 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 1 of size 7 with 7 2017-10-23 16:06:53,833 | DEBUG | ledger_manager.py ( 312) | processLedgerStatus | Node1 found out from {'Node3', 'Node4', 'Node2'} that its ledger of type 1 is latest 2017-10-23 16:06:53,834 | DEBUG | keep_in_touch.py ( 68) | conns | Node1's connections changed from {'Node3', 'Node2'} to {'Node3', 'Node4', 'Node2'} 2017-10-23 16:06:53,834 | INFO | keep_in_touch.py ( 98) | _connsChanged | CONNECTION: Node1 now connected to Node4 2017-10-23 16:06:53,834 | DEBUG | motor.py ( 34) | set_status | Node1 changing status from started_hungry to started 2017-10-23 16:06:53,834 | DEBUG | node.py ( 941) | checkInstances | Node1 choosing to start election on the basis of count 4 and nodes {'Node3', 'Node4', 'Node2'} 2017-10-23 16:06:53,835 | DEBUG | node.py ( 902) | send_current_state_to_lagging_node | Node1 sending current state CURRENT_STATE{'primary': [VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': ((0, 4, 
'5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'), (1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'), (2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF')), 'name': 'Node1'}], 'viewNo': 0} to lagged node Node4 2017-10-23 16:06:53,835 | DEBUG | node.py (2608) | send | Node1 sending message CURRENT_STATE{'primary': [VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': ((0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'), (1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'), (2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF')), 'name': 'Node1'}], 'viewNo': 0} to 1 recipients: ['Node4'] 2017-10-23 16:06:53,835 | DEBUG | node.py (2608) | send | Node1 sending message LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 4, 'ledgerId': 0, 'merkleRoot': '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3', 'viewNo': None} to 1 recipients: ['Node4'] 2017-10-23 16:06:53,836 | DEBUG | node.py (2608) | send | Node1 sending message LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None} to 1 recipients: ['Node4'] 2017-10-23 16:06:53,836 | DEBUG | node.py (2608) | send | Node1 sending message LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 1, 'ledgerId': 2, 'merkleRoot': 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF', 'viewNo': None} to 1 recipients: ['Node4'] 2017-10-23 16:06:53,836 | DEBUG | batched.py ( 100) | flushOutBoxes | Node1 batching 4 msgs to Node4 into one transmission 2017-10-23 16:06:53,837 | TRACE | batched.py ( 101) | flushOutBoxes | messages: deque([b'{"primary":[{"ledgerInfo":[[0,4,"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3"],[1,7,"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o"],[2,1,"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF"]],"name":"Node1","viewNo":0}],"viewNo":0,"op":"CURRENT_STATE"}', b'{"txnSeqNo":4,"op":"LEDGER_STATUS","ppSeqNo":null,"viewNo":null,"ledgerId":0,"merkleRoot":"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3"}', b'{"txnSeqNo":7,"op":"LEDGER_STATUS","ppSeqNo":null,"viewNo":null,"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o"}', b'{"txnSeqNo":1,"op":"LEDGER_STATUS","ppSeqNo":null,"viewNo":null,"ledgerId":2,"merkleRoot":"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF"}']) 2017-10-23 16:06:53,837 | TRACE | batched.py ( 110) | flushOutBoxes | Node1 sending payload to Node4: b'{"messages":["{\\"primary\\":[{\\"ledgerInfo\\":[[0,4,\\"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3\\"],[1,7,\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\"],[2,1,\\"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF\\"]],\\"name\\":\\"Node1\\",\\"viewNo\\":0}],\\"viewNo\\":0,\\"op\\":\\"CURRENT_STATE\\"}","{\\"txnSeqNo\\":4,\\"op\\":\\"LEDGER_STATUS\\",\\"ppSeqNo\\":null,\\"viewNo\\":null,\\"ledgerId\\":0,\\"merkleRoot\\":\\"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3\\"}","{\\"txnSeqNo\\":7,\\"op\\":\\"LEDGER_STATUS\\",\\"ppSeqNo\\":null,\\"viewNo\\":null,\\"ledgerId\\":1,\\"merkleRoot\\":\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\"}","{\\"txnSeqNo\\":1,\\"op\\":\\"LEDGER_STATUS\\",\\"ppSeqNo\\":null,\\"viewNo\\":null,\\"ledgerId\\":2,\\"merkleRoot\\":\\"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF\\"}"],"signature":null,"op":"BATCH"}' 2017-10-23 16:06:53,837 | DEBUG | zstack.py ( 723) | transmit | Node1 transmitting message 
b'{"messages":["{\\"primary\\":[{\\"ledgerInfo\\":[[0,4,\\"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3\\"],[1,7,\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\"],[2,1,\\"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF\\"]],\\"name\\":\\"Node1\\",\\"viewNo\\":0}],\\"viewNo\\":0,\\"op\\":\\"CURRENT_STATE\\"}","{\\"txnSeqNo\\":4,\\"op\\":\\"LEDGER_STATUS\\",\\"ppSeqNo\\":null,\\"viewNo\\":null,\\"ledgerId\\":0,\\"merkleRoot\\":\\"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3\\"}","{\\"txnSeqNo\\":7,\\"op\\":\\"LEDGER_STATUS\\",\\"ppSeqNo\\":null,\\"viewNo\\":null,\\"ledgerId\\":1,\\"merkleRoot\\":\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\"}","{\\"txnSeqNo\\":1,\\"op\\":\\"LEDGER_STATUS\\",\\"ppSeqNo\\":null,\\"viewNo\\":null,\\"ledgerId\\":2,\\"merkleRoot\\":\\"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF\\"}"],"signature":null,"op":"BATCH"}' to Node4 2017-10-23 16:06:53,863 | TRACE | zstack.py ( 479) | _receiveFromListener | Node1 got 1 messages through listener 2017-10-23 16:06:53,864 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node4: BATCH{'messages': ['{"viewNo":0,"primary":[{"ledgerInfo":[[0,4,"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3"],[1,7,"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o"],[2,1,"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF"]],"name":"Node1","viewNo":0}],"op":"CURRENT_STATE"}', '{"merkleRoot":"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3","txnSeqNo":4,"viewNo":null,"op":"LEDGER_STATUS","ledgerId":0,"ppSeqNo":null}', '{"op":"MESSAGE_REQUEST","params":{"ledgerId":2},"msg_type":"LEDGER_STATUS"}', '{"op":"MESSAGE_REQUEST","params":{"ledgerId":2},"msg_type":"LEDGER_STATUS"}', '{"op":"MESSAGE_REQUEST","params":{"ledgerId":1},"msg_type":"LEDGER_STATUS"}', '{"op":"MESSAGE_REQUEST","params":{"ledgerId":1},"msg_type":"LEDGER_STATUS"}', '{"viewNo":0,"ledgerInfo":[[0,4,"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3"],[1,7,"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o"],[2,1,"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF"]],"name":"Node1","op":"VIEW_CHANGE_DONE"}'], 'signature': None} 2017-10-23 16:06:53,864 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'messages': ['{"viewNo":0,"primary":[{"ledgerInfo":[[0,4,"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3"],[1,7,"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o"],[2,1,"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF"]],"name":"Node1","viewNo":0}],"op":"CURRENT_STATE"}', '{"merkleRoot":"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3","txnSeqNo":4,"viewNo":null,"op":"LEDGER_STATUS","ledgerId":0,"ppSeqNo":null}', '{"op":"MESSAGE_REQUEST","params":{"ledgerId":2},"msg_type":"LEDGER_STATUS"}', '{"op":"MESSAGE_REQUEST","params":{"ledgerId":2},"msg_type":"LEDGER_STATUS"}', '{"op":"MESSAGE_REQUEST","params":{"ledgerId":1},"msg_type":"LEDGER_STATUS"}', '{"op":"MESSAGE_REQUEST","params":{"ledgerId":1},"msg_type":"LEDGER_STATUS"}', '{"viewNo":0,"ledgerInfo":[[0,4,"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3"],[1,7,"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o"],[2,1,"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF"]],"name":"Node1","op":"VIEW_CHANGE_DONE"}'], 'signature': None, 'op': 'BATCH'}, 'Node4') 2017-10-23 16:06:53,864 | DEBUG | node.py (1272) | unpackNodeMsg | Node1 processing a batch BATCH{'messages': ['{"viewNo":0,"primary":[{"ledgerInfo":[[0,4,"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3"],[1,7,"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o"],[2,1,"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF"]],"name":"Node1","viewNo":0}],"op":"CURRENT_STATE"}', 
'{"merkleRoot":"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3","txnSeqNo":4,"viewNo":null,"op":"LEDGER_STATUS","ledgerId":0,"ppSeqNo":null}', '{"op":"MESSAGE_REQUEST","params":{"ledgerId":2},"msg_type":"LEDGER_STATUS"}', '{"op":"MESSAGE_REQUEST","params":{"ledgerId":2},"msg_type":"LEDGER_STATUS"}', '{"op":"MESSAGE_REQUEST","params":{"ledgerId":1},"msg_type":"LEDGER_STATUS"}', '{"op":"MESSAGE_REQUEST","params":{"ledgerId":1},"msg_type":"LEDGER_STATUS"}', '{"viewNo":0,"ledgerInfo":[[0,4,"5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3"],[1,7,"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o"],[2,1,"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF"]],"name":"Node1","op":"VIEW_CHANGE_DONE"}'], 'signature': None} 2017-10-23 16:06:53,864 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node4: CURRENT_STATE{'primary': [{'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1'}], 'viewNo': 0} 2017-10-23 16:06:53,864 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'primary': [{'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1'}], 'viewNo': 0, 'op': 'CURRENT_STATE'}, 'Node4') 2017-10-23 16:06:53,864 | DEBUG | node.py (1286) | postToNodeInBox | Node1 appending to nodeInbox CURRENT_STATE{'primary': [{'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1'}], 'viewNo': 0} 2017-10-23 16:06:53,865 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node4: LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 4, 'ledgerId': 0, 'merkleRoot': '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3', 'viewNo': None} 2017-10-23 16:06:53,865 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'txnSeqNo': 4, 'op': 'LEDGER_STATUS', 'ppSeqNo': None, 'viewNo': None, 'ledgerId': 0, 'merkleRoot': '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'}, 'Node4') 2017-10-23 16:06:53,865 | DEBUG | node.py (1286) | postToNodeInBox | Node1 appending to nodeInbox LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 4, 'ledgerId': 0, 'merkleRoot': '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3', 'viewNo': None} 2017-10-23 16:06:53,865 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node4: MESSAGE_REQUEST{'params': {'ledgerId': 2}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:53,865 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 2}, 'op': 'MESSAGE_REQUEST'}, 'Node4') 2017-10-23 16:06:53,865 | DEBUG | node.py (1286) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_REQUEST{'params': {'ledgerId': 2}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:53,865 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node4: MESSAGE_REQUEST{'params': {'ledgerId': 2}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:53,866 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 2}, 'op': 'MESSAGE_REQUEST'}, 'Node4') 2017-10-23 16:06:53,866 | DEBUG | node.py (1286) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_REQUEST{'params': 
{'ledgerId': 2}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:53,866 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node4: MESSAGE_REQUEST{'params': {'ledgerId': 1}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:53,866 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 1}, 'op': 'MESSAGE_REQUEST'}, 'Node4') 2017-10-23 16:06:53,866 | DEBUG | node.py (1286) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_REQUEST{'params': {'ledgerId': 1}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:53,866 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node4: MESSAGE_REQUEST{'params': {'ledgerId': 1}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:53,866 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'msg_type': 'LEDGER_STATUS', 'params': {'ledgerId': 1}, 'op': 'MESSAGE_REQUEST'}, 'Node4') 2017-10-23 16:06:53,866 | DEBUG | node.py (1286) | postToNodeInBox | Node1 appending to nodeInbox MESSAGE_REQUEST{'params': {'ledgerId': 1}, 'msg_type': 'LEDGER_STATUS'} 2017-10-23 16:06:53,867 | DEBUG | node.py (1257) | validateNodeMsg | Node1 received node message from Node4: VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1'} 2017-10-23 16:06:53,867 | DEBUG | node.py (1219) | handleOneNodeMsg | Node1 msg validated ({'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1', 'op': 'VIEW_CHANGE_DONE'}, 'Node4') 2017-10-23 16:06:53,867 | DEBUG | node.py (1286) | postToNodeInBox | Node1 appending to nodeInbox VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1'} 2017-10-23 16:06:53,867 | DEBUG | node.py ( 907) | process_current_state_message | Node1 processing current state CURRENT_STATE{'primary': [{'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1'}], 'viewNo': 0} from Node4 2017-10-23 16:06:53,867 | DEBUG | node.py (1205) | sendToElector | Node1 sending message to elector: (VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1'}, 'Node4') 2017-10-23 16:06:53,867 | DEBUG | ledger_manager.py ( 246) | processLedgerStatus | Node1 received ledger status: LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 4, 'ledgerId': 0, 'merkleRoot': '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3', 'viewNo': None} from Node4 2017-10-23 16:06:53,868 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 0 of size 4 with 4 2017-10-23 16:06:53,868 | DEBUG | ledger_manager.py ( 984) | _compareLedger | Node1 comparing its ledger 0 of size 4 with 4 2017-10-23 16:06:53,868 | DEBUG | node.py (2608) | send | Node1 sending message MESSAGE_RESPONSE{'params': {'ledgerId': 2}, 'msg': LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 1, 'ledgerId': 2, 
'merkleRoot': 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS'} to 1 recipients: ['Node4'] 2017-10-23 16:06:53,868 | DEBUG | node.py (2608) | send | Node1 sending message MESSAGE_RESPONSE{'params': {'ledgerId': 2}, 'msg': LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 1, 'ledgerId': 2, 'merkleRoot': 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS'} to 1 recipients: ['Node4'] 2017-10-23 16:06:53,869 | DEBUG | node.py (2608) | send | Node1 sending message MESSAGE_RESPONSE{'params': {'ledgerId': 1}, 'msg': LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS'} to 1 recipients: ['Node4'] 2017-10-23 16:06:53,869 | DEBUG | node.py (2608) | send | Node1 sending message MESSAGE_RESPONSE{'params': {'ledgerId': 1}, 'msg': LEDGER_STATUS{'ppSeqNo': None, 'txnSeqNo': 7, 'ledgerId': 1, 'merkleRoot': '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o', 'viewNo': None}, 'msg_type': 'LEDGER_STATUS'} to 1 recipients: ['Node4'] 2017-10-23 16:06:53,869 | DEBUG | node.py (1205) | sendToElector | Node1 sending message to elector: (VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1'}, 'Node4') 2017-10-23 16:06:53,870 | DEBUG | primary_selector.py ( 106) | _processViewChangeDoneMessage | Node1's primary selector started processing of ViewChangeDone msg from Node4 : VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1'} 2017-10-23 16:06:53,870 | DEBUG | message_processor.py ( 29) | discard | Node1 discarding message VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1'} because it already decided primary which is Node1:0 2017-10-23 16:06:53,870 | DEBUG | primary_selector.py ( 106) | _processViewChangeDoneMessage | Node1's primary selector started processing of ViewChangeDone msg from Node4 : VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1'} 2017-10-23 16:06:53,870 | DEBUG | message_processor.py ( 29) | discard | Node1 discarding message VIEW_CHANGE_DONE{'viewNo': 0, 'ledgerInfo': [[0, 4, '5xizCdcGJoYwSK5swMP4BDasTxDbULANZozqM2M2uRo3'], [1, 7, '6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o'], [2, 1, 'J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF']], 'name': 'Node1'} because it already decided primary which is Node1:0 2017-10-23 16:06:53,870 | DEBUG | batched.py ( 100) | flushOutBoxes | Node1 batching 4 msgs to Node4 into one transmission 2017-10-23 16:06:53,870 | TRACE | batched.py ( 101) | flushOutBoxes | messages: deque([b'{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":2,"merkleRoot":"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF","ppSeqNo":null,"txnSeqNo":1,"viewNo":null},"params":{"ledgerId":2},"op":"MESSAGE_RESPONSE"}', 
b'{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":2,"merkleRoot":"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF","ppSeqNo":null,"txnSeqNo":1,"viewNo":null},"params":{"ledgerId":2},"op":"MESSAGE_RESPONSE"}', b'{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o","ppSeqNo":null,"txnSeqNo":7,"viewNo":null},"params":{"ledgerId":1},"op":"MESSAGE_RESPONSE"}', b'{"msg_type":"LEDGER_STATUS","msg":{"ledgerId":1,"merkleRoot":"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o","ppSeqNo":null,"txnSeqNo":7,"viewNo":null},"params":{"ledgerId":1},"op":"MESSAGE_RESPONSE"}']) 2017-10-23 16:06:53,870 | TRACE | batched.py ( 110) | flushOutBoxes | Node1 sending payload to Node4: b'{"messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":2,\\"merkleRoot\\":\\"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":1,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":2},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":2,\\"merkleRoot\\":\\"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":1,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":2},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":1,\\"merkleRoot\\":\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":7,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":1,\\"merkleRoot\\":\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":7,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_RESPONSE\\"}"],"signature":null,"op":"BATCH"}' 2017-10-23 16:06:53,871 | DEBUG | zstack.py ( 723) | transmit | Node1 transmitting message b'{"messages":["{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":2,\\"merkleRoot\\":\\"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":1,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":2},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":2,\\"merkleRoot\\":\\"J1HcLPgXFVNVgv8hhhD5Nbxg4taz8MEdD61eGZkkEJF\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":1,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":2},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":1,\\"merkleRoot\\":\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":7,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_RESPONSE\\"}","{\\"msg_type\\":\\"LEDGER_STATUS\\",\\"msg\\":{\\"ledgerId\\":1,\\"merkleRoot\\":\\"6jJH537sAma75cnCHqpB3x5YXuFd3mWNa3cST4Syhr3o\\",\\"ppSeqNo\\":null,\\"txnSeqNo\\":7,\\"viewNo\\":null},\\"params\\":{\\"ledgerId\\":1},\\"op\\":\\"MESSAGE_RESPONSE\\"}"],"signature":null,"op":"BATCH"}' to Node4 2017-10-23 16:06:55,209 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 4 2017-10-23 16:07:02,081 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 1 2017-10-23 16:07:02,082 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:07:02,082 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:07:02,082 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master 
throughput is not measurable. 2017-10-23 16:07:02,083 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:07:02,083 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:07:02,083 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:07:02,083 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 6 to run in 10 seconds 2017-10-23 16:07:07,202 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:07:07,202 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:07:07,202 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:07:07,203 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:07:12,083 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 6 2017-10-23 16:07:12,084 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:07:12,084 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:07:12,084 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:07:12,084 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:07:12,084 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:07:12,084 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:07:12,084 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 7 to run in 10 seconds 2017-10-23 16:07:22,093 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 7 2017-10-23 16:07:22,093 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:07:22,093 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:07:22,093 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:07:22,093 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
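From here the log settles into periodic housekeeping: checkPerformance repeats every 10 seconds, checkNodeRequestSpike every 60 seconds, and kit_zstack reconciles the node registry and schedules its next retry check roughly every 15 seconds. The snippet below is a generic sketch of that repeat-and-reschedule pattern under asyncio; it is not plenum's HasActionQueue API, and the printed action ids only imitate the ids visible in the log.

    # Generic repeat-and-reschedule sketch (hypothetical; not HasActionQueue).
    import asyncio
    import itertools

    _ids = itertools.count(1)  # stand-in for the action ids seen in the log

    async def start_repeating(action, seconds):
        # run `action` every `seconds` seconds, announcing each (re)schedule
        while True:
            print("scheduling action {} with id {} to run in {} seconds".format(
                action.__name__, next(_ids), seconds))
            await asyncio.sleep(seconds)
            action()

    def check_performance():
        print("checking its performance")

    def check_node_request_spike():
        print("checking its request amount")

    async def main():
        await asyncio.gather(
            start_repeating(check_performance, 10),
            start_repeating(check_node_request_spike, 60),
        )

    # asyncio.run(main())  # left commented out: the loops never terminate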
2017-10-23 16:07:22,094 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:07:22,094 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:07:22,094 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 8 to run in 10 seconds 2017-10-23 16:07:22,210 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:07:22,211 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:07:22,211 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:07:22,211 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:07:32,095 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 8 2017-10-23 16:07:32,095 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:07:32,095 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:07:32,095 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:07:32,095 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:07:32,095 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:07:32,096 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:07:32,096 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 9 to run in 10 seconds 2017-10-23 16:07:37,216 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:07:37,217 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:07:37,217 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:07:37,218 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:07:42,107 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 9 2017-10-23 16:07:42,108 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:07:42,108 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:07:42,108 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:07:42,108 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
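The checkPerformance block repeats every 10 seconds because the action re-schedules itself at the end of each run, and every pass applies the three monitor checks seen above: master throughput, per-request latency against a threshold, and the gap between master and backup average latencies. With no client traffic yet, all three pass and the node concludes its master replica is performing fine. A minimal sketch of such a self-rescheduling check follows; the names are hypothetical, not the node's actual action-queue API.

# Minimal sketch (hypothetical names) of a self-rescheduling
# performance check like the one producing the 10-second cycle above.
import itertools
import sched
import time

action_ids = itertools.count(1)
queue = sched.scheduler(time.monotonic, time.sleep)

def schedule(action, delay):
    action_id = next(action_ids)
    print(f"scheduling action {action.__name__} with id {action_id} "
          f"to run in {delay} seconds")
    queue.enter(delay, 1, action)

def check_performance():
    # The three monitor checks visible in the log; with no ordered
    # requests yet they all report that nothing is wrong.
    throughput_too_low = False       # master throughput not measurable
    latency_too_high = False         # below threshold for all requests
    avg_latency_gap_too_big = False  # master vs backups acceptable
    if not (throughput_too_low or latency_too_high or avg_latency_gap_too_big):
        print("master has higher performance than backups")
    schedule(check_performance, 10)  # re-arm, hence the repetition

schedule(check_performance, 10)
# queue.run() would start the loop; it is omitted so the sketch exits.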
2017-10-23 16:07:42,108 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:07:42,108 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:07:42,108 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 10 to run in 10 seconds 2017-10-23 16:07:52,083 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 2 2017-10-23 16:07:52,084 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:07:52,084 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:07:52,084 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 11 to run in 60 seconds 2017-10-23 16:07:52,084 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 1 2017-10-23 16:07:52,084 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:07:52,084 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 2 to run in 60 seconds 2017-10-23 16:07:52,109 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 10 2017-10-23 16:07:52,109 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:07:52,109 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:07:52,109 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:07:52,109 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
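The reconcileNodeReg entries that recur every 15 seconds come from the connection-maintenance pass: the node compares its configured node registry (name to host:port) with the remotes its ZMQ stack currently knows and logs a "matched remote" for each entry that lines up; a missing or mismatched remote would trigger a reconnect attempt before the next retry check. A rough sketch of that reconciliation, with illustrative names only:

# Rough sketch (illustrative names) of the registry reconciliation that
# logs "matched remote ... HA(host=..., port=...)" every 15 seconds.
from collections import namedtuple

HA = namedtuple("HA", "host port")

node_registry = {                     # expected peers for this pool
    "Node2": HA("10.0.0.3", 9703),
    "Node3": HA("10.0.0.4", 9705),
    "Node4": HA("10.0.0.5", 9707),
}

known_remotes = dict(node_registry)   # what the stack currently has

def reconcile_node_reg(registry, remotes):
    stale = {}
    for name, ha in remotes.items():
        if registry.get(name) == ha:
            print(f"matched remote {name} {ha}")
        else:
            stale[name] = ha          # would be reconnected before the next pass
    return stale

reconcile_node_reg(node_registry, known_remotes)
# A maintainConnections loop would call this, then check again in ~15 s.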
2017-10-23 16:07:52,109 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:07:52,109 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:07:52,110 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 12 to run in 10 seconds 2017-10-23 16:07:52,204 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 5 2017-10-23 16:07:52,205 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 13 to run in 60 seconds 2017-10-23 16:07:52,205 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action propose_view_change with id 3 2017-10-23 16:07:52,205 | DEBUG | throttler.py ( 31) | acquire | now: 956372.567580151, len(actionsLog): 0 2017-10-23 16:07:52,205 | DEBUG | throttler.py ( 34) | acquire | after trim, len(actionsLog): 0 2017-10-23 16:07:52,205 | DEBUG | throttler.py ( 39) | acquire | len(actionsLog) was 0, after append, len(actionsLog): 1 2017-10-23 16:07:52,206 | INFO | node.py (2048) | sendInstanceChange | VIEW CHANGE: Node1 sending an instance change with view_no 1 since Primary of master protocol instance disconnected 2017-10-23 16:07:52,206 | INFO | node.py (2051) | sendInstanceChange | MONITORING: Node1 metrics for monitor: Node1 Monitor metrics:: None Delta: 0.4 Lambda: 60 Omega: 5 instances started: [956312.441001567, 956312.441351805] ordered request counts: {0: 0, 1: 0} ordered request durations: {0: 0, 1: 0} master request latencies: {} client avg request latencies: [{}, {}] throughput: {0: 0, 1: 0} master throughput: None total requests: 0 avg backup throughput: None master throughput ratio: None 2017-10-23 16:07:52,206 | DEBUG | node.py (2608) | send | Node1 sending message INSTANCE_CHANGE{'viewNo': 1, 'reason': 26} to all recipients: ['Node3', 'Node2', 'Node4'] 2017-10-23 16:07:52,206 | DEBUG | node.py (1958) | do_view_change_if_possible | Node1 has no quorum for view 1 2017-10-23 16:07:52,206 | INFO | node.py (2108) | propose_view_change | Node1 sent view change since was disconnected from primary for too long 2017-10-23 16:07:52,206 | DEBUG | node.py (1958) | do_view_change_if_possible | Node1 has no quorum for view 1 2017-10-23 16:07:52,206 | DEBUG | zstack.py ( 723) | transmit | Node1 transmitting message b'{"viewNo":1,"reason":26,"op":"INSTANCE_CHANGE"}' to Node3 2017-10-23 16:07:52,206 | TRACE | batched.py ( 96) | flushOutBoxes | Node1 sending msg b'{"viewNo":1,"reason":26,"op":"INSTANCE_CHANGE"}' to Node3 2017-10-23 16:07:52,207 | DEBUG | zstack.py ( 723) | transmit | Node1 transmitting message b'{"viewNo":1,"reason":26,"op":"INSTANCE_CHANGE"}' to Node2 2017-10-23 16:07:52,207 | TRACE | batched.py ( 96) | flushOutBoxes | Node1 sending msg b'{"viewNo":1,"reason":26,"op":"INSTANCE_CHANGE"}' to Node2 2017-10-23 16:07:52,207 | DEBUG | zstack.py ( 723) | transmit | Node1 transmitting message b'{"viewNo":1,"reason":26,"op":"INSTANCE_CHANGE"}' to Node4 2017-10-23 16:07:52,207 | TRACE | batched.py ( 96) | flushOutBoxes | Node1 sending msg b'{"viewNo":1,"reason":26,"op":"INSTANCE_CHANGE"}' to Node4 2017-10-23 16:07:52,220 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:07:52,220 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 
16:07:52,220 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:07:52,220 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:08:02,118 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 12 2017-10-23 16:08:02,118 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:08:02,118 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:08:02,118 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:08:02,118 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:08:02,118 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:08:02,118 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:08:02,118 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 14 to run in 10 seconds 2017-10-23 16:08:07,227 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:08:07,227 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:08:07,227 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:08:07,227 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:08:12,124 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 14 2017-10-23 16:08:12,124 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:08:12,125 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:08:12,125 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:08:12,125 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:08:12,125 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:08:12,125 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:08:12,126 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 15 to run in 10 seconds 2017-10-23 16:08:22,132 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 15 2017-10-23 16:08:22,133 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:08:22,133 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:08:22,133 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 
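The 16:07:52 block above is the one noteworthy event in this window: having been disconnected from the primary of the master protocol instance for too long, Node1 passes a throttler (which rate-limits repeated proposals by trimming and appending to a timestamped action log), broadcasts INSTANCE_CHANGE with viewNo 1 and reason 26 to the other three nodes, and then reports "no quorum for view 1" because only its own vote is present so far. In a 4-node pool tolerating f = 1 faulty node, roughly n - f = 3 distinct votes are needed before the view change can proceed. The vote counting might be sketched as follows; this is a hedged illustration, not the node's actual quorum code.

# Hedged sketch of instance-change vote counting; it only mirrors the
# behaviour visible in the log, not the node's real quorum logic.
from collections import defaultdict

N_NODES = 4
F = (N_NODES - 1) // 3               # tolerated faulty nodes -> 1
VIEW_CHANGE_QUORUM = N_NODES - F     # votes needed -> 3

instance_changes = defaultdict(set)  # view_no -> names that voted for it

def on_instance_change(view_no, sender):
    instance_changes[view_no].add(sender)
    return do_view_change_if_possible(view_no)

def do_view_change_if_possible(view_no):
    votes = len(instance_changes[view_no])
    if votes >= VIEW_CHANGE_QUORUM:
        print(f"quorum reached, moving to view {view_no}")
        return True
    print(f"no quorum for view {view_no} ({votes}/{VIEW_CHANGE_QUORUM} votes)")
    return False

on_instance_change(1, "Node1")       # only its own vote, as in the log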
2017-10-23 16:08:22,133 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:08:22,133 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:08:22,133 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:08:22,133 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 16 to run in 10 seconds 2017-10-23 16:08:22,228 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:08:22,229 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:08:22,229 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:08:22,229 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:08:32,144 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 16 2017-10-23 16:08:32,144 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:08:32,145 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:08:32,145 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:08:32,145 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:08:32,145 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:08:32,145 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:08:32,145 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 17 to run in 10 seconds 2017-10-23 16:08:37,239 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:08:37,239 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:08:37,239 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:08:37,239 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:08:42,152 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 17 2017-10-23 16:08:42,152 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:08:42,153 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:08:42,153 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:08:42,153 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:08:42,154 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:08:42,154 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:08:42,154 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 18 to run in 10 seconds 2017-10-23 16:08:52,092 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 11 2017-10-23 16:08:52,092 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:08:52,092 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:08:52,092 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 19 to run in 60 seconds 2017-10-23 16:08:52,092 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 2 2017-10-23 16:08:52,092 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:08:52,092 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 3 to run in 60 seconds 2017-10-23 16:08:52,164 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 18 2017-10-23 16:08:52,164 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:08:52,164 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:08:52,164 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:08:52,165 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
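checkNodeRequestSpike and the periodic cluster-throughput check both end in "Not enough data to detect a ... spike": spike detection compares the newest sample against an accumulated history, so until a baseline exists it declines to raise an alert. A small sketch of that guard, with hypothetical names and thresholds (the real notifier plugin may compute its baseline differently):

# Small sketch (hypothetical names and thresholds) of the "not enough
# data" guard seen in the spike checks above.
def check_spike(history, current, ratio=3.0, min_samples=2):
    if len(history) < min_samples:
        print("Not enough data to detect a spike")
    elif (baseline := sum(history) / len(history)) and current / baseline > ratio:
        print(f"suspicious spike: {current} vs average {baseline:.1f}")
    history.append(current)

request_counts = []
check_spike(request_counts, 0)   # first sample: nothing to compare against
check_spike(request_counts, 0)   # still below min_samples, as in the log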
2017-10-23 16:08:52,165 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:08:52,165 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:08:52,165 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 20 to run in 10 seconds 2017-10-23 16:08:52,213 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 13 2017-10-23 16:08:52,214 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 21 to run in 60 seconds 2017-10-23 16:08:52,251 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:08:52,251 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:08:52,251 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:08:52,251 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:09:02,170 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 20 2017-10-23 16:09:02,170 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:09:02,170 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:09:02,171 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:09:02,171 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:09:02,171 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:09:02,171 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:09:02,171 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 22 to run in 10 seconds 2017-10-23 16:09:07,252 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:09:07,253 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:09:07,253 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:09:07,253 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:09:12,172 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 22 2017-10-23 16:09:12,173 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:09:12,173 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:09:12,173 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:09:12,173 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:09:12,173 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:09:12,173 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:09:12,173 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 23 to run in 10 seconds 2017-10-23 16:09:22,178 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 23 2017-10-23 16:09:22,178 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:09:22,178 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:09:22,179 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:09:22,179 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:09:22,179 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:09:22,179 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:09:22,179 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 24 to run in 10 seconds 2017-10-23 16:09:22,259 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:09:22,260 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:09:22,260 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:09:22,260 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:09:32,183 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 24 2017-10-23 16:09:32,183 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:09:32,183 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:09:32,183 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:09:32,183 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:09:32,183 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:09:32,184 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:09:32,184 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 25 to run in 10 seconds 2017-10-23 16:09:37,264 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:09:37,264 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:09:37,266 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:09:37,268 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:09:42,190 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 25 2017-10-23 16:09:42,191 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:09:42,191 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:09:42,191 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:09:42,191 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:09:42,192 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:09:42,192 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:09:42,192 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 26 to run in 10 seconds 2017-10-23 16:09:52,095 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 19 2017-10-23 16:09:52,096 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:09:52,096 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:09:52,096 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 27 to run in 60 seconds 2017-10-23 16:09:52,096 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 3 2017-10-23 16:09:52,096 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:09:52,096 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 4 to run in 60 seconds 2017-10-23 16:09:52,194 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 26 2017-10-23 16:09:52,194 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:09:52,194 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:09:52,194 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 
master throughput is not measurable. 2017-10-23 16:09:52,195 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:09:52,195 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:09:52,195 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:09:52,195 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 28 to run in 10 seconds 2017-10-23 16:09:52,219 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 21 2017-10-23 16:09:52,220 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 29 to run in 60 seconds 2017-10-23 16:09:52,271 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:09:52,271 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:09:52,271 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:09:52,272 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:10:02,202 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 28 2017-10-23 16:10:02,202 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:10:02,203 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:10:02,203 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:10:02,203 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:10:02,203 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:10:02,203 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:10:02,203 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 30 to run in 10 seconds 2017-10-23 16:10:07,276 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:10:07,276 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:10:07,276 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:10:07,276 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:10:12,212 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 30 2017-10-23 16:10:12,212 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:10:12,212 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:10:12,213 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:10:12,213 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:10:12,213 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:10:12,213 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:10:12,213 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 31 to run in 10 seconds 2017-10-23 16:10:22,214 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 31 2017-10-23 16:10:22,214 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:10:22,214 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:10:22,214 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:10:22,215 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:10:22,215 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:10:22,215 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:10:22,215 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 32 to run in 10 seconds 2017-10-23 16:10:22,276 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:10:22,276 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:10:22,276 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:10:22,276 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:10:32,221 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 32 2017-10-23 16:10:32,221 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:10:32,221 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:10:32,222 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:10:32,222 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:10:32,222 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:10:32,222 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:10:32,222 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 33 to run in 10 seconds 2017-10-23 16:10:37,283 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:10:37,284 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:10:37,284 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:10:37,284 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:10:42,234 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 33 2017-10-23 16:10:42,234 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:10:42,234 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:10:42,235 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:10:42,235 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:10:42,235 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:10:42,236 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:10:42,236 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 34 to run in 10 seconds 2017-10-23 16:10:52,103 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 27 2017-10-23 16:10:52,103 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:10:52,103 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:10:52,104 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 35 to run in 60 seconds 2017-10-23 16:10:52,104 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 4 2017-10-23 16:10:52,104 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:10:52,104 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 5 to run in 60 seconds 2017-10-23 16:10:52,231 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 29 2017-10-23 16:10:52,232 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 36 to run in 60 seconds 2017-10-23 16:10:52,246 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 34 2017-10-23 16:10:52,246 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:10:52,247 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:10:52,247 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:10:52,247 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:10:52,247 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:10:52,247 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:10:52,247 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 37 to run in 10 seconds 2017-10-23 16:10:52,285 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:10:52,285 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:10:52,285 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:10:52,286 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:11:02,254 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 37 2017-10-23 16:11:02,255 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:11:02,255 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:11:02,255 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:11:02,255 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:11:02,255 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:11:02,255 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:11:02,255 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 38 to run in 10 seconds 2017-10-23 16:11:07,287 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:11:07,287 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:11:07,287 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:11:07,290 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:11:12,265 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 38 2017-10-23 16:11:12,265 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:11:12,267 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:11:12,267 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:11:12,267 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:11:12,267 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:11:12,267 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:11:12,267 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 39 to run in 10 seconds 2017-10-23 16:11:22,276 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 39 2017-10-23 16:11:22,276 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:11:22,276 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:11:22,277 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:11:22,277 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:11:22,277 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:11:22,277 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:11:22,277 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 40 to run in 10 seconds 2017-10-23 16:11:22,293 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:11:22,293 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:11:22,293 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:11:22,296 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:11:32,281 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 40 2017-10-23 16:11:32,282 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:11:32,282 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:11:32,282 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:11:32,282 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:11:32,282 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:11:32,282 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:11:32,282 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 41 to run in 10 seconds 2017-10-23 16:11:37,297 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:11:37,298 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:11:37,298 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:11:37,298 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:11:42,291 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 41 2017-10-23 16:11:42,291 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:11:42,292 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:11:42,292 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:11:42,293 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:11:42,293 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:11:42,293 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:11:42,293 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 42 to run in 10 seconds 2017-10-23 16:11:52,114 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 35 2017-10-23 16:11:52,115 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:11:52,115 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:11:52,115 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 43 to run in 60 seconds 2017-10-23 16:11:52,116 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 5 2017-10-23 16:11:52,116 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:11:52,116 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 6 to run in 60 seconds 2017-10-23 16:11:52,243 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 36 2017-10-23 16:11:52,245 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 44 to run in 60 seconds 2017-10-23 16:11:52,298 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 42 2017-10-23 16:11:52,298 | TRACE | node.py (1978) | checkPerformance | Node1 checking 
its performance 2017-10-23 16:11:52,298 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:11:52,299 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:11:52,299 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:11:52,299 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:11:52,300 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:11:52,300 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 45 to run in 10 seconds 2017-10-23 16:11:52,301 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:11:52,301 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:11:52,301 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:11:52,302 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:12:02,308 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 45 2017-10-23 16:12:02,308 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:12:02,308 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:12:02,308 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:12:02,308 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:12:02,308 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:12:02,308 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:12:02,309 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 46 to run in 10 seconds 2017-10-23 16:12:07,315 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:12:07,315 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:12:07,315 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:12:07,315 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:12:12,323 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 46 2017-10-23 16:12:12,324 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:12:12,324 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:12:12,324 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:12:12,324 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:12:12,324 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:12:12,325 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:12:12,325 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 47 to run in 10 seconds 2017-10-23 16:12:22,326 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 47 2017-10-23 16:12:22,326 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:12:22,326 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:12:22,326 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:12:22,326 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:12:22,326 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:12:22,327 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:12:22,327 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 48 to run in 10 seconds 2017-10-23 16:12:22,327 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:12:22,327 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:12:22,328 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:12:22,328 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:12:32,331 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 48 2017-10-23 16:12:32,332 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:12:32,332 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:12:32,332 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:12:32,332 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:12:32,333 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:12:32,333 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:12:32,333 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 49 to run in 10 seconds 2017-10-23 16:12:37,328 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:12:37,328 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:12:37,328 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:12:37,329 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:12:42,345 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 49 2017-10-23 16:12:42,346 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:12:42,349 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:12:42,350 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:12:42,350 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:12:42,350 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:12:42,350 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:12:42,351 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 50 to run in 10 seconds 2017-10-23 16:12:52,116 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 43 2017-10-23 16:12:52,116 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:12:52,116 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:12:52,116 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 51 to run in 60 seconds 2017-10-23 16:12:52,116 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 6 2017-10-23 16:12:52,117 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:12:52,117 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 7 to run in 60 seconds 2017-10-23 16:12:52,259 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 44 2017-10-23 16:12:52,260 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 52 to run in 60 seconds 2017-10-23 16:12:52,338 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:12:52,338 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:12:52,338 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:12:52,339 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:12:52,361 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 50 2017-10-23 16:12:52,362 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:12:52,362 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:12:52,362 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:12:52,362 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:12:52,362 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:12:52,362 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:12:52,362 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 53 to run in 10 seconds 2017-10-23 16:13:02,367 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 53 2017-10-23 16:13:02,367 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:13:02,368 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:13:02,368 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:13:02,368 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:13:02,368 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:13:02,368 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:13:02,368 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 54 to run in 10 seconds 2017-10-23 16:13:07,340 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:13:07,340 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:13:07,340 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:13:07,341 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:13:12,373 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 54 2017-10-23 16:13:12,374 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:13:12,374 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:13:12,374 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:13:12,375 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:13:12,375 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:13:12,375 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:13:12,375 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 55 to run in 10 seconds 2017-10-23 16:13:22,344 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:13:22,345 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:13:22,345 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:13:22,346 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:13:22,381 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 55 2017-10-23 16:13:22,382 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:13:22,382 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:13:22,382 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:13:22,382 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:13:22,382 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:13:22,382 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:13:22,382 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 56 to run in 10 seconds 2017-10-23 16:13:32,389 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 56 2017-10-23 16:13:32,390 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:13:32,390 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:13:32,390 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:13:32,390 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:13:32,390 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:13:32,390 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:13:32,391 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 57 to run in 10 seconds
2017-10-23 16:13:37,359 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:13:37,359 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:13:37,359 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:13:37,360 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:13:42,395 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 57
2017-10-23 16:13:42,395 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:13:42,395 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike
2017-10-23 16:13:42,396 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:13:42,396 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:13:42,396 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:13:42,396 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:13:42,396 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 58 to run in 10 seconds
2017-10-23 16:13:52,127 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 51
2017-10-23 16:13:52,127 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount
2017-10-23 16:13:52,127 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike
2017-10-23 16:13:52,127 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 59 to run in 60 seconds
2017-10-23 16:13:52,127 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 7
2017-10-23 16:13:52,127 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike
2017-10-23 16:13:52,128 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 8 to run in 60 seconds
2017-10-23 16:13:52,267 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 52
2017-10-23 16:13:52,267 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 60 to run in 60 seconds
2017-10-23 16:13:52,359 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:13:52,360 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:13:52,360 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:13:52,360 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:13:52,406 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 58
2017-10-23 16:13:52,406 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:13:52,406 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike
2017-10-23 16:13:52,407 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:13:52,407 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:13:52,407 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:13:52,407 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:13:52,407 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 61 to run in 10 seconds
2017-10-23 16:14:02,411 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 61
2017-10-23 16:14:02,411 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:14:02,411 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike
2017-10-23 16:14:02,411 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:14:02,411 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:14:02,411 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:14:02,411 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:14:02,411 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 62 to run in 10 seconds 2017-10-23 16:14:07,368 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:14:07,368 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:14:07,368 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:14:07,369 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:14:12,427 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 62 2017-10-23 16:14:12,428 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:14:12,428 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:14:12,428 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:14:12,428 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:14:12,428 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:14:12,428 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:14:12,428 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 63 to run in 10 seconds 2017-10-23 16:14:22,373 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:14:22,373 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:14:22,373 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:14:22,374 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:14:22,436 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 63 2017-10-23 16:14:22,437 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:14:22,437 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:14:22,437 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:14:22,437 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:14:22,437 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:14:22,437 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:14:22,438 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 64 to run in 10 seconds 2017-10-23 16:14:32,442 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 64 2017-10-23 16:14:32,442 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:14:32,443 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:14:32,443 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:14:32,443 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:14:32,443 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:14:32,443 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:14:32,443 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 65 to run in 10 seconds 2017-10-23 16:14:37,374 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:14:37,374 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:14:37,374 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:14:37,374 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:14:42,451 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 65 2017-10-23 16:14:42,451 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:14:42,451 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:14:42,451 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:14:42,451 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:14:42,452 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:14:42,452 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:14:42,452 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 66 to run in 10 seconds 2017-10-23 16:14:52,139 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 59 2017-10-23 16:14:52,139 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:14:52,139 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:14:52,140 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 67 to run in 60 seconds 2017-10-23 16:14:52,140 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 8 2017-10-23 16:14:52,140 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:14:52,140 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 9 to run in 60 seconds 2017-10-23 16:14:52,272 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 60 2017-10-23 16:14:52,272 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 68 to run in 60 seconds 2017-10-23 16:14:52,378 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:14:52,379 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:14:52,379 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:14:52,379 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:14:52,455 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 66 2017-10-23 16:14:52,455 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:14:52,456 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:14:52,456 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:14:52,456 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:14:52,456 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:14:52,456 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:14:52,456 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 69 to run in 10 seconds 2017-10-23 16:15:02,465 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 69 2017-10-23 16:15:02,465 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:15:02,465 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:15:02,465 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:15:02,465 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:15:02,465 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:15:02,466 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:15:02,466 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 70 to run in 10 seconds 2017-10-23 16:15:07,380 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:15:07,381 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:15:07,381 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:15:07,382 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:15:12,466 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 70 2017-10-23 16:15:12,466 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:15:12,466 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:15:12,466 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:15:12,466 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:15:12,467 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:15:12,467 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:15:12,467 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 71 to run in 10 seconds 2017-10-23 16:15:22,391 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:15:22,391 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:15:22,391 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:15:22,391 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:15:22,469 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 71 2017-10-23 16:15:22,469 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:15:22,469 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:15:22,469 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:15:22,469 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:15:22,469 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:15:22,469 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:15:22,469 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 72 to run in 10 seconds 2017-10-23 16:15:32,470 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 72 2017-10-23 16:15:32,470 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:15:32,470 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:15:32,470 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:15:32,470 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:15:32,470 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:15:32,470 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:15:32,471 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 73 to run in 10 seconds
2017-10-23 16:15:37,393 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:15:37,393 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:15:37,393 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:15:37,394 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:15:42,471 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 73
2017-10-23 16:15:42,471 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:15:42,472 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike
2017-10-23 16:15:42,472 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:15:42,472 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:15:42,472 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:15:42,472 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:15:42,472 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 74 to run in 10 seconds
2017-10-23 16:15:52,147 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 67
2017-10-23 16:15:52,147 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount
2017-10-23 16:15:52,147 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike
2017-10-23 16:15:52,147 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 75 to run in 60 seconds
2017-10-23 16:15:52,147 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 9
2017-10-23 16:15:52,148 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike
2017-10-23 16:15:52,148 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 10 to run in 60 seconds
2017-10-23 16:15:52,274 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 68
2017-10-23 16:15:52,275 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 76 to run in 60 seconds
2017-10-23 16:15:52,393 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:15:52,394 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:15:52,394 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:15:52,395 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:15:52,481 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 74
2017-10-23 16:15:52,481 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:15:52,482 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike
2017-10-23 16:15:52,482 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:15:52,482 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:15:52,482 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:15:52,482 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:15:52,482 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 77 to run in 10 seconds
2017-10-23 16:16:02,486 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 77
2017-10-23 16:16:02,486 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:16:02,486 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike
2017-10-23 16:16:02,486 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:16:02,486 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:16:02,487 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:16:02,487 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:16:02,487 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 78 to run in 10 seconds 2017-10-23 16:16:07,406 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:16:07,406 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:16:07,406 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:16:07,407 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:16:12,494 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 78 2017-10-23 16:16:12,494 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:16:12,495 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:16:12,495 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:16:12,495 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:16:12,496 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:16:12,496 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:16:12,496 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 79 to run in 10 seconds 2017-10-23 16:16:22,416 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:16:22,416 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:16:22,416 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:16:22,417 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:16:22,503 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 79 2017-10-23 16:16:22,504 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:16:22,504 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:16:22,504 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:16:22,504 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:16:22,504 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:16:22,505 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:16:22,505 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 80 to run in 10 seconds 2017-10-23 16:16:32,506 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 80 2017-10-23 16:16:32,507 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:16:32,507 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:16:32,507 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:16:32,507 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:16:32,507 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:16:32,507 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:16:32,507 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 81 to run in 10 seconds 2017-10-23 16:16:37,420 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:16:37,420 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:16:37,420 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:16:37,420 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:16:42,516 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 81 2017-10-23 16:16:42,516 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:16:42,516 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:16:42,516 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:16:42,516 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:16:42,516 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:16:42,516 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:16:42,516 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 82 to run in 10 seconds 2017-10-23 16:16:52,150 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 75 2017-10-23 16:16:52,150 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:16:52,151 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:16:52,151 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 83 to run in 60 seconds 2017-10-23 16:16:52,151 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 10 2017-10-23 16:16:52,151 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:16:52,152 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 11 to run in 60 seconds 2017-10-23 16:16:52,284 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 76 2017-10-23 16:16:52,284 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 84 to run in 60 seconds 2017-10-23 16:16:52,432 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:16:52,432 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:16:52,433 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:16:52,433 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:16:52,530 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 82 2017-10-23 16:16:52,530 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:16:52,531 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:16:52,531 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:16:52,531 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:16:52,531 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:16:52,532 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:16:52,532 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 85 to run in 10 seconds 2017-10-23 16:17:02,532 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 85 2017-10-23 16:17:02,533 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:17:02,533 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:17:02,533 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:17:02,533 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:17:02,533 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:17:02,533 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:17:02,533 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 86 to run in 10 seconds 2017-10-23 16:17:07,435 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:17:07,435 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:17:07,437 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:17:07,437 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:17:12,540 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 86 2017-10-23 16:17:12,540 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:17:12,541 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:17:12,541 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:17:12,541 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:17:12,541 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:17:12,541 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:17:12,541 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 87 to run in 10 seconds 2017-10-23 16:17:22,438 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:17:22,438 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:17:22,438 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:17:22,438 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:17:22,549 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 87 2017-10-23 16:17:22,549 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:17:22,549 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:17:22,549 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:17:22,549 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:17:22,549 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:17:22,549 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:17:22,549 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 88 to run in 10 seconds 2017-10-23 16:17:32,551 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 88 2017-10-23 16:17:32,552 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:17:32,552 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:17:32,552 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:17:32,552 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:17:32,552 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:17:32,552 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:17:32,552 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 89 to run in 10 seconds
2017-10-23 16:17:37,441 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:17:37,441 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:17:37,441 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:17:37,442 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:17:42,555 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 89
2017-10-23 16:17:42,556 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:17:42,557 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike
2017-10-23 16:17:42,557 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:17:42,557 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:17:42,558 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:17:42,558 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:17:42,558 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 90 to run in 10 seconds
2017-10-23 16:17:52,156 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 83
2017-10-23 16:17:52,156 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount
2017-10-23 16:17:52,156 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike
2017-10-23 16:17:52,156 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 91 to run in 60 seconds
2017-10-23 16:17:52,156 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 11
2017-10-23 16:17:52,156 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike
2017-10-23 16:17:52,156 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 12 to run in 60 seconds
2017-10-23 16:17:52,288 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 84
2017-10-23 16:17:52,289 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 92 to run in 60 seconds
2017-10-23 16:17:52,452 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:17:52,452 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:17:52,452 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:17:52,453 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:17:52,559 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 90
2017-10-23 16:17:52,559 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:17:52,559 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike
2017-10-23 16:17:52,559 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:17:52,559 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:17:52,559 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:17:52,559 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:17:52,559 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 93 to run in 10 seconds
2017-10-23 16:18:02,571 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 93
2017-10-23 16:18:02,571 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:18:02,571 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike
2017-10-23 16:18:02,571 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:18:02,571 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:18:02,571 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:18:02,572 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:18:02,572 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 94 to run in 10 seconds 2017-10-23 16:18:07,452 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:18:07,453 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:18:07,453 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:18:07,454 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:18:12,573 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 94 2017-10-23 16:18:12,573 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:18:12,573 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:18:12,573 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:18:12,573 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:18:12,573 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:18:12,573 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:18:12,573 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 95 to run in 10 seconds 2017-10-23 16:18:22,458 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:18:22,458 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:18:22,458 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:18:22,459 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:18:22,579 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 95 2017-10-23 16:18:22,579 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:18:22,579 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:18:22,579 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:18:22,579 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:18:22,579 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:18:22,579 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:18:22,580 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 96 to run in 10 seconds 2017-10-23 16:18:32,586 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 96 2017-10-23 16:18:32,587 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:18:32,587 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:18:32,587 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:18:32,587 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:18:32,587 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:18:32,587 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:18:32,587 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 97 to run in 10 seconds 2017-10-23 16:18:37,464 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:18:37,464 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:18:37,464 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:18:37,465 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:18:42,589 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 97 2017-10-23 16:18:42,589 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:18:42,590 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:18:42,590 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:18:42,590 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:18:42,590 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:18:42,590 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:18:42,590 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 98 to run in 10 seconds 2017-10-23 16:18:52,164 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 91 2017-10-23 16:18:52,165 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:18:52,165 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:18:52,165 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 99 to run in 60 seconds 2017-10-23 16:18:52,165 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 12 2017-10-23 16:18:52,165 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:18:52,165 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 13 to run in 60 seconds 2017-10-23 16:18:52,291 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 92 2017-10-23 16:18:52,292 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 100 to run in 60 seconds 2017-10-23 16:18:52,479 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:18:52,479 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:18:52,479 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:18:52,480 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:18:52,596 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 98 2017-10-23 16:18:52,596 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:18:52,596 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:18:52,597 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:18:52,600 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:18:52,600 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:18:52,600 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:18:52,600 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 101 to run in 10 seconds 2017-10-23 16:19:02,603 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 101 2017-10-23 16:19:02,604 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:19:02,604 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:19:02,604 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:19:02,604 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:19:02,604 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:19:02,604 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:19:02,604 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 102 to run in 10 seconds 2017-10-23 16:19:07,486 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:19:07,486 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:19:07,486 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:19:07,487 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:19:12,613 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 102 2017-10-23 16:19:12,613 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:19:12,613 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:19:12,614 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:19:12,614 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:19:12,614 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:19:12,614 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:19:12,614 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 103 to run in 10 seconds 2017-10-23 16:19:22,489 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:19:22,489 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:19:22,489 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:19:22,490 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:19:22,616 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 103 2017-10-23 16:19:22,616 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:19:22,616 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:19:22,617 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:19:22,617 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:19:22,617 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:19:22,617 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:19:22,617 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 104 to run in 10 seconds 2017-10-23 16:19:32,617 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 104 2017-10-23 16:19:32,617 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:19:32,618 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:19:32,618 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:19:32,618 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:19:32,618 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:19:32,618 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:19:32,618 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 105 to run in 10 seconds 2017-10-23 16:19:37,497 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:19:37,497 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:19:37,497 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:19:37,497 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:19:42,624 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 105 2017-10-23 16:19:42,624 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:19:42,625 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:19:42,625 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:19:42,625 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:19:42,626 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:19:42,626 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:19:42,626 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 106 to run in 10 seconds 2017-10-23 16:19:52,168 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 99 2017-10-23 16:19:52,168 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:19:52,168 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:19:52,168 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 107 to run in 60 seconds 2017-10-23 16:19:52,168 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 13 2017-10-23 16:19:52,168 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:19:52,169 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 14 to run in 60 seconds 2017-10-23 16:19:52,294 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 100 2017-10-23 16:19:52,294 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 108 to run in 60 seconds 2017-10-23 16:19:52,502 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:19:52,502 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:19:52,502 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:19:52,503 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:19:52,632 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 106 2017-10-23 16:19:52,632 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:19:52,632 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:19:52,632 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:19:52,633 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:19:52,633 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:19:52,633 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:19:52,634 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 109 to run in 10 seconds 2017-10-23 16:20:02,637 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 109 2017-10-23 16:20:02,637 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:20:02,638 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:20:02,638 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:20:02,638 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:20:02,638 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:20:02,638 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:20:02,638 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 110 to run in 10 seconds 2017-10-23 16:20:07,505 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:20:07,505 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:20:07,506 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:20:07,506 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:20:12,643 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 110 2017-10-23 16:20:12,643 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:20:12,643 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:20:12,643 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:20:12,644 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:20:12,644 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:20:12,644 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:20:12,644 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 111 to run in 10 seconds 2017-10-23 16:20:22,520 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:20:22,521 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:20:22,521 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:20:22,522 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:20:22,654 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 111 2017-10-23 16:20:22,655 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:20:22,655 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:20:22,655 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:20:22,655 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:20:22,655 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:20:22,655 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:20:22,655 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 112 to run in 10 seconds 2017-10-23 16:20:32,666 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 112 2017-10-23 16:20:32,666 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:20:32,666 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:20:32,667 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:20:32,667 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:20:32,667 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:20:32,667 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:20:32,667 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 113 to run in 10 seconds 2017-10-23 16:20:37,525 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:20:37,525 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:20:37,525 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:20:37,526 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:20:42,682 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 113 2017-10-23 16:20:42,682 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:20:42,682 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:20:42,682 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:20:42,682 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:20:42,682 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:20:42,682 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:20:42,682 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 114 to run in 10 seconds 2017-10-23 16:20:52,175 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 107 2017-10-23 16:20:52,175 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:20:52,176 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:20:52,176 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 115 to run in 60 seconds 2017-10-23 16:20:52,176 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 14 2017-10-23 16:20:52,176 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:20:52,176 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 15 to run in 60 seconds 2017-10-23 16:20:52,294 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 108 2017-10-23 16:20:52,296 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 116 to run in 60 seconds 2017-10-23 16:20:52,534 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:20:52,535 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:20:52,535 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:20:52,536 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:20:52,691 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 114 2017-10-23 16:20:52,691 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:20:52,692 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:20:52,692 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:20:52,692 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:20:52,692 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:20:52,692 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:20:52,692 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 117 to run in 10 seconds 2017-10-23 16:21:02,700 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 117 2017-10-23 16:21:02,700 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:21:02,700 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:21:02,700 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:21:02,701 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:21:02,701 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:21:02,701 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:21:02,701 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 118 to run in 10 seconds 2017-10-23 16:21:07,537 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:21:07,538 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:21:07,538 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:21:07,539 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:21:12,710 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 118 2017-10-23 16:21:12,710 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:21:12,710 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a NodeRequestSuspiciousSpike spike 2017-10-23 16:21:12,710 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:21:12,710 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:21:12,710 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:21:12,710 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:21:12,710 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 119 to run in 10 seconds 2017-10-23 16:21:22,544 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:21:22,544 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:21:22,545 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:21:22,545 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:21:22,715 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 119 2017-10-23 16:21:22,716 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:21:22,716 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:21:22,716 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:21:22,716 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:21:22,716 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:21:22,717 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:21:22,717 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 120 to run in 10 seconds 2017-10-23 16:21:32,724 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 120 2017-10-23 16:21:32,724 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:21:32,724 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:21:32,724 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:21:32,724 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:21:32,725 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:21:32,725 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:21:32,725 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 121 to run in 10 seconds 2017-10-23 16:21:37,547 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:21:37,547 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:21:37,547 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:21:37,547 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:21:42,731 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 121 2017-10-23 16:21:42,731 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:21:42,731 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:21:42,731 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:21:42,732 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:21:42,732 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:21:42,732 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:21:42,732 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 122 to run in 10 seconds 2017-10-23 16:21:52,185 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 115 2017-10-23 16:21:52,186 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:21:52,186 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:21:52,186 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 123 to run in 60 seconds 2017-10-23 16:21:52,186 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 15 2017-10-23 16:21:52,186 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:21:52,186 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 16 to run in 60 seconds 2017-10-23 16:21:52,296 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 116 2017-10-23 16:21:52,297 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 124 to run in 60 seconds 2017-10-23 16:21:52,556 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:21:52,556 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:21:52,557 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:21:52,557 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:21:52,741 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 122 2017-10-23 16:21:52,741 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:21:52,741 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:21:52,741 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:21:52,741 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:21:52,741 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:21:52,741 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:21:52,741 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 125 to run in 10 seconds 2017-10-23 16:22:02,748 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 125 2017-10-23 16:22:02,749 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:22:02,749 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:22:02,749 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:22:02,749 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:22:02,749 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:22:02,749 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:22:02,749 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 126 to run in 10 seconds 2017-10-23 16:22:07,563 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:22:07,563 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:22:07,565 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:22:07,566 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:22:12,750 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 126 2017-10-23 16:22:12,751 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:22:12,751 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:22:12,751 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:22:12,751 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:22:12,751 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:22:12,751 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:22:12,752 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 127 to run in 10 seconds 2017-10-23 16:22:22,564 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:22:22,564 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:22:22,564 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:22:22,565 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:22:22,765 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 127 2017-10-23 16:22:22,766 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:22:22,766 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:22:22,766 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:22:22,766 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:22:22,766 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:22:22,766 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:22:22,766 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 128 to run in 10 seconds 2017-10-23 16:22:32,776 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 128 2017-10-23 16:22:32,776 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:22:32,777 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:22:32,777 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:22:32,777 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:22:32,777 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:22:32,777 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:22:32,777 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 129 to run in 10 seconds 2017-10-23 16:22:37,565 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:22:37,566 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:22:37,566 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:22:37,566 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:22:42,788 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 129 2017-10-23 16:22:42,788 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:22:42,788 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:22:42,788 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:22:42,788 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:22:42,788 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:22:42,788 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:22:42,789 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 130 to run in 10 seconds 2017-10-23 16:22:52,195 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 123 2017-10-23 16:22:52,195 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:22:52,195 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:22:52,196 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 131 to run in 60 seconds 2017-10-23 16:22:52,196 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 16 2017-10-23 16:22:52,196 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:22:52,196 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 17 to run in 60 seconds 2017-10-23 16:22:52,299 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 124 2017-10-23 16:22:52,300 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 132 to run in 60 seconds 2017-10-23 16:22:52,568 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:22:52,568 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:22:52,568 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:22:52,569 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:22:52,795 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 130 2017-10-23 16:22:52,795 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:22:52,796 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:22:52,797 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:22:52,797 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:22:52,797 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:22:52,797 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:22:52,797 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 133 to run in 10 seconds 2017-10-23 16:23:02,800 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 133 2017-10-23 16:23:02,800 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:23:02,800 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:23:02,800 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:23:02,800 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:23:02,800 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:23:02,800 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:23:02,800 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 134 to run in 10 seconds 2017-10-23 16:23:07,571 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:23:07,572 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:23:07,572 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:23:07,572 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:23:12,803 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 134 2017-10-23 16:23:12,803 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:23:12,803 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:23:12,804 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:23:12,804 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:23:12,804 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:23:12,804 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:23:12,804 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 135 to run in 10 seconds 2017-10-23 16:23:22,574 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:23:22,574 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:23:22,574 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:23:22,574 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:23:22,806 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 135 2017-10-23 16:23:22,806 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:23:22,806 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:23:22,806 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:23:22,807 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:23:22,807 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:23:22,807 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:23:22,807 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 136 to run in 10 seconds 2017-10-23 16:23:32,810 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 136 2017-10-23 16:23:32,810 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:23:32,810 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:23:32,810 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:23:32,810 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:23:32,810 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:23:32,811 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:23:32,811 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 137 to run in 10 seconds 2017-10-23 16:23:37,579 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:23:37,579 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:23:37,579 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:23:37,580 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:23:42,822 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 137 2017-10-23 16:23:42,823 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:23:42,823 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:23:42,823 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:23:42,823 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:23:42,823 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:23:42,823 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:23:42,824 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 138 to run in 10 seconds 2017-10-23 16:23:52,198 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 131 2017-10-23 16:23:52,198 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:23:52,198 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:23:52,198 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 139 to run in 60 seconds 2017-10-23 16:23:52,199 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 17 2017-10-23 16:23:52,199 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:23:52,199 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 18 to run in 60 seconds 2017-10-23 16:23:52,305 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 132 2017-10-23 16:23:52,306 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 140 to run in 60 seconds 2017-10-23 16:23:52,580 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:23:52,580 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:23:52,580 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:23:52,580 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:23:52,829 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 138 2017-10-23 16:23:52,830 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:23:52,830 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:23:52,830 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:23:52,831 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:23:52,831 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:23:52,831 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:23:52,831 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 141 to run in 10 seconds 2017-10-23 16:24:02,834 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 141 2017-10-23 16:24:02,834 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:24:02,835 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:24:02,835 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:24:02,835 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:24:02,835 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:24:02,835 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:24:02,835 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 142 to run in 10 seconds 2017-10-23 16:24:07,585 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:24:07,585 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:24:07,585 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:24:07,586 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:24:12,844 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 142 2017-10-23 16:24:12,844 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:24:12,844 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:24:12,844 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:24:12,845 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:24:12,845 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:24:12,845 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:24:12,845 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 143 to run in 10 seconds 2017-10-23 16:24:22,591 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:24:22,591 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:24:22,591 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:24:22,591 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:24:22,847 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 143 2017-10-23 16:24:22,847 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:24:22,847 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:24:22,847 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:24:22,847 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:24:22,847 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:24:22,847 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:24:22,847 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 144 to run in 10 seconds 2017-10-23 16:24:32,851 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 144 2017-10-23 16:24:32,851 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:24:32,851 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:24:32,851 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:24:32,851 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:24:32,851 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:24:32,851 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:24:32,851 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 145 to run in 10 seconds 2017-10-23 16:24:37,592 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:24:37,592 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:24:37,592 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:24:37,593 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:24:42,860 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 145 2017-10-23 16:24:42,860 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:24:42,861 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:24:42,861 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:24:42,861 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:24:42,861 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:24:42,861 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:24:42,861 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 146 to run in 10 seconds 2017-10-23 16:24:52,202 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 139 2017-10-23 16:24:52,202 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:24:52,202 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:24:52,203 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 147 to run in 60 seconds 2017-10-23 16:24:52,203 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 18 2017-10-23 16:24:52,203 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:24:52,203 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 19 to run in 60 seconds 2017-10-23 16:24:52,306 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 140 2017-10-23 16:24:52,306 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 148 to run in 60 seconds 2017-10-23 16:24:52,594 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:24:52,594 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:24:52,595 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:24:52,596 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:24:52,865 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 146 2017-10-23 16:24:52,865 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:24:52,865 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:24:52,865 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:24:52,865 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:24:52,866 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:24:52,866 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:24:52,866 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 149 to run in 10 seconds 2017-10-23 16:25:02,871 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 149 2017-10-23 16:25:02,871 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:25:02,872 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:25:02,872 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:25:02,872 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:25:02,872 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:25:02,872 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:25:02,872 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 150 to run in 10 seconds 2017-10-23 16:25:07,602 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:25:07,603 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:25:07,603 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:25:07,603 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:25:12,879 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 150 2017-10-23 16:25:12,879 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:25:12,879 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:25:12,879 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:25:12,880 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:25:12,880 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:25:12,880 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:25:12,880 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 151 to run in 10 seconds 2017-10-23 16:25:22,604 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:25:22,604 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:25:22,605 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:25:22,606 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:25:22,891 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 151 2017-10-23 16:25:22,891 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:25:22,891 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:25:22,891 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:25:22,891 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:25:22,891 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:25:22,891 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:25:22,891 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 152 to run in 10 seconds 2017-10-23 16:25:32,895 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 152 2017-10-23 16:25:32,895 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:25:32,895 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:25:32,895 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:25:32,895 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:25:32,895 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:25:32,895 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:25:32,896 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 153 to run in 10 seconds 2017-10-23 16:25:37,617 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:25:37,617 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:25:37,617 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:25:37,617 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:25:42,906 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 153 2017-10-23 16:25:42,906 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:25:42,906 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:25:42,907 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:25:42,907 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:25:42,907 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:25:42,907 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:25:42,907 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 154 to run in 10 seconds 2017-10-23 16:25:52,212 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 147 2017-10-23 16:25:52,212 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:25:52,212 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. 
Average: 0.0 2017-10-23 16:25:52,212 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 155 to run in 60 seconds 2017-10-23 16:25:52,212 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 19 2017-10-23 16:25:52,212 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:25:52,212 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 20 to run in 60 seconds 2017-10-23 16:25:52,309 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 148 2017-10-23 16:25:52,311 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 156 to run in 60 seconds 2017-10-23 16:25:52,620 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:25:52,620 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:25:52,620 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:25:52,621 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:25:52,907 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 154 2017-10-23 16:25:52,908 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:25:52,908 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:25:52,908 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:25:52,908 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:25:52,908 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:25:52,908 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:25:52,908 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 157 to run in 10 seconds 2017-10-23 16:26:02,920 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 157 2017-10-23 16:26:02,920 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:26:02,920 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:26:02,921 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:26:02,921 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:26:02,921 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:26:02,921 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:26:02,921 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 158 to run in 10 seconds 2017-10-23 16:26:07,626 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:26:07,626 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:26:07,626 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:26:07,627 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:26:12,929 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 158 2017-10-23 16:26:12,929 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:26:12,929 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:26:12,930 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:26:12,930 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:26:12,930 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:26:12,930 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:26:12,930 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 159 to run in 10 seconds 2017-10-23 16:26:22,629 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:26:22,630 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:26:22,630 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:26:22,631 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:26:22,944 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 159 2017-10-23 16:26:22,944 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:26:22,944 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:26:22,945 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:26:22,945 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:26:22,945 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:26:22,945 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:26:22,945 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 160 to run in 10 seconds 2017-10-23 16:26:32,949 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 160 2017-10-23 16:26:32,950 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:26:32,950 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:26:32,950 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:26:32,950 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:26:32,950 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:26:32,950 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:26:32,950 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 161 to run in 10 seconds 2017-10-23 16:26:37,640 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:26:37,640 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:26:37,640 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:26:37,641 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:26:42,960 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 161 2017-10-23 16:26:42,960 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:26:42,960 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:26:42,960 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:26:42,961 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:26:42,961 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:26:42,961 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:26:42,961 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 162 to run in 10 seconds 2017-10-23 16:26:52,213 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 155 2017-10-23 16:26:52,213 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:26:52,213 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:26:52,213 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 163 to run in 60 seconds 2017-10-23 16:26:52,213 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 20 2017-10-23 16:26:52,213 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:26:52,213 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 21 to run in 60 seconds 2017-10-23 16:26:52,320 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 156 2017-10-23 16:26:52,320 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 164 to run in 60 seconds 2017-10-23 16:26:52,650 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:26:52,651 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:26:52,651 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:26:52,652 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:26:52,970 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 162 2017-10-23 16:26:52,970 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:26:52,970 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:26:52,970 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:26:52,970 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:26:52,970 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:26:52,970 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:26:52,971 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 165 to run in 10 seconds 2017-10-23 16:27:02,972 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 165 2017-10-23 16:27:02,972 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:27:02,972 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:27:02,972 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:27:02,972 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:27:02,972 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:27:02,972 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:27:02,973 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 166 to run in 10 seconds 2017-10-23 16:27:07,659 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:27:07,660 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:27:07,660 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:27:07,660 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:27:12,974 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 166 2017-10-23 16:27:12,975 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:27:12,975 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:27:12,975 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:27:12,975 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:27:12,975 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:27:12,975 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:27:12,975 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 167 to run in 10 seconds 2017-10-23 16:27:22,661 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:27:22,661 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:27:22,661 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:27:22,661 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:27:22,979 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 167 2017-10-23 16:27:22,981 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:27:22,981 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:27:22,981 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:27:22,981 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:27:22,981 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:27:22,981 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:27:22,982 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 168 to run in 10 seconds 2017-10-23 16:27:32,988 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 168 2017-10-23 16:27:32,988 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:27:32,988 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:27:32,988 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:27:32,988 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:27:32,988 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:27:32,988 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:27:32,988 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 169 to run in 10 seconds 2017-10-23 16:27:37,661 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:27:37,661 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:27:37,661 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:27:37,661 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:27:42,995 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 169 2017-10-23 16:27:42,995 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:27:42,995 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:27:42,995 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:27:42,995 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:27:42,995 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:27:42,995 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:27:42,995 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 170 to run in 10 seconds 2017-10-23 16:27:52,217 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 163 2017-10-23 16:27:52,217 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:27:52,217 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. 
Average: 0.0 2017-10-23 16:27:52,217 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 171 to run in 60 seconds 2017-10-23 16:27:52,218 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 21 2017-10-23 16:27:52,218 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:27:52,218 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 22 to run in 60 seconds 2017-10-23 16:27:52,331 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 164 2017-10-23 16:27:52,332 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 172 to run in 60 seconds 2017-10-23 16:27:52,667 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:27:52,668 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:27:52,668 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:27:52,668 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:27:53,009 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 170 2017-10-23 16:27:53,009 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:27:53,009 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:27:53,009 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:27:53,009 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:27:53,009 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:27:53,009 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:27:53,009 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 173 to run in 10 seconds 2017-10-23 16:28:03,010 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 173 2017-10-23 16:28:03,010 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:28:03,011 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:28:03,011 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:28:03,011 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:28:03,012 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:28:03,012 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:28:03,012 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 174 to run in 10 seconds 2017-10-23 16:28:07,672 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:28:07,672 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:28:07,672 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:28:07,672 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:28:13,020 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 174 2017-10-23 16:28:13,020 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:28:13,020 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:28:13,020 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:28:13,020 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:28:13,021 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:28:13,021 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:28:13,021 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 175 to run in 10 seconds 2017-10-23 16:28:22,677 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:28:22,677 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:28:22,677 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:28:22,677 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:28:23,032 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 175 2017-10-23 16:28:23,032 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:28:23,032 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:28:23,032 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:28:23,032 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:28:23,032 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:28:23,032 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:28:23,032 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 176 to run in 10 seconds 2017-10-23 16:28:33,038 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 176 2017-10-23 16:28:33,038 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:28:33,039 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:28:33,039 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:28:33,039 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:28:33,039 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:28:33,039 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:28:33,039 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 177 to run in 10 seconds 2017-10-23 16:28:37,683 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:28:37,683 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:28:37,683 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:28:37,684 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:28:43,051 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 177 2017-10-23 16:28:43,051 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:28:43,051 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:28:43,052 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:28:43,052 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:28:43,052 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:28:43,052 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:28:43,052 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 178 to run in 10 seconds 2017-10-23 16:28:52,226 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 171 2017-10-23 16:28:52,227 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:28:52,227 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:28:52,227 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 179 to run in 60 seconds 2017-10-23 16:28:52,227 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 22 2017-10-23 16:28:52,227 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:28:52,227 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 23 to run in 60 seconds 2017-10-23 16:28:52,342 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 172 2017-10-23 16:28:52,343 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 180 to run in 60 seconds 2017-10-23 16:28:52,684 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:28:52,684 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:28:52,684 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:28:52,685 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:28:53,059 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 178 2017-10-23 16:28:53,059 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:28:53,059 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:28:53,059 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:28:53,059 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:28:53,060 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:28:53,060 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:28:53,060 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 181 to run in 10 seconds 2017-10-23 16:29:03,070 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 181 2017-10-23 16:29:03,070 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:29:03,070 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:29:03,070 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:29:03,071 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:29:03,071 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:29:03,071 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:29:03,071 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 182 to run in 10 seconds 2017-10-23 16:29:07,689 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:29:07,689 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:29:07,690 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:29:07,690 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:29:13,075 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 182 2017-10-23 16:29:13,075 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:29:13,075 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:29:13,076 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:29:13,076 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:29:13,076 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:29:13,076 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:29:13,076 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 183 to run in 10 seconds 2017-10-23 16:29:22,689 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:29:22,690 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:29:22,690 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:29:22,691 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:29:23,081 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 183 2017-10-23 16:29:23,081 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:29:23,081 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:29:23,081 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:29:23,082 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:29:23,082 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:29:23,082 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:29:23,082 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 184 to run in 10 seconds 2017-10-23 16:29:33,083 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 184 2017-10-23 16:29:33,083 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:29:33,083 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:29:33,083 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:29:33,083 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:29:33,083 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:29:33,084 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:29:33,084 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 185 to run in 10 seconds 2017-10-23 16:29:37,696 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:29:37,696 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:29:37,696 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:29:37,697 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:29:43,085 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 185 2017-10-23 16:29:43,085 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:29:43,085 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:29:43,086 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:29:43,086 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:29:43,086 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:29:43,086 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:29:43,086 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 186 to run in 10 seconds 2017-10-23 16:29:52,228 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 179 2017-10-23 16:29:52,228 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:29:52,228 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. 
Average: 0.0 2017-10-23 16:29:52,228 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 187 to run in 60 seconds 2017-10-23 16:29:52,228 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 23 2017-10-23 16:29:52,228 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:29:52,228 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 24 to run in 60 seconds 2017-10-23 16:29:52,345 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 180 2017-10-23 16:29:52,346 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 188 to run in 60 seconds 2017-10-23 16:29:52,702 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:29:52,702 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:29:52,703 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:29:52,703 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:29:53,094 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 186 2017-10-23 16:29:53,094 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:29:53,095 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:29:53,095 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:29:53,095 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:29:53,095 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:29:53,095 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:29:53,096 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 189 to run in 10 seconds 2017-10-23 16:30:03,107 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 189 2017-10-23 16:30:03,107 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:30:03,107 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:30:03,108 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:30:03,108 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:30:03,108 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:30:03,108 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:30:03,108 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 190 to run in 10 seconds 2017-10-23 16:30:07,710 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:30:07,711 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:30:07,711 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:30:07,712 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:30:13,119 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 190 2017-10-23 16:30:13,119 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:30:13,119 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:30:13,119 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:30:13,119 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:30:13,120 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:30:13,120 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:30:13,120 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 191 to run in 10 seconds 2017-10-23 16:30:22,721 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:30:22,721 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:30:22,722 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:30:22,723 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:30:23,132 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 191 2017-10-23 16:30:23,132 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:30:23,132 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:30:23,132 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:30:23,132 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:30:23,132 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:30:23,132 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:30:23,132 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 192 to run in 10 seconds 2017-10-23 16:30:33,139 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 192 2017-10-23 16:30:33,140 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:30:33,140 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:30:33,140 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:30:33,140 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:30:33,140 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:30:33,140 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:30:33,140 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 193 to run in 10 seconds 2017-10-23 16:30:37,723 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:30:37,723 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:30:37,723 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:30:37,724 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:30:43,143 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 193 2017-10-23 16:30:43,143 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:30:43,143 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:30:43,143 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:30:43,144 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:30:43,144 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:30:43,144 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:30:43,144 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 194 to run in 10 seconds 2017-10-23 16:30:52,231 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 187 2017-10-23 16:30:52,231 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:30:52,231 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:30:52,231 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 195 to run in 60 seconds 2017-10-23 16:30:52,232 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 24 2017-10-23 16:30:52,232 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:30:52,232 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 25 to run in 60 seconds 2017-10-23 16:30:52,357 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 188 2017-10-23 16:30:52,358 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 196 to run in 60 seconds 2017-10-23 16:30:52,730 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:30:52,730 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:30:52,731 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:30:52,731 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:30:53,149 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 194 2017-10-23 16:30:53,149 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:30:53,149 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:30:53,149 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:30:53,149 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:30:53,149 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:30:53,149 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:30:53,149 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 197 to run in 10 seconds 2017-10-23 16:31:03,155 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 197 2017-10-23 16:31:03,155 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:31:03,156 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:31:03,156 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:31:03,156 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:31:03,156 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:31:03,156 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:31:03,156 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 198 to run in 10 seconds 2017-10-23 16:31:07,735 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:31:07,736 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:31:07,736 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:31:07,736 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:31:13,157 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 198 2017-10-23 16:31:13,157 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:31:13,157 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:31:13,157 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:31:13,157 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:31:13,158 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:31:13,158 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:31:13,158 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 199 to run in 10 seconds 2017-10-23 16:31:22,736 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:31:22,737 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:31:22,737 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:31:22,737 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:31:23,159 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 199 2017-10-23 16:31:23,159 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:31:23,160 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:31:23,160 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:31:23,160 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:31:23,160 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:31:23,160 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:31:23,160 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 200 to run in 10 seconds 2017-10-23 16:31:33,161 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 200 2017-10-23 16:31:33,161 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:31:33,161 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:31:33,162 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:31:33,162 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:31:33,162 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:31:33,162 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:31:33,162 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 201 to run in 10 seconds 2017-10-23 16:31:37,738 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:31:37,738 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:31:37,738 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:31:37,738 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:31:43,172 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 201 2017-10-23 16:31:43,173 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:31:43,175 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:31:43,176 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:31:43,176 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:31:43,176 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:31:43,176 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:31:43,177 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 202 to run in 10 seconds 2017-10-23 16:31:52,241 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 195 2017-10-23 16:31:52,242 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:31:52,242 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. 
Average: 0.0 2017-10-23 16:31:52,242 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 203 to run in 60 seconds 2017-10-23 16:31:52,242 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 25 2017-10-23 16:31:52,242 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:31:52,242 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 26 to run in 60 seconds 2017-10-23 16:31:52,359 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 196 2017-10-23 16:31:52,360 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 204 to run in 60 seconds 2017-10-23 16:31:52,746 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:31:52,746 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:31:52,746 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:31:52,747 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:31:53,180 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 202 2017-10-23 16:31:53,180 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:31:53,180 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:31:53,181 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:31:53,181 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:31:53,181 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:31:53,181 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:31:53,181 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 205 to run in 10 seconds 2017-10-23 16:32:03,196 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 205 2017-10-23 16:32:03,196 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:32:03,196 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:32:03,196 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:32:03,196 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:32:03,197 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:32:03,197 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:32:03,197 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 206 to run in 10 seconds 2017-10-23 16:32:07,759 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:32:07,759 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:32:07,759 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:32:07,760 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:32:13,203 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 206 2017-10-23 16:32:13,204 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:32:13,204 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:32:13,204 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:32:13,204 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:32:13,205 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:32:13,205 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:32:13,205 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 207 to run in 10 seconds 2017-10-23 16:32:22,767 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:32:22,767 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:32:22,767 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:32:22,768 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:32:23,207 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 207 2017-10-23 16:32:23,207 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:32:23,207 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:32:23,207 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:32:23,207 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:32:23,208 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:32:23,208 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:32:23,208 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 208 to run in 10 seconds 2017-10-23 16:32:33,218 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 208 2017-10-23 16:32:33,218 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:32:33,218 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:32:33,218 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:32:33,218 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:32:33,219 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:32:33,219 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:32:33,219 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 209 to run in 10 seconds 2017-10-23 16:32:37,769 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:32:37,769 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:32:37,770 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:32:37,770 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:32:43,221 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 209 2017-10-23 16:32:43,221 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:32:43,221 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:32:43,221 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:32:43,221 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:32:43,221 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:32:43,222 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:32:43,222 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 210 to run in 10 seconds 2017-10-23 16:32:52,243 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 203 2017-10-23 16:32:52,243 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:32:52,243 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:32:52,244 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 211 to run in 60 seconds 2017-10-23 16:32:52,244 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 26 2017-10-23 16:32:52,244 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:32:52,244 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 27 to run in 60 seconds 2017-10-23 16:32:52,368 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 204 2017-10-23 16:32:52,368 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 212 to run in 60 seconds 2017-10-23 16:32:52,771 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:32:52,772 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:32:52,772 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:32:52,772 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:32:53,228 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 210 2017-10-23 16:32:53,229 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:32:53,229 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:32:53,229 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:32:53,229 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:32:53,229 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:32:53,230 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:32:53,230 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 213 to run in 10 seconds 2017-10-23 16:33:03,235 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 213 2017-10-23 16:33:03,235 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:33:03,235 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:33:03,235 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:33:03,236 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:33:03,236 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:33:03,236 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:33:03,236 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 214 to run in 10 seconds 2017-10-23 16:33:07,772 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:33:07,772 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:33:07,772 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:33:07,772 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:33:13,247 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 214 2017-10-23 16:33:13,247 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:33:13,247 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:33:13,247 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:33:13,247 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:33:13,247 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:33:13,247 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:33:13,247 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 215 to run in 10 seconds 2017-10-23 16:33:22,780 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:33:22,780 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:33:22,781 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:33:22,781 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:33:23,249 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 215 2017-10-23 16:33:23,250 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:33:23,250 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:33:23,250 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:33:23,250 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:33:23,250 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:33:23,250 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:33:23,250 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 216 to run in 10 seconds 2017-10-23 16:33:33,262 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 216 2017-10-23 16:33:33,263 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:33:33,263 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:33:33,263 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:33:33,263 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:33:33,263 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:33:33,263 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:33:33,263 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 217 to run in 10 seconds 2017-10-23 16:33:37,787 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:33:37,787 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:33:37,787 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:33:37,787 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:33:43,267 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 217 2017-10-23 16:33:43,267 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:33:43,267 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:33:43,267 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:33:43,267 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:33:43,268 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:33:43,268 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:33:43,268 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 218 to run in 10 seconds 2017-10-23 16:33:52,244 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 211 2017-10-23 16:33:52,244 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:33:52,244 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. 
Average: 0.0 2017-10-23 16:33:52,244 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 219 to run in 60 seconds 2017-10-23 16:33:52,244 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 27 2017-10-23 16:33:52,244 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:33:52,244 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 28 to run in 60 seconds 2017-10-23 16:33:52,379 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 212 2017-10-23 16:33:52,380 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 220 to run in 60 seconds 2017-10-23 16:33:52,791 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:33:52,791 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:33:52,791 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:33:52,791 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:33:53,279 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 218 2017-10-23 16:33:53,279 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:33:53,279 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:33:53,279 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:33:53,279 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:33:53,279 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:33:53,279 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:33:53,279 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 221 to run in 10 seconds 2017-10-23 16:34:03,291 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 221 2017-10-23 16:34:03,291 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:34:03,292 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:34:03,292 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:34:03,292 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:34:03,292 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:34:03,292 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:34:03,292 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 222 to run in 10 seconds 2017-10-23 16:34:07,797 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:34:07,797 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:34:07,797 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:34:07,798 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:34:13,302 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 222 2017-10-23 16:34:13,302 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:34:13,302 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:34:13,303 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:34:13,303 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:34:13,303 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:34:13,303 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:34:13,303 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 223 to run in 10 seconds 2017-10-23 16:34:22,806 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:34:22,806 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:34:22,806 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:34:22,806 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:34:23,307 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 223 2017-10-23 16:34:23,308 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:34:23,309 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:34:23,309 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:34:23,309 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:34:23,309 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:34:23,309 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:34:23,309 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 224 to run in 10 seconds 2017-10-23 16:34:33,317 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 224 2017-10-23 16:34:33,317 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:34:33,317 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:34:33,317 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:34:33,317 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:34:33,317 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:34:33,317 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:34:33,317 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 225 to run in 10 seconds 2017-10-23 16:34:37,809 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:34:37,809 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:34:37,810 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:34:37,810 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:34:43,321 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 225 2017-10-23 16:34:43,321 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:34:43,321 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:34:43,321 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:34:43,321 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:34:43,322 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:34:43,322 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:34:43,322 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 226 to run in 10 seconds 2017-10-23 16:34:52,246 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 219 2017-10-23 16:34:52,246 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount 2017-10-23 16:34:52,247 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:34:52,247 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 227 to run in 60 seconds 2017-10-23 16:34:52,247 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 28 2017-10-23 16:34:52,247 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike 2017-10-23 16:34:52,247 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 29 to run in 60 seconds 2017-10-23 16:34:52,382 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 220 2017-10-23 16:34:52,383 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 228 to run in 60 seconds 2017-10-23 16:34:52,814 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:34:52,814 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:34:52,814 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:34:52,815 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:34:53,322 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 226 2017-10-23 16:34:53,323 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:34:53,323 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:34:53,323 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:34:53,323 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:34:53,323 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:34:53,323 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:34:53,324 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 229 to run in 10 seconds 2017-10-23 16:35:03,324 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 229 2017-10-23 16:35:03,324 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:35:03,324 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:35:03,325 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:35:03,325 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:35:03,325 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:35:03,325 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:35:03,325 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 230 to run in 10 seconds 2017-10-23 16:35:07,823 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:35:07,823 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:35:07,823 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:35:07,824 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:35:13,327 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 230 2017-10-23 16:35:13,327 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:35:13,327 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:35:13,327 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:35:13,327 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:35:13,328 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:35:13,328 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:35:13,328 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 231 to run in 10 seconds 2017-10-23 16:35:22,830 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705) 2017-10-23 16:35:22,830 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703) 2017-10-23 16:35:22,830 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707) 2017-10-23 16:35:22,831 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds 2017-10-23 16:35:23,338 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 231 2017-10-23 16:35:23,339 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:35:23,339 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:35:23,339 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:35:23,339 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 2017-10-23 16:35:23,339 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable 2017-10-23 16:35:23,340 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups 2017-10-23 16:35:23,340 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 232 to run in 10 seconds 2017-10-23 16:35:33,344 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 232 2017-10-23 16:35:33,344 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance 2017-10-23 16:35:33,344 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0 2017-10-23 16:35:33,345 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable. 2017-10-23 16:35:33,345 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests. 
2017-10-23 16:35:33,345 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:35:33,345 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:35:33,345 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 233 to run in 10 seconds
2017-10-23 16:35:37,841 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:35:37,842 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:35:37,842 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:35:37,843 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:35:43,349 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 233
2017-10-23 16:35:43,350 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:35:43,350 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:35:43,351 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:35:43,351 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:35:43,351 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:35:43,351 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:35:43,352 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 234 to run in 10 seconds
2017-10-23 16:35:52,260 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 227
2017-10-23 16:35:52,260 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount
2017-10-23 16:35:52,260 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:35:52,260 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 235 to run in 60 seconds
2017-10-23 16:35:52,260 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 29
2017-10-23 16:35:52,260 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike
2017-10-23 16:35:52,261 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 30 to run in 60 seconds
2017-10-23 16:35:52,386 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 228
2017-10-23 16:35:52,387 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 236 to run in 60 seconds
2017-10-23 16:35:52,854 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:35:52,854 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:35:52,854 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:35:52,855 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:35:53,355 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 234
2017-10-23 16:35:53,356 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:35:53,356 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:35:53,356 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:35:53,356 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:35:53,356 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:35:53,356 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:35:53,356 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 237 to run in 10 seconds
2017-10-23 16:36:03,366 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 237
2017-10-23 16:36:03,366 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:36:03,367 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:36:03,367 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:36:03,367 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:36:03,367 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:36:03,367 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:36:03,367 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 238 to run in 10 seconds
2017-10-23 16:36:07,859 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:36:07,859 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:36:07,859 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:36:07,860 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:36:13,377 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 238
2017-10-23 16:36:13,378 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:36:13,378 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:36:13,378 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:36:13,378 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:36:13,378 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:36:13,378 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:36:13,378 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 239 to run in 10 seconds
2017-10-23 16:36:22,867 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:36:22,867 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:36:22,867 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:36:22,867 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:36:23,387 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 239
2017-10-23 16:36:23,388 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:36:23,388 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:36:23,388 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:36:23,388 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:36:23,388 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:36:23,388 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:36:23,388 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 240 to run in 10 seconds
2017-10-23 16:36:33,395 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 240
2017-10-23 16:36:33,395 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:36:33,395 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:36:33,395 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:36:33,395 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:36:33,395 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:36:33,395 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:36:33,396 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 241 to run in 10 seconds
2017-10-23 16:36:37,877 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:36:37,877 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:36:37,877 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:36:37,878 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:36:43,400 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 241
2017-10-23 16:36:43,400 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:36:43,401 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:36:43,401 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:36:43,401 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:36:43,401 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:36:43,402 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:36:43,402 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 242 to run in 10 seconds
2017-10-23 16:36:52,266 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 235
2017-10-23 16:36:52,267 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount
2017-10-23 16:36:52,267 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:36:52,267 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 243 to run in 60 seconds
2017-10-23 16:36:52,267 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 30
2017-10-23 16:36:52,267 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike
2017-10-23 16:36:52,267 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 31 to run in 60 seconds
2017-10-23 16:36:52,390 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 236
2017-10-23 16:36:52,390 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 244 to run in 60 seconds
2017-10-23 16:36:52,889 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:36:52,889 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:36:52,889 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:36:52,890 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:36:53,410 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 242
2017-10-23 16:36:53,411 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:36:53,411 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:36:53,411 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:36:53,411 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
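[Editor's note] The checkNodeRequestSpike and ClusterThroughputSuspiciousSpike lines show a spike detector that compares a new sample against a bound derived from its history, and that refuses to decide when there is not enough data. A minimal sketch of that pattern is below, assuming a made-up threshold and window; it is not the notifier_plugin_manager implementation.

from statistics import mean

def check_spike(history, new_value, bound=10.0, min_samples=5):
    # Toy spike check in the spirit of the *SuspiciousSpike log lines above.
    # The bound and min_samples values are assumptions, not the real defaults.
    if len(history) < min_samples:
        return "Not enough data to detect a spike"
    avg = mean(history)
    if avg and new_value > avg * bound:
        return f"New value {new_value} is out of bounds. Average: {avg}"
    return f"New value {new_value} is within bounds. Average: {avg}"

print(check_spike([0, 0, 0, 0, 0], 0))   # idle pool: within bounds, average 0
print(check_spike([1, 2], 50))           # too few samples: no decision yet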
2017-10-23 16:36:53,411 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:36:53,412 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:36:53,412 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 245 to run in 10 seconds
2017-10-23 16:37:03,420 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 245
2017-10-23 16:37:03,420 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:37:03,420 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:37:03,421 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:37:03,421 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:37:03,421 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:37:03,421 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:37:03,421 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 246 to run in 10 seconds
2017-10-23 16:37:07,900 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:37:07,901 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:37:07,901 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:37:07,901 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:37:13,428 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 246
2017-10-23 16:37:13,428 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:37:13,430 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:37:13,430 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:37:13,430 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:37:13,430 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:37:13,430 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:37:13,430 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 247 to run in 10 seconds
2017-10-23 16:37:22,901 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:37:22,901 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:37:22,901 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:37:22,902 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:37:23,433 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 247
2017-10-23 16:37:23,434 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:37:23,434 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:37:23,434 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:37:23,434 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:37:23,434 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:37:23,434 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:37:23,434 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 248 to run in 10 seconds
2017-10-23 16:37:33,441 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 248
2017-10-23 16:37:33,441 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:37:33,441 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:37:33,441 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:37:33,441 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
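[Editor's note] Every ~15 seconds the reconcileNodeReg/maintainConnections lines show Node1 comparing its known remotes against the pool's node registry (name, host, port) and scheduling the next retry check. A minimal sketch of that reconciliation loop is below; the registry contents mirror the log, but the function is illustrative and not the kit_zstack implementation.

from collections import namedtuple

HA = namedtuple("HA", ["host", "port"])

# Registry mirroring the remotes reported in the log above.
node_reg = {
    "Node2": HA("10.0.0.3", 9703),
    "Node3": HA("10.0.0.4", 9705),
    "Node4": HA("10.0.0.5", 9707),
}

def reconcile_node_reg(connected):
    # Compare current remotes against the registry and report which ones
    # match and which would need a (re)connect attempt.
    matched, missing = [], []
    for name, ha in node_reg.items():
        if connected.get(name) == ha:
            matched.append((name, ha))
            print(f"matched remote {name} {ha}")
        else:
            missing.append((name, ha))
    return matched, missing

matched, missing = reconcile_node_reg({
    "Node2": HA("10.0.0.3", 9703),
    "Node3": HA("10.0.0.4", 9705),
    "Node4": HA("10.0.0.5", 9707),
})
print(f"next check for retries in 15.00 seconds ({len(missing)} remotes to retry)")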
2017-10-23 16:37:33,441 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:37:33,441 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:37:33,442 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 249 to run in 10 seconds
2017-10-23 16:37:37,912 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:37:37,913 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:37:37,913 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:37:37,913 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:37:43,451 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 249
2017-10-23 16:37:43,451 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:37:43,453 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:37:43,453 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:37:43,453 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:37:43,453 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:37:43,453 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:37:43,453 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 250 to run in 10 seconds
2017-10-23 16:37:52,277 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 243
2017-10-23 16:37:52,277 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount
2017-10-23 16:37:52,277 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:37:52,277 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 251 to run in 60 seconds
2017-10-23 16:37:52,277 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 31
2017-10-23 16:37:52,277 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike
2017-10-23 16:37:52,277 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 32 to run in 60 seconds
2017-10-23 16:37:52,395 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 244
2017-10-23 16:37:52,396 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 252 to run in 60 seconds
2017-10-23 16:37:52,913 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:37:52,913 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:37:52,913 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:37:52,913 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:37:53,454 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 250
2017-10-23 16:37:53,455 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:37:53,455 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:37:53,455 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:37:53,455 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:37:53,455 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:37:53,455 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:37:53,455 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 253 to run in 10 seconds
2017-10-23 16:38:03,460 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 253
2017-10-23 16:38:03,460 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:38:03,460 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:38:03,461 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:38:03,461 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:38:03,461 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:38:03,461 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:38:03,461 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 254 to run in 10 seconds
2017-10-23 16:38:07,918 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:38:07,918 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:38:07,918 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:38:07,919 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:38:13,472 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 254
2017-10-23 16:38:13,472 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:38:13,472 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:38:13,473 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:38:13,473 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:38:13,473 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:38:13,473 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:38:13,473 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 255 to run in 10 seconds
2017-10-23 16:38:22,926 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:38:22,926 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:38:22,927 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:38:22,927 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:38:23,474 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 255
2017-10-23 16:38:23,474 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:38:23,474 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:38:23,475 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:38:23,475 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:38:23,475 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:38:23,475 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:38:23,475 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 256 to run in 10 seconds
2017-10-23 16:38:33,482 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 256
2017-10-23 16:38:33,482 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:38:33,482 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:38:33,482 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:38:33,483 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:38:33,483 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:38:33,483 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:38:33,483 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 257 to run in 10 seconds
2017-10-23 16:38:37,928 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:38:37,928 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:38:37,929 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:38:37,929 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:38:43,492 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 257
2017-10-23 16:38:43,492 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:38:43,492 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:38:43,492 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:38:43,492 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:38:43,492 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:38:43,493 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:38:43,493 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 258 to run in 10 seconds
2017-10-23 16:38:52,284 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkNodeRequestSpike with id 251
2017-10-23 16:38:52,284 | DEBUG | node.py (1999) | checkNodeRequestSpike | Node1 checking its request amount
2017-10-23 16:38:52,284 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:38:52,284 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkNodeRequestSpike with id 259 to run in 60 seconds
2017-10-23 16:38:52,285 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 32
2017-10-23 16:38:52,285 | DEBUG | notifier_plugin_manager.py ( 74) | sendMessageUponSuspiciousSpike | Not enough data to detect a ClusterThroughputSuspiciousSpike spike
2017-10-23 16:38:52,285 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 33 to run in 60 seconds
2017-10-23 16:38:52,406 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action dump_json_file with id 252
2017-10-23 16:38:52,407 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action dump_json_file with id 260 to run in 60 seconds
2017-10-23 16:38:52,940 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:38:52,940 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:38:52,940 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:38:52,940 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:38:53,503 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 258
2017-10-23 16:38:53,503 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:38:53,503 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:38:53,503 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:38:53,504 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
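[Editor's note] The isMasterAvgReqLatencyTooHigh lines compare the master replica's average request latency against the backup replicas and conclude "master has higher performance than backups" as long as the difference stays within a margin. A minimal sketch of that comparison is below; the margin and the use of the best backup as the reference are assumptions for illustration, not the monitor's real parameters.

def is_master_avg_latency_too_high(master_latency, backup_latencies, margin=1.0):
    # Flag the master if it is slower than the fastest backup by more than a
    # fixed margin (seconds). Both the margin and the comparison against the
    # minimum backup latency are assumptions made for this sketch.
    if not backup_latencies:
        return False
    return (master_latency - min(backup_latencies)) > margin

# With no client traffic all latencies are effectively zero, so the difference
# is acceptable and the log keeps reporting the master as performing well.
print(is_master_avg_latency_too_high(0.0, [0.0]))   # False
print(is_master_avg_latency_too_high(3.5, [1.0]))   # True: master lags the backup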
2017-10-23 16:38:53,504 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:38:53,504 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:38:53,504 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 261 to run in 10 seconds
2017-10-23 16:39:03,514 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 261
2017-10-23 16:39:03,514 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:39:03,515 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:39:03,515 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:39:03,515 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:39:03,516 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:39:03,516 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:39:03,517 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 262 to run in 10 seconds
2017-10-23 16:39:07,941 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:39:07,941 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:39:07,942 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:39:07,942 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:39:13,521 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 262
2017-10-23 16:39:13,521 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:39:13,521 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:39:13,521 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:39:13,521 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:39:13,522 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:39:13,522 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:39:13,522 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 263 to run in 10 seconds
2017-10-23 16:39:22,952 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:39:22,952 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:39:22,952 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:39:22,952 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:39:23,524 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 263
2017-10-23 16:39:23,524 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:39:23,524 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:39:23,525 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:39:23,525 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:39:23,525 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:39:23,525 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:39:23,525 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 264 to run in 10 seconds
2017-10-23 16:39:33,537 | TRACE | has_action_queue.py ( 66) | _serviceActions | Node1 running action checkPerformance with id 264
2017-10-23 16:39:33,538 | TRACE | node.py (1978) | checkPerformance | Node1 checking its performance
2017-10-23 16:39:33,538 | DEBUG | notifier_plugin_manager.py ( 80) | sendMessageUponSuspiciousSpike | NodeRequestSuspiciousSpike: New value 0 is within bounds. Average: 0.0
2017-10-23 16:39:33,538 | DEBUG | monitor.py ( 335) | isMasterThroughputTooLow | Node1 master throughput is not measurable.
2017-10-23 16:39:33,538 | TRACE | monitor.py ( 361) | isMasterReqLatencyTooHigh | Node1 found master's latency to be lower than the threshold for all requests.
2017-10-23 16:39:33,538 | TRACE | monitor.py ( 391) | isMasterAvgReqLatencyTooHigh | Node1 found difference between master and backups avg latencies to be acceptable
2017-10-23 16:39:33,538 | DEBUG | node.py (1995) | checkPerformance | Node1's master has higher performance than backups
2017-10-23 16:39:33,538 | TRACE | has_action_queue.py ( 36) | _schedule | Node1 scheduling action checkPerformance with id 265 to run in 10 seconds
2017-10-23 16:39:37,961 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node3 HA(host='10.0.0.4', port=9705)
2017-10-23 16:39:37,961 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node2 HA(host='10.0.0.3', port=9703)
2017-10-23 16:39:37,961 | DEBUG | kit_zstack.py ( 67) | reconcileNodeReg | Node1 matched remote Node4 HA(host='10.0.0.5', port=9707)
2017-10-23 16:39:37,962 | DEBUG | kit_zstack.py ( 50) | maintainConnections | Node1 next check for retries in 15.00 seconds
2017-10-23 16:39:43,265 | TRACE | zstack.py ( 479) | _receiveFromListener | Node1C got 1 messages through listener
2017-10-23 16:39:43,268 | DEBUG | node.py (2236) | verifySignature | Node1 authenticated V4SGRU86Z58d6TV7PBUe6f signature on request 1508776783249274
2017-10-23 16:39:43,268 | TRACE | node.py (1388) | validateClientMsg | Node1C received CLIENT message: SafeRequest: {'reqId': 1508776783249274, 'signature': 'EejcE22puEWaVFYPWeEEm89YhBeBTvcwStJbP7Wsdenfxc9n5FCew1GFYwUnJsFSPW7YatjmJPYuamAYfagBQrN', 'operation': {'type': '109', 'force': True, 'justification': None, 'schedule': {'4Tn3wZMNCvhSTXPcLinQDnHyj56DTLQtL61ki4jo2Loc': '2017-10-17T11:40:00.000000+00:00', 'Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv': '2017-10-17T11:20:00.000000+00:00', '4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA': '2017-10-17T11:35:00.000000+00:00', 'DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya': '2017-10-17T11:30:00.000000+00:00', '8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb': '2017-10-17T11:25:00.000000+00:00'}, 'name': 'upgrade-1143', 'sha256': 'f6f2ea8f45d8a057c9566a33f99474da2e5c6a6604d736121650e2730c6fb0a3', 'version': '1.1.43', 'action': 'start', 'timeout': 10}, 'identifier': 'V4SGRU86Z58d6TV7PBUe6f'}
2017-10-23 16:39:43,268 | DEBUG | node.py (1434) | processClientInBox | Node1C processing b'xo5JUY.S$6PWbpmz5XzA
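[Editor's note] The final entries show the client stack (Node1C) receiving and authenticating a signed pool-upgrade request (operation type '109') whose schedule maps node DIDs to ISO-8601 timestamps, along with a package sha256, version, and action. The sketch below parses a trimmed copy of that operation and runs a couple of basic sanity checks; the field names come from the logged request, but the checks themselves are illustrative and are not the node's validation logic.

import re
from datetime import datetime

# Trimmed copy of the upgrade operation from the CLIENT message above
# (only two of the five scheduled nodes are kept for brevity).
operation = {
    "type": "109",
    "name": "upgrade-1143",
    "version": "1.1.43",
    "action": "start",
    "sha256": "f6f2ea8f45d8a057c9566a33f99474da2e5c6a6604d736121650e2730c6fb0a3",
    "schedule": {
        "Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv": "2017-10-17T11:20:00.000000+00:00",
        "8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb": "2017-10-17T11:25:00.000000+00:00",
    },
}

def summarize_upgrade(op):
    # Illustrative sanity checks: the digest must look like sha256 hex and the
    # schedule must parse as timezone-aware timestamps.
    assert re.fullmatch(r"[0-9a-f]{64}", op["sha256"]), "sha256 must be a hex digest"
    times = {did: datetime.fromisoformat(ts) for did, ts in op["schedule"].items()}
    earliest = min(times.values())
    return (f"{op['name']} ({op['action']} v{op['version']}): "
            f"{len(times)} nodes scheduled, earliest slot {earliest.isoformat()}")

print(summarize_upgrade(operation))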