Indy Node / INDY-1816

Investigate slowness on TestNet due to demotion



    • Type: Bug
    • Status: Complete
    • Priority: High
    • Resolution: Done
    • Version: 1.6.78
    • Sprint: Ev 18.23


      During testing of Indy Node on the Sovrin Test Network (STN), we experienced some slowness.

      It turned out that there were 12 nodes on the STN, and 3 of them were inactive, leaving exactly n - f active nodes — the bare minimum for consensus. This is a plausible (and expected) source of slowness. We want to do enough testing to verify that we don't have a more serious problem.
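The n - f arithmetic above can be sketched as follows. This is a minimal illustration of the standard BFT fault-tolerance formula (f = ⌊(n - 1)/3⌋ tolerated faults, n - f nodes needed to keep ordering requests), not code from Indy itself; the function name is hypothetical.

```python
# Sketch of the BFT fault-tolerance arithmetic behind the ticket:
# a pool of n nodes tolerates f = (n - 1) // 3 faulty nodes and
# needs n - f reachable nodes to keep reaching consensus.

def bft_tolerance(n: int) -> tuple[int, int]:
    """Return (f, minimum reachable nodes) for a pool of n nodes."""
    f = (n - 1) // 3
    return f, n - f

f, minimum = bft_tolerance(12)
print(f, minimum)  # 3 9
```

With 12 nodes and 3 of them inactive, the 9 remaining nodes are exactly the required minimum, so losing (or even briefly slowing) any one more node stalls consensus.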

      Acceptance criteria

      • Test a similar environment with unavailable nodes to see if performance is acceptable.
        • If not, create a ticket in the IS Jira (Indy-SDK) if needed to investigate how we can deal with unavailable nodes more efficiently.
      • Test the performance of the STN to verify that current performance is acceptable.
        • If not, create a ticket to create an action plan.


      • It is also not yet clear whether the slowness affects write or read requests.
        If 3 nodes were unreachable, then yes, the pool is very fragile: we require all remaining nodes for consensus, so a single struggling node could cause a temporary loss of consensus. This is a good starting point when searching the logs.
      • Doug was using the tests from https://github.com/hyperledger/indy-sdk/tree/master/vcx/wrappers/python3/demo
      • If this concerns reads, note that a read request is sent to only 1 node, selected at random by the SDK, so if the request happens to go to an unreachable node, the response can be quite slow.
      • The logs can be found at https://drive.google.com/open?id=1ZR2pmtfL5nOhWNrkQ-F2VYDeo2EIzDk8
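The slow-read scenario described in the bullets can be sketched as below. This is an illustrative model, not the actual indy-sdk code: the node names, latency values, and timeout are hypothetical, chosen only to show why routing a read to one random node out of 12 (with 3 down) makes roughly a quarter of reads wait out the timeout.

```python
import random

# Illustrative sketch (not indy-sdk internals): a read request goes to one
# randomly chosen node; if that node is unreachable, the client only learns
# after a timeout, which is the slow-read scenario described above.

def send_read(pool: list[str], unreachable: set[str], timeout_s: float = 10.0) -> float:
    """Return the simulated latency of a single read request."""
    node = random.choice(pool)   # the SDK picks exactly one node
    if node in unreachable:
        return timeout_s         # blocked until the timeout fires
    return 0.1                   # assumed fast response from a live node

pool = [f"Node{i}" for i in range(1, 13)]   # hypothetical 12-node STN pool
down = {"Node4", "Node7", "Node11"}         # the 3 inactive nodes
latencies = [send_read(pool, down) for _ in range(1000)]
slow = sum(1 for t in latencies if t >= 10.0)
print(f"{slow / len(latencies):.0%} of reads hit the timeout")  # ≈ 3/12 = 25%
```

Writes do not behave this way (they go to the whole pool), which is one reason distinguishing read slowness from write slowness matters for this investigation.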


              VladimirWork Vladimir Shishkin
              ashcherbakov Alexander Shcherbakov
              Alexander Shcherbakov, Richard Esplin, Vladimir Shishkin