We use ControlMaster in SSH links to reuse connections as much as possible. However, the current design cannot maintain multiple connections correctly: the control master is a singleton, and it assumes every request targets the same SSH link (the one that created the singleton instance in the first place).
Everything works fine until multiple SSH links are established to different target machines. In real-world situations this happens when multiple DUTs are connected via SSH, or when the DUT and the instrument both use SSH links.
Currently it's not a disaster only because a single DUT is connected at a time (via the SSH link Python class in question), and the instrument connection is handled by a separate shared library.
To solve this, we need to rethink one question: when should a control master be shared? Conceptually, two SSH links can share a control master if and only if their (user, host, port) tuples are identical; otherwise they should not. A possible solution is to add another layer on top of the control master class, say a ControlMasterDispatcher, which redirects each request to the correct control master and creates one when the first command for that target arrives. A rough sketch of this idea follows below.
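For illustration only, here is a minimal sketch of the dispatcher idea. It assumes a hypothetical control_master_factory(user, host, port) that builds whatever control master object we end up with, and a run(command) method on that object; neither name is the real API in our codebase.

  import threading

  class ControlMasterDispatcher(object):
      """Routes each request to the control master for its (user, host, port).

      A control master is created lazily the first time a given
      (user, host, port) tuple is seen; later requests for the same tuple
      reuse that instance, so links to different targets no longer collide
      in one shared singleton.
      """

      def __init__(self, control_master_factory):
          self._factory = control_master_factory
          self._masters = {}          # (user, host, port) -> control master
          self._lock = threading.Lock()

      def get_master(self, user, host, port=22):
          key = (user, host, port)
          with self._lock:
              master = self._masters.get(key)
              if master is None:
                  master = self._factory(user, host, port)
                  self._masters[key] = master
              return master

      def run(self, user, host, command, port=22):
          # First command for a target creates its master; later ones reuse it.
          return self.get_master(user, host, port).run(command)

The existing singleton callers would then go through a shared dispatcher instance instead of the control master directly, e.g. dispatcher.run('root', dut_ip, 'uname -a').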
The control master's main thread may need some extra care. If the number of concurrent SSH links is expected to be small, we can simply spawn a new thread for each SSH link (see the sketch below). However, if the thread count becomes an issue, we may need a round-robin queue, select(), or epoll() to deal with it.
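A sketch of the "one thread per SSH link" option, again with hypothetical names: each control master gets a worker thread that drains its own command queue, so a slow target cannot block requests to other targets.

  import queue
  import threading

  class ControlMasterWorker(object):
      """Runs commands for a single (user, host, port) on its own thread."""

      def __init__(self, master):
          self._master = master
          self._queue = queue.Queue()
          self._thread = threading.Thread(target=self._loop, daemon=True)
          self._thread.start()

      def submit(self, command, done_callback):
          # Enqueue the command; the worker thread later calls
          # done_callback(result) with whatever master.run() returns.
          self._queue.put((command, done_callback))

      def _loop(self):
          while True:
              command, done_callback = self._queue.get()
              done_callback(self._master.run(command))

If we instead expect many concurrent links, the same per-target queues could be multiplexed onto a single thread with select()/epoll() over the masters' sockets, which is the alternative mentioned above.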
Comment 1 by petershih@chromium.org, Jul 6 2017