- Type: Task
- Status: Open
- Priority: Normal
- Resolution: Unresolved
- Component/s: ics_mlp1Actor
- Labels: None
Some summit work disconnected the mlp1 port/service from the mlp1Actor. The actor caught the failure and started trying to reconnect. That loop did eventually succeed a few hours later, but the failure was never explicitly passed on to the agActor or anyone else:
2025-11-18 22:15:26.009Z mlp1 20 mlp1.py:265 xmtr: 2025-11-18 22:15:26.000643+00:00
2025-11-18 22:15:26.349Z mlp1 20 mlp1.py:186 rcvr: 2025-11-18 22:15:26.340773+00:00
2025-11-18 22:15:26.349Z cmds 20 CommandLink.py:122 > 0 0 i telescopeState=0,0,80125.7,-90.017783,89.964921,-0.002726,0,0
2025-11-18 22:15:27.009Z mlp1 20 mlp1.py:265 xmtr: 2025-11-18 22:15:27.000472+00:00
2025-11-18 22:15:33.002Z mlp1 30 mlp1.py:100 xcvr: Receiver not alive
2025-11-18 22:15:33.129Z mlp1 20 mlp1.py:73 xcvr: Transmitter: stop
2025-11-18 22:15:33.224Z mlp1 20 mlp1.py:73 xcvr: Receiver: stop
2025-11-18 22:15:34.566Z mlp1 20 mlp1.py:79 xcvr: (re)start
2025-11-18 22:15:34.962Z mlp1 20 mlp1.py:66 xcvr: Receiver: (re)start
2025-11-18 22:15:34.994Z mlp1 20 mlp1.py:131 Receiver.__del__:
2025-11-18 22:15:35.020Z mlp1 20 mlp1.py:66 xcvr: Transmitter: (re)start
2025-11-18 22:15:35.023Z mlp1 20 mlp1.py:243 Transmitter.__del__:
2025-11-18 22:15:42.000Z mlp1 30 mlp1.py:100 xcvr: Receiver not alive
So catch that better and report it. I'd say we should add a status field to the telescopeState key, with the usual "OK" vs "mlp1 link down", say.
And either the mlp1Actor or the agActor should probably send a gen2 alert.
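One way the status field above could be driven: a small link monitor that the receiver pings on every packet, and that the keyword-publishing path queries for "OK" vs "mlp1 link down". This is a minimal sketch, not existing mlp1Actor code; the class and method names (LinkMonitor, heartbeat, status) are hypothetical, and the timeout roughly mirrors the ~6 s "Receiver not alive" gap in the log above.

```python
import threading
import time


class LinkMonitor:
    """Tracks mlp1 link liveness so failures can be reported via a
    telescopeState status field instead of being swallowed by the
    reconnect loop. Illustrative sketch only; names are hypothetical."""

    def __init__(self, timeout=6.0):
        self.timeout = timeout          # seconds without data before "link down"
        self._last_rx = time.monotonic()
        self._lock = threading.Lock()

    def heartbeat(self):
        """Call from the receiver whenever a packet arrives."""
        with self._lock:
            self._last_rx = time.monotonic()

    def status(self):
        """Return "OK" or "mlp1 link down" based on receiver liveness."""
        with self._lock:
            stale = (time.monotonic() - self._last_rx) > self.timeout
        return "mlp1 link down" if stale else "OK"
```

The same status() result could be the trigger for the gen2 alert: whichever actor publishes the keyword raises the alert on the OK-to-down transition, so the alert fires even if the auto-reconnect later succeeds.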
Nice that the auto re-connection worked....