During my stint of a little over a year at BPCL, there were occasions when I had to solve major automation issues without much external support. These came in addition to my regular preventive maintenance work. I will recount a few of my experiences here through the emails I wrote about them to the higher authorities. If you are new to 'Terminal Automation System', this paper gives a really good overview of the technicalities involved - Terminal Automation System
If you want to understand the basics of PLCs and learn to program them, here's a fantastic resource I found for continuing to learn about one of the most important aspects of industrial automation - BasicPLC.com
Date - 13.03.2018 : Thunderstorm and Server Communication Failure
We had a major thunderstorm last night which affected the field instruments badly. When we came in the morning, we found our systems powered off. Even after switching the computers back on, we were not able to connect to the servers.
Dear Sir, Our automation system has been down since this morning. Loading is at a halt because we cannot prepare FAN slips. Communication with the servers has been hampered and we are not able to open the MMI client. Pinging the TAS servers returns the following errors – Primary Server - ping 192.168.1.81 : request timed out; Secondary Server - ping 192.168.1.82 : reply from 192.168.1.61 : Destination host unreachable
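Reachability checks like the ones quoted in that email are easy to script so they can be run at a glance during an outage. Here is a minimal sketch, assuming a host with the standard `ping` command on its PATH; the server IPs are the ones from the email, and the function name is my own:

```python
import platform
import subprocess

# TAS server IPs quoted in the incident email above
SERVERS = {"Primary": "192.168.1.81", "Secondary": "192.168.1.82"}

def is_reachable(ip: str, timeout_s: float = 5.0) -> bool:
    """Send a single ICMP echo request via the system ping and report success."""
    # Windows ping uses -n for count; Linux/macOS use -c
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    try:
        result = subprocess.run(
            ["ping", count_flag, "1", ip],
            capture_output=True,
            timeout=timeout_s,
        )
        return result.returncode == 0
    except (subprocess.TimeoutExpired, FileNotFoundError):
        # Treat a hung ping or a missing ping binary as unreachable
        return False

if __name__ == "__main__":
    for name, ip in SERVERS.items():
        status = "online" if is_reachable(ip) else "UNREACHABLE"
        print(f"{name} server ({ip}): {status}")
```

Run periodically (or wired into a simple alert), a check like this would have flagged the server outage before operators arrived in the morning.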
Dear Sir, We have been facing a system communication error in automation since this morning. Due to the thunderstorm and lightning yesterday, we are not able to connect to the servers and communication with several instruments has stopped. Our SAP-TAS PC's LAN port seems to be damaged as well, due to which we are unable to prepare FANs. We are able to open the MMI Client, but unless we get the Internet working on the planning room PC, we cannot open SAP (hence no data transfer). Yokogawa personnel are in regular communication with the RE, but resolving the issue will take time. There are around 25 T/Ls to be filled today for the Bhutan supply and loading has not started yet. Kindly give us permission to start the operation in manual mode.
Dear Sir, We have replaced the Ethernet switch with a spare one and the modem LAN connections have been corrected as well. The servers are now online and communication has been established. Loading has resumed. As of now, we don't have any spare Ethernet switch, so if such a situation arises again, we won't have a redundant system to fall back on. Kindly advise.
Date - 28.03.2018 : UPS Breakdown
Dear Sir, Yesterday, after the UPS broke down due to a fuse failure, we tried our best to get it working with the one spare fuse we had. Whenever we tried to plug it in, we observed sparks in the wiring. Not having much knowledge of its internal architecture, we didn't meddle with it further and called the Hitachi vendor. He told us that they aren't available in Siliguri and it would take time for them to reach here.

Meanwhile, we tried to get the auxiliary UPS working. We bypassed the primary UPS and re-energized the auxiliary UPS according to the procedure written in the manual, then shifted the load onto it in a progressive manner (switching on the instruments one by one). Earlier, power was being supplied through the WBPCB line, which is prone to inconsistency. Since the auxiliary UPS doesn't provide battery backup, we shifted the power supply to our 250 kVA generator to ensure a continuous supply. At the time of starting, it had 900 L of diesel, sufficient to run it for 20 hours.

Once that was done, we switched on our terminal servers and the other essentials to resume the loading operation through automation. There were minor glitches in server communication, the MMI Client connection and the barrier gate operation, which we resolved shortly. We also noticed that the power supply to the ROSOV was interrupted and we had to open it manually. While checking for the reason, we found that node 3 of our safety PLC was offline. We gave Yokogawa personnel remote access to our LRC and it was rectified yesterday itself. At the end of the day, we switched off all the components and shifted the supply back to the WBPCB line.

This morning we set out to procure two fuses of the same rating (100 A, 700 V) from the market. It being an OEM component, it was hard to get exact replacements, but we got two rated 100 A, 500 V. Both UPSs are working fine now and the load has been shifted back. Hitachi personnel will come tomorrow with replacement components and look into the issue, if any.
Our RE, Sri Traun, was really supportive throughout this ordeal. He is relatively new to all this, but he solves most issues with prompt action. Sri Chandan Kumar Sinha, Management Trainee, made tireless efforts in reviving the automation system at NJP TOP from yesterday till today. Today we started our loading operation normally under full automation. CC : Chandan : Good initiative.