PA failover

L4 Transporter

Hi,

 

We have a PA cluster that is having problems. We provide support to several clients, and some of them have been seeing odd issues for the past two days: strange PA behavior, HA failovers, and new files being created (visible in "show system files").

 

These are the logs. Why is this happening?

 

Time Severity Subtype Object EventID ID Description
===============================================================================

2016/08/25 05:45:52 high general general 0 Failed to check WildFire content upgrade info due to generic communication error
2016/08/25 05:46:08 high general general 0 9: path_monitor HB failures seen, triggering HA DP down
2016/08/25 05:46:08 critical ha datapla 0 HA Group 1: Dataplane is down: path monitor failure
2016/08/25 05:46:08 critical ha state-c 0 HA Group 1: Moved from state Active to state Non-Functional
2016/08/25 05:46:08 critical general general 0 Chassis Master Alarm: HA-event
2016/08/25 05:46:43 critical general general 0 Internal packet path monitoring failure, restarting dataplane
2016/08/25 05:46:43 high general general 0 path_monitor: exiting because service missed too many heartbeats
2016/08/25 05:46:44 critical general general 0 path_monitor: Exited 1 times, must be manually recovered.
2016/08/25 05:46:44 critical general general 0 internal_monitor: Exited 1 times, must be manually recovered.
2016/08/25 05:46:44 critical general general 0 The dataplane is restarting.
2016/08/25 05:46:54 high ha ha2-lin 0 HA2 peer link down
2016/08/25 05:46:54 high ha session 0 HA Group 1: Ignoring session synchronization due to HA2-unavailable
2016/08/25 05:46:54 critical ha ha2-lin 0 All HA2 links down
2016/08/25 05:47:57 critical ha ha2-lin 0 HA2 link down
2016/08/25 05:48:03 high general general 0 Dataplane is now up
2016/08/25 05:48:10 high ha session 0 HA Group 1: Ignoring session synchronization due to HA2-unavailable
2016/08/25 05:48:42 critical general general 0 Chassis Master Alarm: Cleared
2016/08/25 05:50:29 high general general 0 19: path_monitor HB failures seen, triggering HA DP down
2016/08/25 05:50:29 critical ha datapla 0 HA Group 1: Dataplane is down: path monitor failure
2016/08/25 05:50:29 high ha state-c 0 HA Group 1: Moved from state Passive to state Non-Functional
2016/08/25 05:50:29 critical general general 0 Chassis Master Alarm: HA-event
2016/08/25 05:50:53 critical general general 0 Internal packet path monitoring failure, restarting dataplane
2016/08/25 05:50:53 high general general 0 path_monitor: exiting because service missed too many heartbeats
2016/08/25 05:50:54 critical general general 0 path_monitor: Exited 1 times, must be manually recovered.
2016/08/25 05:50:54 critical general general 0 internal_monitor: Exited 1 times, must be manually recovered.
2016/08/25 05:50:54 critical general general 0 The dataplane is restarting.
2016/08/25 05:51:03 high ha ha2-lin 0 HA2 peer link down
2016/08/25 05:51:03 high ha session 0 HA Group 1: Ignoring session synchronization due to HA2-unavailable
2016/08/25 05:51:03 critical ha ha2-lin 0 All HA2 links down
2016/08/25 05:52:06 critical ha ha2-lin 0 HA2 link down
2016/08/25 05:52:11 high general general 0 Dataplane is now up
2016/08/25 05:52:18 high ha session 0 HA Group 1: Ignoring session synchronization due to HA2-unavailable
2016/08/25 05:52:50 critical general general 0 Chassis Master Alarm: Cleared
2016/08/25 05:54:37 high general general 0 19: path_monitor HB failures seen, triggering HA DP down
2016/08/25 05:54:37 critical ha datapla 0 HA Group 1: Dataplane is down: path monitor failure
2016/08/25 05:54:37 high ha state-c 0 HA Group 1: Moved from state Passive to state Non-Functional
2016/08/25 05:54:37 critical general general 0 Chassis Master Alarm: HA-event
2016/08/25 05:55:02 critical general general 0 Internal packet path monitoring failure, restarting dataplane
2016/08/25 05:55:02 high general general 0 path_monitor: exiting because service missed too many heartbeats
2016/08/25 05:55:03 critical general general 0 path_monitor: Exited 1 times, must be manually recovered.
2016/08/25 05:55:03 critical general general 0 internal_monitor: Exited 1 times, must be manually recovered.
2016/08/25 05:55:03 critical general general 0 The dataplane is restarting.
2016/08/25 05:55:12 high ha ha2-lin 0 HA2 peer link down
2016/08/25 05:55:12 high ha session 0 HA Group 1: Ignoring session synchronization due to HA2-unavailable
2016/08/25 05:55:12 critical ha ha2-lin 0 All HA2 links down
2016/08/25 05:56:17 critical ha ha2-lin 0 HA2 link down
2016/08/25 05:56:21 high general general 0 Dataplane is now up
2016/08/25 05:56:29 high ha session 0 HA Group 1: Ignoring session synchronization due to HA2-unavailable
2016/08/25 05:57:01 critical general general 0 Chassis Master Alarm: Cleared
2016/08/25 05:58:48 high general general 0 19: path_monitor HB failures seen, triggering HA DP down
2016/08/25 05:58:48 critical ha datapla 0 HA Group 1: Dataplane is down: path monitor failure
2016/08/25 05:58:48 high ha state-c 0 HA Group 1: Moved from state Passive to state Non-Functional
2016/08/25 05:58:48 critical general general 0 Chassis Master Alarm: HA-event
2016/08/25 05:59:12 critical general general 0 Internal packet path monitoring failure, restarting dataplane
2016/08/25 05:59:12 high general general 0 path_monitor: exiting because service missed too many heartbeats
2016/08/25 05:59:13 critical general general 0 path_monitor: Exited 1 times, must be manually recovered.
2016/08/25 05:59:13 critical general general 0 internal_monitor: Exited 1 times, must be manually recovered.
2016/08/25 05:59:13 critical general general 0 The dataplane is restarting.
2016/08/25 05:59:22 critical general general 0 data_plane: restarts exhausted, rebooting system
2016/08/25 06:02:48 high general system- 1 The system is starting up.
2016/08/25 06:03:05 high general general 0 Dataplane is now up
2016/08/25 06:03:57 high ha config- 0 HA Group 1: Commit on peer device with running configuration not synchronized; synchronize manually
2016/08/25 06:04:02 high ha peer-ve 0 HA Group 1: Anti-Virus version does not match
2016/08/25 06:04:02 high ha peer-ve 0 HA Group 1: Global Protect Client Software version does not match
2016/08/25 06:04:02 high ha peer-ve 0 HA Group 1: Application Content version does not match
2016/08/25 06:04:02 high ha peer-ve 0 HA Group 1: Threat Content version does not match
2016/08/25 06:04:05 high ha peer-ve 0 HA Group 1: Anti-Virus version does not match
2016/08/25 06:04:05 high ha peer-ve 0 HA Group 1: Global Protect Client Software version does not match
2016/08/25 06:04:05 high ha peer-ve 0 HA Group 1: Application Content version does not match
2016/08/25 06:04:05 high ha peer-ve 0 HA Group 1: Threat Content version does not match
2016/08/25 06:15:52 high general general 0 Failed to check WildFire content upgrade info due to generic communication error
2016/08/25 06:30:52 high general general 0 Failed to check WildFire content upgrade info due to generic communication error
2016/08/25 06:45:52 high general general 0 Failed to check WildFire content upgrade info due to generic communication error
2016/08/25 07:00:32 high general general 0 Failed to check Antivirus content upgrade info due to generic communication error
2016/08/25 07:00:52 high general general 0 Failed to check WildFire content upgrade info due to generic communication error
2016/08/25 07:15:52 high general general 0 Failed to check WildFire content upgrade info due to generic communication error
2016/08/25 07:30:52 high general general 0 Failed to check WildFire content upgrade info due to generic communication error
2016/08/25 07:45:52 high general general 0 Failed to check WildFire content upgrade info due to generic communication error
2016/08/25 08:00:32 high general general 0 Failed to check Antivirus content upgrade info due to generic communication error
2016/08/25 08:00:52 high general general 0 Failed to check WildFire content upgrade info due to generic communication error
2016/08/25 08:15:52 high general general 0 Failed to check WildFire content upgrade info due to generic communication error
2016/08/25 08:30:52 high general general 0 Failed to check WildFire content upgrade info due to generic communication error
2016/08/25 08:45:52 high general general 0 Failed to check WildFire content upgrade info due to generic communication error
2016/08/25 09:00:32 high general general 0 Failed to check Antivirus content upgrade info due to generic communication error
2016/08/25 09:00:52 high general general 0 Failed to check WildFire content upgrade info due to generic communication error
2016/08/25 09:15:52 high general general 0 Failed to check WildFire content upgrade info due to generic communication error
2016/08/25 09:30:52 high general general 0 Failed to check WildFire content upgrade info due to generic communication error
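
For the "Commit on peer device with running configuration not synchronized" warning near the end of that log, the running configuration can be re-synced from the CLI once both peers are back up; the content-version mismatches clear once the same update versions are installed on both peers. A minimal sketch (run on the peer whose configuration should be pushed):

request high-availability sync-to-remote running-config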

show system files

/var/cores/:
total 4.0K
drwxrwxrwx 2 root root 4.0K Aug 23 10:16 crashinfo

/var/cores/crashinfo:
total 0

/opt/dpfs/var/cores/:
total 4.7M
drwxrwxrwx 2 root root 4.0K Aug 25 06:02 old
drwxrwxrwx 2 root root 4.0K Aug 25 06:03 crashinfo
-rw-r--r-- 1 root root 441K Aug 25 06:15 all_pktproc_3_7.0.9_1.tar.gz
-rw-r--r-- 1 root root 442K Aug 25 06:15 flow_mgmt_7.0.9_3.tar.gz
-rw-r--r-- 1 root root 458K Aug 25 06:15 flow_mgmt_7.0.9_0.tar.gz
-rw-r--r-- 1 root root 441K Aug 25 06:15 flow_mgmt_7.0.9_2.tar.gz
-rw-r--r-- 1 root root 442K Aug 25 06:15 all_pktproc_3_7.0.9_2.tar.gz
-rw-r--r-- 1 root root 1019K Aug 25 06:15 pktproc_n_log_7.0.9_0.tar.gz
-rw-r--r-- 1 root root 442K Aug 25 06:15 flow_mgmt_7.0.9_1.tar.gz
-rw-r--r-- 1 root root 544K Aug 25 06:15 pktproc_n_log_7.0.9_1.tar.gz
-rw-r--r-- 1 root root 465K Aug 25 06:15 all_pktproc_3_7.0.9_0.tar.gz

/opt/dpfs/var/cores/old:
total 4.6M
-rw-rw-rw- 1 root root 9.2M Aug 25 05:59 core.1285
-rw-rw-rw- 1 root root 9.2M Aug 25 05:59 core.1292

/opt/dpfs/var/cores/crashinfo:
total 216K
-rw-r--r-- 1 root root 24 Aug 25 05:47 pktproc_n_log_7.0.9_0.pcap
-rw-r--r-- 1 root root 24 Aug 25 05:48 all_pktproc_3_7.0.9_0.pcap
-rw-r--r-- 1 root root 24 Aug 25 05:48 flow_mgmt_7.0.9_0.pcap
-rw-rw-rw- 1 root root 18K Aug 25 05:48 pktproc_n_log_7.0.9_0.info
-rw-rw-rw- 1 root root 19K Aug 25 05:48 all_pktproc_3_7.0.9_0.info
-rw-rw-rw- 1 root root 19K Aug 25 05:48 flow_mgmt_7.0.9_0.info
-rw-r--r-- 1 root root 24 Aug 25 05:52 all_pktproc_3_7.0.9_1.pcap
-rw-r--r-- 1 root root 24 Aug 25 05:52 flow_mgmt_7.0.9_1.pcap
-rw-rw-rw- 1 root root 19K Aug 25 05:52 all_pktproc_3_7.0.9_1.info
-rw-rw-rw- 1 root root 19K Aug 25 05:52 flow_mgmt_7.0.9_1.info
-rw-r--r-- 1 root root 24 Aug 25 05:56 all_pktproc_3_7.0.9_2.pcap
-rw-r--r-- 1 root root 24 Aug 25 05:56 flow_mgmt_7.0.9_2.pcap
-rw-rw-rw- 1 root root 19K Aug 25 05:56 all_pktproc_3_7.0.9_2.info
-rw-rw-rw- 1 root root 18K Aug 25 05:56 flow_mgmt_7.0.9_2.info
-rw-r--r-- 1 root root 24 Aug 25 06:02 pktproc_n_log_7.0.9_1.pcap
-rw-r--r-- 1 root root 24 Aug 25 06:03 flow_mgmt_7.0.9_3.pcap
-rw-rw-rw- 1 root root 19K Aug 25 06:03 pktproc_n_log_7.0.9_1.info
-rw-rw-rw- 1 root root 19K Aug 25 06:03 flow_mgmt_7.0.9_3.info

 

 


4 REPLIES

L4 Transporter

Hi,

 

The logs show that the firewall is crashing because of an internal path monitor failure. This could be hardware or software related; you would need to dig deeper into the logs to tell which.

 

The hardware troubleshooting guide is below:

https://live.paloaltonetworks.com/t5/Learning-Articles/Troubleshooting-Palo-Alto-Networks-Hardware-I...

 

I recommend you open a case with TAC for further investigation.
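
If it helps, the tech support bundle can be generated and exported in one step from the CLI. A minimal sketch, assuming an SCP server at 192.0.2.10 (the user, host, and path are placeholders; there is also a tftp export tech-support variant if SCP is not available):

scp export tech-support to user@192.0.2.10:/upload/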

 

2016/08/25 05:59:12 critical general general 0 Internal packet path monitoring failure, restarting dataplane
2016/08/25 05:59:12 high general general 0 path_monitor: exiting because service missed too many heartbeats
2016/08/25 05:59:13 critical general general 0 path_monitor: Exited 1 times, must be manually recovered.
2016/08/25 05:59:13 critical general general 0 internal_monitor: Exited 1 times, must be manually recovered.
2016/08/25 05:59:13 critical general general 0 The dataplane is restarting.
2016/08/25 05:59:22 critical general general 0 data_plane: restarts exhausted, rebooting system
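
Once the box is back up, you can confirm the HA and monitoring state on both peers from the CLI (output omitted here):

show high-availability state
show high-availability all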

 

 

Hope this helps,

Ben

Thanks a lot. Yes, I opened a case with TAC and sent the tech support file. Is there any file in the tech support bundle I should look at to determine the cause (SW or HW)? Thanks.

Check the core files. There is useful info in there.
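
For reference: the core files themselves mostly need TAC to decode (they require matching symbols), but you can list and export them from the CLI. A sketch, assuming an SCP server at 192.0.2.10; the exact core-file keywords vary by PAN-OS version, so confirm with "?" completion (the dataplane cores shown above use the corresponding dataplane option):

show system files
scp export core-file management-plane from <filename> to user@192.0.2.10:/cores/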

L4 Transporter

Hi, 

How was this problem solved? Thanks.

 
