03-23-2013 08:20 AM
We have deployed around 10 pairs of PA 2000 platforms in different networks within our environment.
These networks generate roughly the same type of traffic. What we experience is that whichever firewall is active goes in for an automatic reboot. Traffic is not interrupted because the passive device takes over.
We see this in all the pairs, and there is no pattern to the timing of the reboots.
Has anybody heard of or experienced a similar problem?
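One way to capture the exact reboot times across all the boxes is a small polling script along these lines (only a rough sketch, assuming the XML API is enabled on the management interface; the hostnames, API key, and uptime string format below are placeholders):

import ssl
import time
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

FIREWALLS = ["fw-pair1-a", "fw-pair1-b", "fw-pair2-a"]   # placeholder hostnames
API_KEY = "PASTE-API-KEY-HERE"                            # placeholder key
CMD = urllib.parse.quote("<show><system><info></info></system></show>")

# Management interfaces usually run self-signed certificates.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

def uptime_seconds(text):
    # Assumed uptime format: "12 days, 3:45:10" or just "3:45:10"
    days = 0
    if "day" in text:
        day_part, text = text.split(",", 1)
        days = int(day_part.split()[0])
    h, m, s = (int(x) for x in text.strip().split(":"))
    return days * 86400 + h * 3600 + m * 60 + s

def get_uptime(host):
    url = f"https://{host}/api/?type=op&cmd={CMD}&key={API_KEY}"
    with urllib.request.urlopen(url, context=ctx) as resp:
        root = ET.fromstring(resp.read())
    return uptime_seconds(root.findtext(".//uptime"))

last = {}
while True:
    for fw in FIREWALLS:
        try:
            up = get_uptime(fw)
        except Exception as exc:
            print(f"{time.ctime()} {fw}: poll failed ({exc})")
            continue
        if fw in last and up < last[fw]:
            # Uptime went backwards, so the unit rebooted since the last poll.
            print(f"{time.ctime()} {fw}: uptime dropped, likely rebooted")
        last[fw] = up
    time.sleep(300)   # poll every 5 minutes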
03-23-2013 08:35 AM
PAN-OS 4.1.8-h3.
We have opened TAC cases. They suggested replacing the HDDs, which we did.
Even that did not help; we still experienced the reboots.
03-23-2013 08:49 AM
If you see the issue with all pairs, you should try upgrading your PAN-OS first (if you have not already: first 4.1.10, then 5.0.3 if the issue occurs again). There are some bugs I have seen in some environments which were not listed in the release notes.
Changing the HDD is not a solution, I think; you are talking about 10 pairs.
03-23-2013 09:11 AM
PA TAC had escalated this to their highest level within PA, and they advised this solution of changing the hard disks.
From the console logs, we see that the hard disks were freezing.
They have not recommended an upgrade yet, because they did not see anything in the logs.
03-23-2013 09:17 AM
Very strange; how could all the disks have this problem?
Upgrading PAN-OS is reversible; nothing else comes to my mind right now.
03-24-2013 12:07 PM
I would recommend trying 4.1.11 (at least on one pair) and seeing if that resolves anything.
Previous versions in the 4.1 series have had some issues (like high mgmt-plane CPU usage, if I recall correctly) which are supposed to be fixed in 4.1.11 (or was that only for the PAN-OS 5.x series, where 5.0.3 fixed it)?
Also check stuff like temperature and moisture in the area.
If all 10 pairs were delivered at the same time, it could just be a bad batch of hard drives if you are unlucky.
But it could also be the case (especially if all 10 pairs were delivered at the same time) that the drives had too many G-forces applied to them during transport (for example, if the whole container was dropped from above 1 meter of height), or that something else happened in transit.
Also, if possible, install 5.0.3 on one pair and then perform a factory reset (in case something odd happened to the contents of the drives).
03-24-2013 03:02 PM
mikand - the high management-plane CPU issue that is fresh in my mind comes from use of the User-ID agent, and that fix is in 4.1.12 (not out yet; we're upgrading to 4.1.12 as soon as it goes GA).
03-24-2013 03:19 PM
Ah yes... I thought something was off there, since 4.1.11 came out the first week of February and the high mgmt-plane CPU issue has been around here since then. The first resolution seems to be in 5.0.3, released last week.
03-24-2013 03:25 PM
Just FYI, support assures me that 4.1.12 will contain the high-CPU fix as well... it's due out the last week of April (straight from support).
05-13-2013 02:37 PM
Although our PA-2050s are on version 5.0.4, this sounds very similar to the behavior we experienced, which we were told would be fixed in 5.0.5. The bug ID is 46721.
05-13-2013 05:25 PM
All, just to clarify: BugID 46721 is for a dataplane (DP) restart while doing SSL decryption. This crash will have a backtrace; if there is no backtrace, you may be experiencing a different issue. This is a problem specific to 5.0.x code; it does not happen in 4.1.x or 4.0.x. It is triggered by the presence of reordered packets carrying the SSL records, which typically happens with a high volume of traffic. It is fixed in 5.0.5.
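If you want a rough feel for how much reordered SSL traffic a given link carries (the trigger described above), a short capture analysis along these lines can help. It is only a sketch: the pcap filename is a placeholder, it ignores TCP sequence wraparound, and it cannot distinguish retransmissions from true reordering.

from collections import defaultdict
from scapy.all import rdpcap, IP, TCP

packets = rdpcap("ssl-traffic.pcap")        # placeholder capture file

expected = {}                # flow -> next expected sequence number
out_of_order = defaultdict(int)

for pkt in packets:
    if not (pkt.haslayer(IP) and pkt.haslayer(TCP)):
        continue
    tcp = pkt[TCP]
    if 443 not in (tcp.sport, tcp.dport):
        continue
    flow = (pkt[IP].src, tcp.sport, pkt[IP].dst, tcp.dport)
    payload_len = len(bytes(tcp.payload))
    # A data segment starting before the highest point already seen is a
    # retransmission or a reordered packet.
    if flow in expected and payload_len and tcp.seq < expected[flow]:
        out_of_order[flow] += 1
    expected[flow] = max(expected.get(flow, 0), tcp.seq + payload_len)

for flow, count in sorted(out_of_order.items(), key=lambda kv: -kv[1]):
    print(flow, count)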