10-20-2014 07:18 AM
We are seeing a large number of URL logs being categorized as 'not-resolved', at a rate of about 5,500 per hour. Reviewing older logs for comparison, it appears this started a few days ago. What is strange is that a site will be categorized as 'not-resolved' but a second or two later is properly categorized.
For example: www.napaautopro.com category = not-resolved, then 2 seconds later www.napaautopro.com category = motor-vehicles.
It seems as if we are unable to check in with PAN-DB and the lookups are timing out. Also, when trying to reclassify a site through the firewall, we get timeouts. I suspect these symptoms are related.
I have submitted a ticket to support but wanted to see if any other users have experienced this too. We are currently on PAN-OS 5.0.10 and our PAN-DB URL subscription is active.
10-20-2014 07:34 AM
Hi Lewis,
The issue might be high load on the management-server and device-server processes. Can you run "show system resources | match srvr" and paste the output into the case?
You can try restarting both processes. It will not impact your production traffic; it only restarts the services that handle URL-related processing.
debug software restart management-server
debug software restart device-server
Hope this helps. Thank you.
10-20-2014 07:37 AM
Hello Lewis,
The category of a URL is determined in the following top-down approach:
1. Block list
2. Allow list
3. Custom Categories
4. DP URL cache
5. MP URL database (base DB)
6. MP dynamic database (dynamic DB)
7. Cloud server (cloud DB), if dynamic URL filtering is enabled in the URL Filtering profile
URLs will show up as 'not-resolved' if there is no response from the MP within 5 seconds (the default timeout, which can be changed) or if the MP is unable to connect to the cloud (when it does not have that URL in its database).
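To make the order concrete, here is a minimal Python sketch of that decision flow. Every name in it (block_list, query_mp, and so on) is hypothetical; it only illustrates the logic described above, not actual PAN-OS code.

    # Hypothetical illustration of the top-down category lookup.
    def categorize(url, mp_timeout_s=5.0):
        # Stages 1-4 are answered on the dataplane itself.
        for lookup in (block_list, allow_list, custom_categories, dp_url_cache):
            category = lookup(url)
            if category is not None:
                return category
        try:
            # Stages 5-7: the MP checks its base DB, then its dynamic DB,
            # then the cloud (if dynamic URL filtering is enabled).
            return query_mp(url, timeout=mp_timeout_s)
        except TimeoutError:
            # No answer from the MP within the timeout: logged as not-resolved.
            return "not-resolved"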
Make sure the management-server is not too busy. You can restart the management-server (typically during off-peak hours) and the device-server. If the problem is caused by the MP or the device-server, this should fix it.
>debug software restart management-server
>debug software restart device-server
Also check whether the firewall is able to connect to the cloud:
>show url-cloud status
Hope it helps!
-Dileep
10-20-2014 07:44 AM
Hi Lewis,
For any URL category lookup, the following sequence of events happens:
1. The DP looks for the category in the DP cache.
2. If it does not find it there, it contacts the MP.
3. If the MP does not respond within 5 seconds, the URL is classified as "not-resolved".
Please provide the following output:
1. show counter global filter delta yes | match url
2. After 30 seconds, execute the same command again: "show counter global filter delta yes | match url"
3. After another 30 seconds, execute the same command again: "show counter global filter delta yes | match url"
Provide the output for 2 and 3.
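If you want to capture those three iterations automatically, below is a rough Python sketch using the PAN-OS XML API from a management host. HOST and API_KEY are placeholders, the XML form of the CLI command is my assumption (verify it with "debug cli on"), and the "| match url" filter is applied client-side because the API does not support pipes.

    import time
    import requests  # third-party HTTP library

    HOST = "firewall.example.com"  # placeholder
    API_KEY = "YOUR-API-KEY"       # placeholder
    CMD = ("<show><counter><global><filter><delta>yes</delta>"
           "</filter></global></counter></show>")

    for i in range(3):
        r = requests.get("https://" + HOST + "/api/",
                         params={"type": "op", "cmd": CMD, "key": API_KEY},
                         verify=False,  # many firewalls use self-signed certs
                         timeout=30)
        print("--- iteration %d ---" % (i + 1))
        # Client-side equivalent of "| match url".
        print("\n".join(ln for ln in r.text.splitlines() if "url" in ln))
        if i < 2:
            time.sleep(30)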
Regards,
Hardik Shah
10-20-2014 07:55 AM
lewiscc@PAN1-5020(active)> show counter global filter delta yes | match url
flow_url_category_msg_recv 3406 6 info flow pktproc Flow msg: url category messages received
ctd_handle_reset_and_url_exit 217 0 info ctd pktproc Handle reset and url exit
ctd_url_block 12900 26 info ctd pktproc sessions blocked by url filtering
ctd_url_block_cont 2031 3 info ctd pktproc sessions prompted with block/cont for url filtering
log_url_cnt 487575 1009 info log system Number of url logs
log_url_req_cnt 10217 20 info log system Number of url request logs
proxy_url_request_timeout 127 0 warn proxy pktproc The url category request for ssl proxy is timedout
proxy_url_request_pkt_drop 610 1 drop proxy pktproc The number of packets get dropped because of waiting for url category request in ssl proxy
proxy_url_category_unknown 62 0 info proxy pktproc Number of sessions checked by proxy with unknown url category
url_db_request 10217 20 info url pktproc Number of URL database request
url_db_reply 15010 30 info url pktproc Number of URL reply
url_flow_state_invalid 887 1 info url session The session's state was changed when receiving url response
url_flow_unmatched 74 0 warn url pktproc The url response does match flow key
url_request_timeout 2059 4 info url pktproc The url category request is timedout
url_request_pkt_drop 23631 48 drop url pktproc The number of packets get dropped because of waiting for url category request
url_session_not_in_wait 1585 2 error url system The session is not waiting for url
url_session_not_in_ssl_wait 130 0 error url system The session is not waiting for url in ssl proxy
url_reply_not_distribute 4 0 warn url pktproc Number of URL reply not distributed to other DPs
lewiscc@PAN1-5020(active)> show counter global filter delta yes | match url
flow_url_category_msg_recv 739 20 info flow pktproc Flow msg: url category messages received
ctd_handle_reset_and_url_exit 53 0 info ctd pktproc Handle reset and url exit
ctd_url_block 999 27 info ctd pktproc sessions blocked by url filtering
ctd_url_block_cont 190 4 info ctd pktproc sessions prompted with block/cont for url filtering
log_url_cnt 36049 1018 info log system Number of url logs
log_url_req_cnt 869 24 info log system Number of url request logs
proxy_url_request_timeout 5 0 warn proxy pktproc The url category request for ssl proxy is timedout
proxy_url_request_pkt_drop 30 0 drop proxy pktproc The number of packets get dropped because of waiting for url category request in ssl proxy
proxy_url_category_unknown 6 0 info proxy pktproc Number of sessions checked by proxy with unknown url category
url_db_request 869 24 info url pktproc Number of URL database request
url_db_reply 3076 86 info url pktproc Number of URL reply
url_flow_state_invalid 160 4 info url session The session's state was changed when receiving url response
url_flow_unmatched 6 0 warn url pktproc The url response does match flow key
url_request_timeout 167 4 info url pktproc The url category request is timedout
url_request_pkt_drop 2295 64 drop url pktproc The number of packets get dropped because of waiting for url category request
url_session_not_in_wait 399 11 error url system The session is not waiting for url
url_session_not_in_ssl_wait 18 0 error url system The session is not waiting for url in ssl proxy
lewiscc@PAN1-5020(active)> show counter global filter delta yes | match url
flow_url_category_msg_recv 42 0 info flow pktproc Flow msg: url category messages received
ctd_handle_reset_and_url_exit 27 0 info ctd pktproc Handle reset and url exit
ctd_url_block 892 30 info ctd pktproc sessions blocked by url filtering
ctd_url_block_cont 99 2 info ctd pktproc sessions prompted with block/cont for url filtering
log_url_cnt 29500 1044 info log system Number of url logs
log_url_req_cnt 616 21 info log system Number of url request logs
proxy_url_request_timeout 21 0 warn proxy pktproc The url category request for ssl proxy is timedout
proxy_url_request_pkt_drop 81 2 drop proxy pktproc The number of packets get dropped because of waiting for url category request in ssl proxy
proxy_url_category_unknown 3 0 info proxy pktproc Number of sessions checked by proxy with unknown url category
url_db_request 616 21 info url pktproc Number of URL database request
url_db_reply 244 8 info url pktproc Number of URL reply
url_request_timeout 65 2 info url pktproc The url category request is timedout
url_request_pkt_drop 721 24 drop url pktproc The number of packets get dropped because of waiting for url category request
url_session_not_in_wait 22 0 error url system The session is not waiting for url
url_session_not_in_ssl_wait 7 0 error url system The session is not waiting for url in ssl proxy
10-20-2014 08:11 AM
Hi Lewis,
I can see the following counter incrementing in each iteration of the output:
url_request_pkt_drop 2295 64 drop url pktproc The number of packets get dropped because of waiting for url category request
It means packets are being dropped while waiting for a response from the MP.
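To make the snapshots easier to diff, here is a small, hypothetical Python helper that maps each counter name to its (delta, per-second rate) pair. The column layout is taken from the output pasted above, where the second column is the delta and the third is the rate.

    def parse_counters(snapshot):
        # Parses lines like:
        #   url_db_request  869  24  info  url  pktproc  ...
        counters = {}
        for line in snapshot.strip().splitlines():
            parts = line.split()
            if len(parts) >= 3 and parts[1].isdigit() and parts[2].isdigit():
                counters[parts[0]] = (int(parts[1]), int(parts[2]))
        return counters

For instance, in the second iteration above, url_db_request has a delta of 869 while url_request_pkt_drop has a delta of 2295; drops staying nonzero across all three runs fits the MP not answering within the timeout.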
Please provide the output of "show system resources | match srvr".
Regards,
Hardik Shah
10-20-2014 08:25 AM
Output from show system resources | match srv:
2177 20 0 681m 466m 11m S 97 11.8 6212:03 devsrvr
2151 20 0 1128m 815m 7060 S 63 20.6 19252:08 mgmtsrvr
Going to restart the mgmt server (debug software restart management-server).
10-20-2014 08:48 AM
Hi Lewis,
That is the right idea. Right now the management server is using over 1 GB of virtual memory.
2151 20 0 1128m 815m 7060 S 63 20.6 19252:08 mgmtsrvr >>>>>>>>>>>>>> 1128m (VIRT) ≈ 1.1 GB
Please restart it with the command "debug software restart management-server".
Regards,
Hardik Shah
10-20-2014 08:57 AM
Since running the restart management-server command, the 'not-resolved' issue no longer exists. Investigating with support now whether it is a memory leak. Thanks for everyone's input.
10-20-2014 10:13 AM
Hi Lewis,
Right now your firewall is on 5.0.10, which is a much older release. The latest release is 5.0.14, and 5.0.15 will be released soon.
Even if they find a potential bug in 5.0.10, the recommendation would be to upgrade to the latest release.
Right now the recommended release is 5.0.13.
Let me know if you have any other thoughts.
Regards,
Hardik Shah.
10-20-2014 10:33 AM
We are scheduled to go to 6.0.5 in the next couple of weeks, so hopefully this issue will no longer exist. Thanks for your input.
10-20-2014 10:37 AM
Hi Lewis,
A similar issue was reported on 5.0.9, but that report was closed due to inactivity, so there is no confirmed bug in 5.0.10 that matches these exact symptoms.
I would suggest upgrading to 5.0.13. Please see the release notes for that version.
Regards,
Hardik Shah
10-20-2014 10:40 AM
Hi Lewis,
6.0.5 is also one of the recommended releases; I suggested 5.0.13 because it is on the same 5.0.x release train you are running now.
Let me know if this helps, and feel free to ask any further questions.
Regards,
Hardik Shah
10-29-2014 11:47 AM
We resolved the same issue using the command:
debug software restart device-server