Anyone else getting:
Error: Error reading tom data
failed to handle CONFIG_UPDATE_START
The issue has been logged with Palo for a month now and is still not solved.
I was running 8.0.3 when it happened; I rolled back to 8.0.1 and still have the issue.
The number of custom URLs is limited by the config memory available.
This can be very different depending on your platform.
You can verify the config memory usage with this command:
debug dataplane show cfg-memstat statistics
I'm not saying you're hitting this exact issue, but I've seen this when memory allocation errors occur during a commit (visible in devsrv.log or ms.log).
@kiwi I get this output from that command:
Current config memory usage
Misc : 2688 KB (Actual 2519 KB)
Custom URL : 12800 KB (Actual 12714 KB)
Global : 41856 KB (Actual 39320 KB)
vsys1 : 512 KB (Actual 289 KB)
Last config memory usage
Misc : 256 KB (Actual 138 KB)
Custom URL : 1024 KB (Actual 0 KB)
Global : 42112 KB (Actual 39468 KB)
I assume that means 0 memory available then?
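Reading that output, the "Custom URL" pool is nearly exhausted: 12714 KB actually used out of a 12800 KB allocation. Here's a quick sketch that parses lines in that format and reports usage per pool (the regex is an assumption based on the output format posted, not an official parser for cfg-memstat):

```python
import re

# Sample cfg-memstat output as posted above: "<pool> : <alloc> KB (Actual <used> KB)"
output = """\
Misc       : 2688 KB (Actual 2519 KB)
Custom URL : 12800 KB (Actual 12714 KB)
Global     : 41856 KB (Actual 39320 KB)
vsys1      : 512 KB (Actual 289 KB)
"""

# Report how much of each pool's allocation is actually in use
for line in output.splitlines():
    m = re.match(r"\s*(.+?)\s*:\s*(\d+) KB \(Actual (\d+) KB\)", line)
    if m:
        pool, alloc, used = m.group(1), int(m.group(2)), int(m.group(3))
        pct = 100.0 * used / alloc
        print(f"{pool}: {used}/{alloc} KB ({pct:.1f}% of allocation)")
```

On your numbers, Custom URL comes out around 99% used, which would explain commits failing when anything tries to grow that pool.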
I know this is almost a year later, but what was the fix? I assume you were running at 100% Config Allocator Usage. We have the same thing going on right now and suspect that we need to greatly reduce the number of EDLs, custom URLs, custom objects, etc.
@rgarner Yes, it was. We had to wait for a fix from Palo, which took months. They fixed it in release 8.0.7.
To get up and running, we wiped our backup 3020 out of desperation. We didn't set any EDLs for DNS sinkholes etc. and only configured the custom URL whitelist so we wouldn't fill the cache, then made that unit the primary. We disabled all the ports apart from the management interface to stop split-brain from happening, and waited for a fix from Palo.
I logged it with Palo, who took copies of our config and managed to reproduce it in their lab, but it took months before a fix was made.
Let me know if you need any more help or advice.
Glad you got it resolved. Unfortunately, we are already on 8.0.7. PAN is handcuffed because we can't upload any configs, only segments of logs (after they have been sanitized). So I suppose we will have to greatly reduce our EDLs and URLs and see what that does.
We're running into this issue on 8.1.13. We had been using a Covid-19 EDL that mined all the new domains being bought up and sitting idle until the time is right to start using them for malware delivery. We have an 820, so the list capacity is currently 50,000 URLs. Well, the domain list is already at 50k+, so our dataplane memory is at 100% and we can't commit. I'm working with support, and even after truncating the list by half I still can't get it to commit.
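For anyone else feeding an oversized list into an EDL, a simple pre-processing step is to trim the source file to your platform's capacity before the firewall fetches it. A minimal sketch, assuming a plain one-domain-per-line file (the capacity value and file names are examples, not from PAN documentation):

```python
# Trim a domain EDL file to a platform's URL capacity (e.g. the 50,000
# entries mentioned above for a PA-820 on 8.1) before the firewall fetches it.

CAPACITY = 50_000  # assumed platform limit; check your model's datasheet

def trim_edl(lines, capacity=CAPACITY):
    """Keep at most `capacity` non-empty, non-comment entries."""
    kept = []
    for line in lines:
        entry = line.strip()
        if not entry or entry.startswith("#"):
            continue  # skip blanks and comment lines
        kept.append(entry)
        if len(kept) >= capacity:
            break
    return kept

# Example usage (file names are hypothetical):
# with open("covid-domains.txt") as f:
#     entries = trim_edl(f)
# with open("covid-domains-trimmed.txt", "w") as f:
#     f.write("\n".join(entries) + "\n")
```

Serving the trimmed copy from your own web server means the firewall never sees more entries than it can hold, so a runaway upstream list can't push the config pool to 100% on its own.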
I have a move to 9.0.7 on my radar, and that will give us 100k URL support, so maybe I'll pull the trigger on that. It seems that the 9.x code should be stable by now. We're active/passive, so I am going to see how our passive unit handles the 9.0.7 version. There is a known issue with User-ID not working properly until *both* firewalls have been updated, which worries me a little: we can't quickly test things until both FWs have been upgraded, so that makes for a longer possible maintenance window.
When I set this EDL up a month ago, it was at about 13k domains; it's scaling up almost as fast as the medical cases!
Maybe I'll put in a request with our SE to have PAN consider a special URL category for these domains.