10-14-2019 02:19 AM - edited 10-14-2019 02:21 AM
Hi Team,
I'm trying to onboard one of our firewalls to the Cortex Data Lake. It has the Logging Service license, but when I enter the onboarding PSK and click Status, I get:
Failed to fetch ingest/query FQDN for customer (curl failed)
I've tried:
- Giving the firewall an NTP server
- Removing and re-adding the license
- Manually fetching the certificate (it fails)
> request logging-service-forwarding certificate fetch
Successfully scheduled logging service certificate fetch job with a job id of 2132
> show jobs id 2132
Enqueued Dequeued ID Type Status Result Completed
------------------------------------------------------------------------------------------------------------------------------
2019/10/14 10:18:24 10:18:24 2132 LCaaS-certificate-fetch FIN FAIL 100 %
Warnings:
Details:
- Manually fetching the customer info (it fails)
> request logging-service-forwarding customerinfo fetch
Successfully fetch Logging Service region info
> request logging-service-forwarding customerinfo show
Server error : Unable to read the LCaaS customer information. Please re-fetch region info
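The two manual fetch commands above can also be driven over the PAN-OS XML API, which is handy when checking several firewalls. A minimal sketch, assuming the usual CLI-to-XML op-command mapping (the hostname and API key are placeholders; confirm the exact XML on your own box with `debug cli on` before relying on it):

```python
from urllib.parse import urlencode

def op_cmd_url(host: str, api_key: str, cmd_xml: str) -> str:
    """Build a PAN-OS XML API 'op' request URL for an operational command."""
    query = urlencode({"type": "op", "cmd": cmd_xml, "key": api_key})
    return f"https://{host}/api/?{query}"

# CLI equivalent:  request logging-service-forwarding certificate fetch
# (XML form assumed from the standard CLI-to-XML mapping)
CERT_FETCH = ("<request><logging-service-forwarding><certificate>"
              "<fetch/></certificate></logging-service-forwarding></request>")

url = op_cmd_url("fw.example.net", "REDACTED-API-KEY", CERT_FETCH)
print(url)
```

The fetch will still fail while the underlying customer-ID problem described later in this thread is present; this only automates triggering and observing it.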
11-01-2019 07:27 AM
Hi - I'm experiencing exactly the same error and opened a TAC case for this.
Did you get any feedback on your situation?
11-05-2019 04:41 PM
Same here. Guess I'll be opening one as well.
11-11-2019 06:26 AM
@LukeBullimore: Yes, the TAC case is already closed. If you are not on 9.0.4 or 8.1.13 (not available yet), you need to contact TAC to set the tenant-id for Cortex/Logging Service.
The problem is fixed in the versions mentioned above. There is no workaround you can apply yourself, so TAC gets root access to the device and edits local files on each firewall.
11-11-2019 07:04 AM
Yep, did that on Friday. Easy fix for TAC with root.
[root@XXXXXXXXX ~]# sdb -i cfg.saas.custid
cfg.saas.custid: string (10 bytes)
[root@XXXXXXXXX ~]# sdb cfg.saas.custid=None
cfg.saas.custid: None
[root@XXXXXXXXX ~]# sdb cfg.saas.custid=123456789
cfg.saas.custid: 123456789
[root@XXXXXXXXX ~]# sdb -i cfg.saas.custid
cfg.saas.custid: int (len 4)
Then re-license the FWs from Panorama.
Finally, on each FW:
request logging-service-forwarding certificate delete
request logging-service-forwarding certificate fetch
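The sdb session above shows the root cause: `cfg.saas.custid` was stored as a 10-byte string, and after TAC's fix it is a 4-byte int. A hypothetical helper illustrating that coercion (illustrative only; `normalize_custid` is not a real PAN-OS function, and the actual fix requires TAC root access):

```python
def normalize_custid(raw):
    """Coerce a raw customer-ID value to integer form, mimicking what the
    TAC fix does conceptually: the broken state was 'string (10 bytes)',
    the fixed state 'int (len 4)'."""
    if isinstance(raw, int):
        return raw
    s = str(raw).strip()
    if not s.isdigit():
        raise ValueError(f"not a numeric customer ID: {raw!r}")
    return int(s)

# The value set in the sdb session above, as it would arrive from the string-typed entry
print(normalize_custid("123456789"))
```

Once the ID is stored as an int, the subsequent `certificate delete`/`certificate fetch` pair succeeds.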
06-11-2021 05:50 AM
Hi All,
I have seen this issue with other customers and I was able to solve it with the following steps:
Now your firewall is onboarded with the Data Lake.
PS: in an active-passive cluster you only need to do this on one of the two firewalls; everything else will be synced over HA.
06-30-2021 07:25 AM
Having the same issues here. We have 22 PA-220s sending to Cortex Data Lake and IoT Security (Zingbox) on 9.1.6, and this has been an abysmal experience to say the least.
I had a TAC case open. After fighting with it for almost a month, they finally went into root and reset the custid from there, then re-licensed the firewall, then deleted and re-added the certificate. It seems that doing the last two steps without TAC resetting the custid doesn't always help, as the cert won't pull in. I just brought up another box and it's having the same issue as a lot of the other ones. Beware: this is not a seamless setup, it's very cumbersome, and quite frankly it makes me dislike Palo Alto a little for their poor implementation of this. I'm opening yet another case to get this one fixed.
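With a fleet of 22 boxes, the repeated delete/fetch step is worth planning out per device. A dry-run sketch (hostnames are placeholders; per this thread, the fetch only succeeds after TAC has reset `cfg.saas.custid` with root access):

```python
# Plan the per-device certificate cleanup for a fleet of firewalls.
FIREWALLS = ["pa220-01", "pa220-02", "pa220-03"]

CLEANUP_CMDS = [
    "request logging-service-forwarding certificate delete",
    "request logging-service-forwarding certificate fetch",
]

plan = [(fw, cmd) for fw in FIREWALLS for cmd in CLEANUP_CMDS]
for fw, cmd in plan:
    # Swap print() for an SSH or XML API call to actually execute these.
    print(f"{fw}: {cmd}")
```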