
PA-5020 reboots to maintenance mode. No hardware problems... svc: failed to register lockdv1 RPC service (errno 97)

L1 Bithead

Hello, All!

After unpacking and installing the PA-5020 in HA Active/Active, we could not set up basic NAT - the dataplane packet dump showed an error, something like "Can't create session", after the NAT and security policy were applied. The NAT and security rules themselves were fine.

After a reboot, one node went into continuous rebooting: after exiting maintenance mode it starts, prompts for login, and then begins to shut down...

In the messages log there is nothing about hardware (disk, NIC) errors, only one mysterious line: "svc: failed to register lockdv1 RPC service (errno 97)".

It sounds like an unsuccessful NFS mount attempted before an interface/address is active. The node was reset to factory defaults, but the reboots still continue...
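For what it's worth, errno 97 on Linux is EAFNOSUPPORT ("Address family not supported by protocol"). A common reason lockd fails to register its RPC service with that errno is that it tries an address family (typically IPv6) that isn't available at that point in boot, which would fit the "interface/address not active yet" theory. Below is a minimal generic-Linux sketch (not PAN-OS code; the IPv6 probe is only an assumed trigger) that shows the errno mapping:

/* Minimal sketch: print the errno behind "errno 97" and probe whether
 * IPv6 sockets are available, the kind of check that fails for lockd
 * when the address family is missing at RPC registration time. */
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    /* On Linux, EAFNOSUPPORT has the value 97. */
    printf("EAFNOSUPPORT = %d\n", EAFNOSUPPORT);

    /* If the kernel has no IPv6 support here, socket() fails with
     * exactly this errno. */
    int fd = socket(AF_INET6, SOCK_DGRAM, 0);
    if (fd < 0)
        printf("socket(AF_INET6) failed: errno %d (%s)\n",
               errno, strerror(errno));
    else
        printf("IPv6 sockets are available on this host\n");
    return 0;
}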

Is there a way to debug this issue? Which files need to be inspected for errors (there are a LOT of them in the maintenance menu)?


5 REPLIES

L1 Bithead

Factory default and reimage didn't help...

It seems to be a segfault in the 'cryptod' module, with the reboot triggered by the watchdog... Perhaps a failure in some crypto offload ASIC, or a certificate issue. Signal 11 is an attempt to access protected or unavailable memory. Here is the coredump info:
--------------------------------------------------------------------------------
Platform Info
--------------------------------------------------------------------------------
cfg.panos.version: 4.1.6
cfg.platform.version: 3.0
cfg.panos.buildinfo: { 'branch': release/edinburgh, 'changenum': 159517, }
cfg.platform.dp-slots: 1
cfg.platform.family: 5000
cfg.platform.mac: 00:1b:17:c9:f7:00
cfg.platform.mac-count: 254
cfg.platform.model: PA-5020
cfg.platform.mp-slot: 0
cfg.platform.portcount: 20
cfg.platform.serial: serial_number_goes_here
cfg.platform.version: 3.0
cfg.platform.vpn-disable: False
cfg.net.eth0.cfg: { 'fips-gated': True, 'ipaddr': 192.168.1.1, 'ipv6addr': , 'ne
tmask': 255.255.255.0, 'onboot': True, 'routes': { 'default': , }, 'up': True, '
v6routes': { 'default': , }, }

--------------------------------------------------------------------------------
Backtrace
--------------------------------------------------------------------------------
[New Thread 1875]
[New Thread 1880]
Core was generated by `/usr/local/bin/cryptod'.
Program terminated with signal 11, Segmentation fault.
#0  0x00000034 in ?? ()

Thread 3 (Thread 1880):
#0  0xb7756063 in __call_pselect6 () from /lib/libc.so.6
#1  0xb774edd6 in pselect () from /lib/libc.so.6
#2  0xb75222ff in __evGetNext (opaqueCtx=..., opaqueEv=0xb55d1378, options=2)
    at eventlib.c:328
#3  0xb7522b44 in __evMainLoop (opaqueCtx=...) at eventlib.c:654
#4  0xb75d4ab0 in sysd_pthread_evloop (arg=0x8075b00) at sysd_pthread.c:338
#5  0xb65fb6c1 in start_thread () from /lib/libpthread.so.0
#6  0xb55d1470 in ?? ()
#7  0xb55d1470 in ?? ()
#8  0xb55d1470 in ?? ()
#9  0xb55d1470 in ?? ()
#10 0x00000000 in ?? ()

Thread 2 (Thread 1875):
#0  0xb65fc8a2 in pthread_join () from /lib/libpthread.so.0
#1  0xb75d50b0 in sysd_pthread_destruct (s=0x8075b00) at sysd_pthread.c:360
#2  0xb75c4f3f in sysd_destruct (s=0x8075b00) at sysd.c:225
#3  0x08056fb9 in pan_cryptod_sysd_fini (cryptod=0x8061df0)
    at pan_cryptod_sysd.c:186
#4  0x0804ac84 in pan_cryptod_destruct (cryptod=0x8061df0) at pan_cryptod.c:96
#5  0x08054f82 in pan_cryptod_exit (argc=) at pan_cryptod_main.c:79
#6  main (argc=) at pan_cryptod_main.c:373

Thread 1 (Thread 1879):
#0  0x00000034 in ?? ()
#1  0xb75cb998 in sysd_next_tid (s=0xb4c00dc0) at sysd_msgs.c:27
#2  0xb75d6b31 in sysd_sync_modify (s=0xb4c00dc0, obj=0xb4c00fa0,
    cb=0xb75c7c20 <modify_obj_cb>, arg=0x0, flags=1, tmo=29806)
    at sysd_sync.c:114
#3  0xb75c876a in sysd_modify_obj (s=0xb4c00dc0, fmt=0xb6d1e660 "b")
    at sysd_importer.c:250
#4  0xb693621d in pan_cfgagent_write_sysd_boolean_sync (sysd=0xb4c00dc0,
    spec=0xb5dd0f8c "sw.mgmt.runtime.clients.(null).register", value=1)
    at pan_cfgagent.c:123
#5  0xb69362fe in pan_cfgagent_register (agent=0xb4c00da0)
    at pan_cfgagent.c:890
#6  0xb69379f8 in pan_cfgagent_enable (agent=0xb4c00da0) at pan_cfgagent.c:909
#7  0x0804b6b0 in pan_cryptod_cfgagent_init (cryptod=0x8061df0)
    at pan_cryptod_cfg.c:806
#8  pan_cryptod_mgmt_start (cryptod=0x8061df0) at pan_cryptod_cfg.c:821
#9  0x0804aaff in pan_cryptod_sysd_connected (cryptod=0xb5dd0d00)
    at pan_cryptod.c:138
#10 0x08057739 in pan_cryptod_sysd_event_callback (event=0xb5dd1274)
    at pan_cryptod_sysd.c:221
#11 0xb75c7628 in sysd_ev_dispatch (el=0x80766e8, e=0xb5dd1274)
    at sysd_event.c:51
#12 0xb75c4569 in sysd_async_process_response_message (s=0x8075b00,
    mb=0x807c078) at sysd_async.c:453
#13 sysd_async_recv (s=0x8075b00, mb=0x807c078) at sysd_async.c:620
#14 0xb75d4fa4 in pthread_async_recv (s=0x8075b00, arg=0x807c078)
    at sysd_pthread.c:291
#15 0xb75d4d4b in sysd_pthread_worker (warg=0x8075b00) at sysd_pthread.c:279
#16 0xb65fb6c1 in start_thread () from /lib/libpthread.so.0
#17 0xb5dd1470 in ?? ()
#18 0xb5dd1470 in ?? ()
#19 0xb5dd1470 in ?? ()
#20 0xb5dd1470 in ?? ()
#21 0x00000000 in ?? ()

---Locals---
No symbol table info available.

---Registers---
eax            0xb5dd0d00       -1243804416
ecx            0xb4c00fa0       -1262481504
edx            0xb4c00dc0       -1262481984
ebx            0xb765fa00       -1218053632
esp            0xb5dd0c9c       0xb5dd0c9c
ebp            0xb5dd0cb8       0xb5dd0cb8
esi            0xb4c00dc0       -1262481984
edi            0xb4c00e20       -1262481888
eip            0x34     0x34
eflags         0x10286  [ PF SF IF RF ]
cs             0x73     115
ss             0x7b     123
ds             0x7b     123
es             0x7b     123
fs             0x0      0
gs             0x33     51
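To illustrate the crash pattern (this is not PAN-OS code, just a minimal sketch): frame #0 sits at 0x00000034 and eip is 0x34, which is the signature of execution jumping through a corrupted or near-NULL function pointer. A tiny C program reproducing the same kind of core looks like this:

/* Minimal sketch: calling through a function pointer that holds a small
 * garbage value (here 0x34) makes the CPU jump to unmapped memory, so the
 * process dies with SIGSEGV (signal 11) and the core shows the instruction
 * pointer equal to that bogus address, just like frame #0 above. */
#include <stdint.h>
#include <stdio.h>

typedef void (*callback_t)(void);   /* stand-in for an internal callback type */

int main(void)
{
    /* Assume a corrupted object supplied 0x34 as a callback address. */
    callback_t cb = (callback_t)(uintptr_t)0x34;

    printf("calling bogus callback at %p\n", (void *)cb);
    cb();   /* jump to unmapped address 0x34 -> segmentation fault */
    return 0;
}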

MZRF,

When a device reboots to Maint Mode, it is a good indication of a hardware issue.  If you can access Maint Mode, I would suggest running fsck on all partitions to try to recover.  I've seen other cases with this error, and the result was that the SSD was RMAed.  If you are unable to recover with fsck, you will need to contact Support to open a case.

Regards,

Gerard

Thank you!

In maintenance mode there are no failure reasons listed. All partitions are 'clean'... fsck shows no problems.

We have opened a case in the PA partner support system. Today we'll have a group support session over the device's serial port. :)

Will try to keep this post updated.

You were right.

RMA of both SSDs was initiated after the support engineer's investigation.

L1 Bithead

The system boots normally from either disk, but only when a disk is inserted standalone, without its partner.

Perhaps a RAID/system bug...

PS: The problem was resolved after the SSDs were RMAed.
