Command line "show session all" limited to 1024 entries


L1 Bithead

First, some information on the use case:

  • 500 users
  • each user is generating approximately 10 simultaneous sessions => 5000 simultaneous sessions

I would like to get the amount of current sessions per user, from the command line.

 

I currently use the API to basically do:

# One API call per user: pull that user's current sessions
for user in $user_list; do
  panxapi.py -jo "<show><session><all><filter><source-user>$user</source-user></filter></all></session></show>"
done

However, the above loop takes a long time (approximately 50 seconds for 500 users) and puts a lot of stress on the Palo Alto firewall.

 

I also tried the following:

# Page through the full session table, 1024 entries per request
for i in 1 1025 2049 3073 4097; do
   panxapi.py -jo "<show><session><all><start-at>$i</start-at></all></session></show>"
done

But I am not sure that this gives a consistent list of sessions.

A better understanding of how the "start-at" filter works would help me evaluate the consistency of the above loop (a rough check is sketched after these questions):

  • can I miss some sessions?
  • can I get the same session returned twice between 2 panxapi runs?
  • or maybe there's a better way to grab those sessions that I don't see?
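
For reference, the rough consistency check I have in mind is to run the paging loop twice back-to-back and compare the session indexes between the two runs. This is an untested sketch: it assumes each <entry> in the XML response carries an <idx> element, and it uses panxapi.py's -x option to get XML back instead of the -j JSON used above.

# Pull two back-to-back snapshots of the whole session table
for run in 1 2; do
  for i in 1 1025 2049 3073 4097; do
    panxapi.py -xo "<show><session><all><start-at>$i</start-at></all></session></show>"
  done | grep -o '<idx>[0-9]*</idx>' | sort -u > "run$run.idx"
done
# (drop the -u and pipe through "uniq -d" instead to spot the same idx returned twice within one run)

# Sessions seen in both runs vs. in only one (churn, overlaps, or holes)
comm -12 run1.idx run2.idx | wc -l
comm -3  run1.idx run2.idx | wc -l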

 

 


6 REPLIES

L7 Applicator

Since you seem to be looking for just the count rather than the details, add "count yes" to your request as you iterate through your user list.

 

show session all filter source-user domain\username count yes

 

or in XML format (you'll want to test this, I didn't):

 

<show><session><all><filter><source-user>$user</source-user><count>yes</count></filter></all></session></show>

 

You won't get any details about the types of sessions, packets, apps, etc., but you will get the raw count.
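
Dropped into your original loop, it would look something like this (again, untested):

# Per-user session counts only, no per-session detail
for user in $user_list; do
  printf '%s: ' "$user"
  panxapi.py -jo "<show><session><all><filter><source-user>$user</source-user><count>yes</count></filter></all></session></show>"
done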

Thanks Gwesson,

Sorry, I wasn't clear enough when I said "amount of current sessions per user".

I don't want the session count, I want the total amount of bytes.

In that case, I don't know of anything that can get that granular. Even if you could get all the sessions for each user in a fairly short amount of time (a couple of minutes, for example), it wouldn't be accurate by the time you were done, especially if you have several users streaming HD video or downloading large files.

 

Reporting would probably get you a better sense of what's going on without the difficulties of getting a real-time snapshot.

 

If you still want that data, you may want to filter it further than just all sessions, perhaps only grabbing sessions larger than some byte threshold so you don't see all the tiny sessions you don't really care about. Think about how a typical web session generates many DNS queries, each of which is an individual session of typically only two packets and almost no bytes. Use the "min-kb" filter to restrict the output to the more relevant sessions.
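
For example, on the CLI (untested here, so verify the syntax on your PAN-OS version):

show session all filter source-user domain\username min-kb 50

or as an op command, assuming a <min-kb> element sits alongside the other filter elements:

<show><session><all><filter><source-user>$user</source-user><min-kb>50</min-kb></filter></all></session></show>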

Thanks for your tips @gwesson.

 

Indeed, the min-kb filter is a good idea! It is the best lead we have so far: other filters would not narrow things down enough on their own (although we might combine min-kb with other filters for further narrowing).

 

However, even with a reasonably high min-kb value (say 50 KB), our projections show that we will still be above the 1024-entry limitation, so we may still need to run a few "start-at" commands and aggregate the results, hence the consistency issue.

 

 

In the end, consistency might not be a huge issue: running a few sequential "start-at" commands may not produce many "intersections" or "holes", and we can work with some margin of error.
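
The current plan, then, looks roughly like this. Untested sketch: it assumes start-at can be combined with the filter in a single request, that a <min-kb> element works by analogy with the other filter elements, and that each entry exposes an <idx> to deduplicate on (same -x trick as in my first post).

# Page through the min-kb-filtered session table and aggregate, deduplicating on idx
: > sessions.xml
for i in 1 1025 2049 3073 4097; do
  panxapi.py -xo "<show><session><all><filter><min-kb>50</min-kb></filter><start-at>$i</start-at></all></session></show>" >> sessions.xml
done
grep -o '<idx>[0-9]*</idx>' sessions.xml | sort -u | wc -l   # unique sessions captured across pages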

Another filter that would narrow things down considerably would be some way to get only user-tagged traffic. Some things I tried:

 

1. filter on subnet (in our case, user-tagged traffic appears on different subnets than non-user-tagged traffic):

show session all filter source 10.10.10.0/24

2. filter on user-tagged traffic:

show session all filter source-user *
show session all filter source-user any

 

Unfortunately, the above commands don't work.

I don't think you'll be able to filter on a source-user like that. The simple reason is that it doesn't get logged as 'known-user'; it gets logged as the actual user that generated the traffic. When you use 'known-user' or 'unknown-user' in a security policy, the check is simply "does this IP have a user-id listed?". If it returns any user, the 'known-user' rule will match; if it returns unknown, the 'unknown-user' rule will trigger. When looking at logs you really won't be able to filter like that, and even if you could, the CPU hit of doing so would cause the same issues that you are running into now.