<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Best way of restricting web access? in General Topics</title>
    <link>https://live.paloaltonetworks.com/t5/general-topics/best-way-of-restricting-web-access/m-p/8948#M6541</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Filtering on destination IP alone won't work, because Google uses a huge number of IP addresses.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Set up a custom URL category which contains (for example) www.google.com (you might need to add a few other domains/subdomains/URLs as well) and enable SSL decryption so you can inspect encrypted HTTPS flows.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;1) As the first rule, deny the Google services you don't wish to allow, such as Google Translate, which can be used as a proxy.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;2) As the second rule, allow traffic by using the custom URL category above along with the App-ID for Google search (if one exists).&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;You could also create a custom App-ID for this. Don't forget to set the service to application-default (or manually to TCP/80 and TCP/443).&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Mon, 20 Aug 2012 23:38:35 GMT</pubDate>
    <dc:creator>mikand</dc:creator>
    <dc:date>2012-08-20T23:38:35Z</dc:date>
    <item>
      <title>Best way of restricting web access?</title>
      <link>https://live.paloaltonetworks.com/t5/general-topics/best-way-of-restricting-web-access/m-p/8947#M6540</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi there,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I have an "interesting" problem.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Scope&lt;/P&gt;&lt;P&gt;* Clients are not to be allowed access to the internet; restrict and control with the firewall.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Scope creep&lt;/P&gt;&lt;P&gt;* Clients need access to Google to do a search and click on any links in that search. They will search for people/locations.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;How can I best isolate and protect these clients' web access? I've been trying to figure out the best possible way to allow them Google search access and 'click' access to the results that come up.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Any input would be greatly appreciated, and anything goes to mitigate the scope creep and stay somewhat 'in line' with the original scope!&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Cheers&lt;/P&gt;&lt;P&gt;A.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 20 Aug 2012 15:01:08 GMT</pubDate>
      <guid>https://live.paloaltonetworks.com/t5/general-topics/best-way-of-restricting-web-access/m-p/8947#M6540</guid>
      <dc:creator>Ante</dc:creator>
      <dc:date>2012-08-20T15:01:08Z</dc:date>
    </item>
    <item>
      <title>Re: Best way of restricting web access?</title>
      <link>https://live.paloaltonetworks.com/t5/general-topics/best-way-of-restricting-web-access/m-p/8948#M6541</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Filtering on destination IP alone won't work, because Google uses a huge number of IP addresses.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Set up a custom URL category which contains (for example) www.google.com (you might need to add a few other domains/subdomains/URLs as well) and enable SSL decryption so you can inspect encrypted HTTPS flows.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;1) As the first rule, deny the Google services you don't wish to allow, such as Google Translate, which can be used as a proxy.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;2) As the second rule, allow traffic by using the custom URL category above along with the App-ID for Google search (if one exists).&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;You could also create a custom App-ID for this. Don't forget to set the service to application-default (or manually to TCP/80 and TCP/443).&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 20 Aug 2012 23:38:35 GMT</pubDate>
      <guid>https://live.paloaltonetworks.com/t5/general-topics/best-way-of-restricting-web-access/m-p/8948#M6541</guid>
      <dc:creator>mikand</dc:creator>
      <dc:date>2012-08-20T23:38:35Z</dc:date>
    </item>
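The deny-before-allow ordering described in the reply above depends on first-match rule evaluation: the firewall stops at the first rule that matches, so the narrow deny must sit above the broad allow. A minimal Python sketch of that idea, using hypothetical rule names and URL categories (not actual PAN-OS objects or syntax):

```python
# First-match rule evaluation: the first rule whose URL category matches wins.
# Rule names and categories below are illustrative assumptions, not PAN-OS config.
RULES = [
    {"name": "deny-google-translate", "category": "google-translate", "action": "deny"},
    {"name": "allow-google-search", "category": "google-search", "action": "allow"},
    {"name": "default-deny", "category": "any", "action": "deny"},
]

def evaluate(category):
    """Return the action of the first rule matching the traffic's URL category."""
    for rule in RULES:
        if rule["category"] in (category, "any"):
            return rule["action"]
    return "deny"
```

With this ordering, traffic categorized as google-translate is denied even though a broader allow rule follows; swapping the first two rules would let it through.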
    <item>
      <title>Re: Best way of restricting web access?</title>
      <link>https://live.paloaltonetworks.com/t5/general-topics/best-way-of-restricting-web-access/m-p/8949#M6542</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;&lt;BR /&gt;Hmm, how do I deal with the fact that the user has to click on the actual search links that come up? That means internet access. I'm wondering if I could allow the actual search itself and then allow the 'click' through to any of the links; they'll have some 'google' link in them when first clicked (creating a new App-ID), and then have an ultra-restrictive web-browsing rule that doesn't allow any files up or down. Has anyone tried to design something like this in the past?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 21 Aug 2012 07:26:43 GMT</pubDate>
      <guid>https://live.paloaltonetworks.com/t5/general-topics/best-way-of-restricting-web-access/m-p/8949#M6542</guid>
      <dc:creator>Ante</dc:creator>
      <dc:date>2012-08-21T07:26:43Z</dc:date>
    </item>
    <item>
      <title>Re: Best way of restricting web access?</title>
      <link>https://live.paloaltonetworks.com/t5/general-topics/best-way-of-restricting-web-access/m-p/8950#M6543</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Could using Google's web cache be sufficient for the users?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Because if you want them to be able to traverse to the search results, you must enable general surfing.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;You could perhaps set up a custom App-ID that will trigger on http-method: GET, HEAD, or POST along with http-referer: ^&lt;/SPAN&gt;&lt;A class="jive-link-external-small" href="http://www.google.com/"&gt;http://www.google.com/&lt;/A&gt;&lt;SPAN&gt; (or whatever domain Google uses in your case). This App-ID could of course easily be bypassed by the client sending a custom Referer header to fool the filter, but still...&lt;/SPAN&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 21 Aug 2012 07:45:39 GMT</pubDate>
      <guid>https://live.paloaltonetworks.com/t5/general-topics/best-way-of-restricting-web-access/m-p/8950#M6543</guid>
      <dc:creator>mikand</dc:creator>
      <dc:date>2012-08-21T07:45:39Z</dc:date>
    </item>
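The referer-based filter suggested above can be sketched in a few lines of Python: allow a request only if its method is GET, HEAD, or POST and its Referer header starts with http://www.google.com/. The header name and domain are taken from the post; as the post itself notes, a client can trivially spoof the Referer header, so this is containment, not security.

```python
import re

# Sketch of the proposed custom App-ID logic: match on http-method plus a
# Referer prefix. The domain is an assumption carried over from the post.
ALLOWED_METHODS = {"GET", "HEAD", "POST"}
REFERER_PATTERN = re.compile(r"^http://www\.google\.com/")

def allow_request(method, headers):
    """Return True if the request looks like a click-through from Google search."""
    if method not in ALLOWED_METHODS:
        return False
    referer = headers.get("Referer", "")
    return REFERER_PATTERN.match(referer) is not None
```

A direct request with no Referer (typing a URL into the address bar) is blocked, while a click from a Google results page passes.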
    <item>
      <title>Re: Best way of restricting web access?</title>
      <link>https://live.paloaltonetworks.com/t5/general-topics/best-way-of-restricting-web-access/m-p/8951#M6544</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I'll look into how to best use the App-ID. They'll click predefined links, so I could use some filtering on that. They're happy to use the cached web after their Google search, which means they can still 'surf' the internet, but it'll be pretty cumbersome for them and will more than likely result in them not doing it on these machines. I'll update this thread with the final result.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Cheers and thanks,&lt;/P&gt;&lt;P&gt;A&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 31 Aug 2012 07:25:56 GMT</pubDate>
      <guid>https://live.paloaltonetworks.com/t5/general-topics/best-way-of-restricting-web-access/m-p/8951#M6544</guid>
      <dc:creator>Ante</dc:creator>
      <dc:date>2012-08-31T07:25:56Z</dc:date>
    </item>
  </channel>
</rss>

