Problems with CentOS 7 and MM 0.9.52

L4 Transporter

Hi guys,

 

I used to run standalone MM 0.9.50 on CentOS 7 perfectly. Last week I updated MM to 0.9.52 with the help of @lmori and the process completed successfully. See ( https://live.paloaltonetworks.com/t5/MineMeld-Discussions/Updating-MineMeld-from-0-9-50-to-the-lates... ).

 

However, since the upgrade my MM doesn't work the same way. The dashboard shows that my miners work fine, with more than 90K indicators, but almost none of them are available: fewer than 1K reach the outputs (see figure below).

 

Captura_Minemeld_0_9_52_Dashboard.PNG

 

Looking at the process in detail, we see that the status of many nodes is "stopped". The number of indicators forwarded by the miners is high.

 

Captura_Minemeld_0_9_52_Nodes.PNG

 

 

The number of indicators forwarded by the aggregators is almost the same.

 

Captura_Minemeld_0_9_52_Nodes2.PNG

 

But the number of indicators available from the outputs is extremely low.

 

Captura_Minemeld_0_9_52_Nodes3.PNG

 

Has anybody experienced something similar? How did you deal with the problem?

 

Best Regards.

L7 Applicator

Re: Problems with CentOS 7 and MM 0.9.52

Hi @danilo.souza,

could you try this:

  • in /opt/minemeld/engine/core/minemeld/run open the file launcher.py
  • at line 284 change
mbusmaster.wait_for_chassis(timeout=10)

into this

mbusmaster.wait_for_chassis(timeout=60)

and restart
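
The manual edit above can also be scripted. This is only a sketch, assuming the default install path from the instructions; the restart step is left as a comment:

```shell
# Sketch only: raise the chassis wait timeout from 10s to 60s in launcher.py.
# The path comes from the instructions above; adjust it if your install differs.
LAUNCHER=/opt/minemeld/engine/core/minemeld/run/launcher.py
if [ -f "$LAUNCHER" ]; then
    sed -i 's/wait_for_chassis(timeout=10)/wait_for_chassis(timeout=60)/' "$LAUNCHER"
fi
# ...then restart the engine (e.g. from the web UI) so the change takes effect.
```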

L4 Transporter

Re: Problems with CentOS 7 and MM 0.9.52

Hi @lmori

 

First I restarted just the engine through the web interface, but nothing changed. Then I restarted my server and got a few more indicators.

 

 

Captura_Minemeld_0_9_52_Dashboard2.PNG

 

 

I still have many nodes with the status "stopped", and the number of indicators in the outputs still doesn't match.

 

What does this parameter do?

 

mbusmaster.wait_for_chassis

 

Can it be raised even higher? What would the consequence be?

 

L4 Transporter

Re: Problems with CentOS 7 and MM 0.9.52

Hi @lmori, I still have nodes with the status "stopped", and the number of indicators in the outputs still doesn't match. Did you have the opportunity to see my last reply? Best regards.
L4 Transporter

Re: Problems with CentOS 7 and MM 0.9.52

Hi @lmori

 

I'm sorry to insist. I still have nodes with the status "stopped". I tried to find other evidence of the problem, but found nothing. Unfortunately, this has been happening since I upgraded to 0.9.52.

 

Did you have the opportunity to see my last reply (https://live.paloaltonetworks.com/t5/MineMeld-Discussions/Problems-with-CentOs-7-and-MM-0-9-52/m-p/2...)?

 

Best regards.

L2 Linker

Re: Problems with CentOS 7 and MM 0.9.52

Our server team just upgraded my MineMeld to 0.9.52 on CentOS 7, and it lists about 29k indicators but only 43 total in the output. I have an Ubuntu 14.04 image running 0.9.52, and it has 231k indicators and 77k in the output.

 

We have never run it on CentOS before, but that is our PROD operating system and they want us to use it. Right now I just have my test box feeding the PAN the EDL.

 

I tried bumping the timeout up to 60.

L4 Transporter

Re: Problems with CentOS 7 and MM 0.9.52

Hi @xhoms and @lmori

 

Apparently there is a problem with version 0.9.52 on CentOS. Are there any other known issues? Is it being addressed? It is impacting our environment. Do you advise rolling back to version 0.9.50? If so, how?

 

Best regards.

L7 Applicator

Re: Problems with CentOS 7 and MM 0.9.52

Hi @danilo.souza,

yes, I am looking into this. The problem seems to be related to RabbitMQ.

If you want to install the previous version, you can use the Ansible update procedure, but first change line 7 of roles/minemeld/tasks/core.yml to:

version: "0.9.50.post4"
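
A sketch of the same pin as a one-liner, assuming the playbook checkout lives at minemeld-ansible (adjust the path to your copy):

```shell
# Sketch: pin minemeld-core back to 0.9.50.post4 in the Ansible role, then
# re-run the update playbook. The repo path below is an assumption.
CORE_YML=minemeld-ansible/roles/minemeld/tasks/core.yml
if [ -f "$CORE_YML" ]; then
    # Rewrite the "version:" line, preserving its indentation
    sed -i 's/^\([[:space:]]*version:\).*/\1 "0.9.50.post4"/' "$CORE_YML"
fi
```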
L2 Linker

Re: Problems with CentOS 7 and MM 0.9.52

That doesn't work either.

 
TASK [minemeld : minemeld-core repo] ********************************************************************************************************************************************************************************************************
fatal: [127.0.0.1]: FAILED! => {"before": "bdd6879cc2c72094702d301e3e1bae4a198eed5f", "changed": false, "msg": "Local modifications exist in repository (force=no)."}
        to retry, use: --limit @/minemeld-ansible/local.retry
 
 
I have tried setting force: yes but it still fails.
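
One guess, not a confirmed fix: the launcher.py timeout tweak suggested earlier in this thread is itself an uncommitted change in the minemeld-core checkout, which would trigger exactly this "Local modifications exist" failure. Discarding local edits before re-running the playbook might clear it (the repo path is an assumption):

```shell
# Guess only: discard uncommitted local edits (e.g. the launcher.py timeout
# change) so the playbook's git task can move the checkout. Path is assumed.
REPO=/opt/minemeld/engine/core
if [ -d "$REPO/.git" ]; then
    git -C "$REPO" checkout -- .
fi
```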
L4 Transporter

Re: Problems with CentOS 7 and MM 0.9.52

Hi @StephenBradley and @lmori

 

I can confirm it is extremely unstable. I tried rolling back to the previous version using @lmori's tips in the previous post. At first it failed with a fatal error. After I enabled the extensions the error disappeared, but I'm still getting the "stopped" status for some nodes. The curious point is that, after rolling back, the MM WebGUI still shows version 0.9.52.

 

Captura_MM_version.PNG (MM Version)

 

Did you experience something similar, @StephenBradley?

Best regards.

 

 
