Problems with CentOS 7 and MM 0.9.52


L4 Transporter


Hi guys,


I used to run standalone MM 0.9.50 on CentOS 7 perfectly. Last week I updated MM to 0.9.52 with the help of @lmori, and the process completed successfully. See ( ).


However, since the upgrade my MM doesn't work the same way. My dashboard shows that my miners work fine, with more than 90K indicators, but almost none of them are available: fewer than 1K in the outputs (see figure below).




Looking at the process in detail, we see that the status of many nodes is "stopped". The number of indicators forwarded by the miners is high.





The number of indicators forwarded by the aggregators is almost the same.




But the number of indicators made available by the outputs is extremely low.




Has anybody experienced something similar? How did you deal with the problem?


Best Regards.


Accepted Solutions

Hi guys


this problem started when I was trying to update to version 0.9.52, but it took so long that I finally completed the process with version 0.9.60. To solve the problem with version 0.9.60, you should execute the following after the known basic steps:


ln -s /usr/lib64/python2.7/lib-dynload/ /usr/local/lib/python2.7/lib-dynload/


This solves the problem of getting the "Bad Gateway" message in the MM WebGUI.
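For reference, here is a sandboxed sketch of what the symlink does: `SANDBOX` stands in for the real filesystem root, so `SRC` and `DST` mirror the `/usr/lib64` and `/usr/local/lib` paths from the command above, and the sketch is safe to rerun.

```shell
# Sandboxed demonstration of the fix above; SANDBOX replaces the real
# filesystem root so nothing outside a temp directory is touched.
SANDBOX=$(mktemp -d)
SRC="$SANDBOX/usr/lib64/python2.7/lib-dynload"
DST="$SANDBOX/usr/local/lib/python2.7/lib-dynload"
mkdir -p "$SRC" "$(dirname "$DST")"
[ -e "$DST" ] || ln -s "$SRC" "$DST"   # create the link only if missing, so reruns are safe
readlink "$DST"                        # prints the SRC path
```

Guarding with `[ -e "$DST" ]` avoids the "File exists" error `ln -s` raises if the fix was already applied.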

Best regards



L7 Applicator

Hi @danilo.souza,

could you try this:

  • open the file in /opt/minemeld/engine/core/minemeld/run
  • at line 284, change

into this


and restart

Hi @lmori


first I restarted just the engine through the web interface, but nothing changed. Then I restarted my server and got a few more indicators.






I still have many nodes with the status "stopped", and the number of indicators in the output doesn't match yet.


What is this parameter?




Can it be changed even higher? What is the consequence?


Hi @lmori, I still have nodes with the status "stopped", and the number of indicators in the output doesn't match yet. Did you have the opportunity to see my last reply? Best regards.

Hi @lmori


I'm sorry to insist. I still have nodes with the status "stopped". I tried to find other evidence of the problem, but found nothing. Unfortunately, this has been happening since I upgraded to 0.9.52.


Did you have the opportunity to see my last reply (


Best regards.

L2 Linker

Our server team just upgraded my MineMeld to 0.9.52 on CentOS 7, and it only lists about 29k indicators but only 43 total in the output. I have an Ubuntu 14.04 image running 0.9.52, and it has 231k indicators and 77k in the output.


We have never run CentOS yet, but that is our PROD operating system and they want us to use it. Right now I just have my test box feeding the PAN the EDL.


I tried bumping the timeout up to 60.

L4 Transporter

Hi @xhoms and @lmori


Apparently there is a problem with version 0.9.52 and CentOS. Do you have any other known issues? Is it being addressed? It is impacting our environment. Do you advise rolling back to version 0.9.50? If yes, how do we do it?


Best regards.

Hi @danilo.souza,

yes, I am looking into this. The problem seems to be related to RabbitMQ.

If you want to install the previous version, you can use the Ansible update procedure, but in the file roles/minemeld/tasks/core.yml change line 7 to look like:

version: "0.9.50.post4"
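For context, a sketch of what that task might look like with the version pinned. Only the `version` line comes from the post; the surrounding keys and the repository URL are assumptions about the playbook's structure.

```yaml
# roles/minemeld/tasks/core.yml (sketch; surrounding structure and URL are assumptions)
- name: minemeld-core repo
  git:
    repo: https://github.com/PaloAltoNetworks/minemeld-core.git
    version: "0.9.50.post4"   # line 7: pin to the previous release
```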

That doesn't work either.

TASK [minemeld : minemeld-core repo] ********************************************************************************************************************************************************************************************************
fatal: []: FAILED! => {"before": "bdd6879cc2c72094702d301e3e1bae4a198eed5f", "changed": false, "msg": "Local modifications exist in repository (force=no)."}
        to retry, use: --limit @/minemeld-ansible/local.retry
I have tried setting things to force: yes but it still fails.
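For what it's worth, the "Local modifications exist in repository" failure comes from Ansible's git module refusing to overwrite edited files in the checkout; discarding the local modifications by hand lets the task proceed. A toy demonstration in a throwaway repository (on the real box the checkout path would be wherever the playbook cloned minemeld-core):

```shell
# Toy repo standing in for the minemeld-core checkout the playbook manages.
DEMO=$(mktemp -d)
cd "$DEMO"
git init -q .
echo original > tracked.txt
git add tracked.txt
git -c user.email=demo@example.com -c user.name=demo commit -qm init
echo local-edit > tracked.txt   # the "local modification" Ansible complains about
git checkout -- tracked.txt     # discard it so the git task can update cleanly
git status --porcelain          # prints nothing: the working tree is clean again
```

Using `git stash` instead of `git checkout --` keeps the edits recoverable if you are unsure whether they matter.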

Hi @StephenBradley and @lmori


I can confirm it is extremely unstable. I tried rolling back to the previous version using @lmori's tips in the previous post. At first it failed with a fatal error. After I enabled the extensions, the error disappeared, but I'm still getting the "stopped" status for some nodes. The curious point is that, after the rollback, I'm still seeing version 0.9.52 in the MM WebGUI.


MM Version


Did you experience something similar, @StephenBradley?

Best regards.



Hi @lmori 


I'm having some issues with the upgrade of MM to 0.9.52, as reported previously. After the procedure you told me to follow to roll back to 0.9.50, my machine has not worked properly. First of all, it still shows I'm using version 0.9.52, as I showed in my last post.


I tried to repeat the procedure and now I get an error


MM Ansible



Could you tell me the best practice to get my MM back to work? It is really affecting (downgrading) my experience with MM.


Best regards.

Hi @danilo.souza,

something is wrong in the downloaded code, it seems to be pretty old.


Please do this:

  • remove your current minemeld-ansible directory and the /opt/minemeld/engine, /opt/minemeld/www and /opt/minemeld/prototypes directories
  • copy /opt/minemeld/local contents somewhere for backup
  • update your CentOS 7 packages to the latest versions (yum update)
  • follow the instructions here:

The new version of the playbook will install a MineMeld version that is still in beta but is much more reliable on CentOS 7.
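The backup and removal steps above can be sketched as follows. `ROOT` defaults to a sandbox so the sketch is safe to dry-run; on the real box you would operate on `/` as root, and the backup destination is an assumption.

```shell
# Sketch of the cleanup steps; ROOT sandboxes the paths for a safe dry-run.
# On the real box you would also remove ~/minemeld-ansible and run "yum update".
ROOT="${ROOT:-$(mktemp -d)}"
mkdir -p "$ROOT/opt/minemeld/local" "$ROOT/opt/minemeld/engine" \
         "$ROOT/opt/minemeld/www" "$ROOT/opt/minemeld/prototypes"
cp -a "$ROOT/opt/minemeld/local" "$ROOT/minemeld-local-backup"   # back up local first
rm -rf "$ROOT/opt/minemeld/engine" "$ROOT/opt/minemeld/www" \
       "$ROOT/opt/minemeld/prototypes"                           # then remove the rest
ls "$ROOT/opt/minemeld"   # only "local" should remain
```

Taking the backup before any `rm -rf` matters because /opt/minemeld/local holds the node configuration the reinstall cannot recreate.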



L2 Linker

Tried but all I get is this.


TASK [minemeld : minemeld-node-prototypes repo] *********************************************************************************************************************************************************************************************
ok: []

TASK [minemeld : minemeld-node-prototypes current link] *************************************************************************************************************************************************************************************
ok: []

TASK [minemeld : minemeld-core repo] ********************************************************************************************************************************************************************************************************
fatal: []: FAILED! => {"before": "bdd6879cc2c72094702d301e3e1bae4a198eed5f", "changed": false, "msg": "Local modifications exist in repository (force=no)."}
to retry, use: --limit @/opt/minemeld-ansible/local.retry

PLAY RECAP ********************************************************************************************************************************************************************************************************************************** : ok=31 changed=0 unreachable=0 failed=1

Hi @StephenBradley,

did you delete the directory /opt/minemeld/prototypes before rerunning the playbook?




Well, it seemed to get farther, but now I get a "firewall not running" error.
