Process mcmgr consumes high CPU

Symptoms
The mcmgr process is the Multicast Manager process and is responsible for coordinating all multicast traffic within EXOS.

The symptom appears when a device begins to log entries such as the following:
07/17/2015 09:52:28.74 <Warn:EPM.cpu> CPU utilization monitor: process mcmgr consumes 64 % CPU
07/17/2015 09:47:08.69 <Warn:EPM.cpu> CPU utilization monitor: process mcmgr consumes 62 % CPU
07/17/2015 09:46:08.71 <Warn:EPM.cpu> CPU utilization monitor: process mcmgr consumes 64 % CPU
07/17/2015 09:41:08.72 <Warn:EPM.cpu> CPU utilization monitor: process mcmgr consumes 63 % CPU
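For trending how often these warnings fire, the CPU percentages can be extracted from the log with a short script. This is purely illustrative; the parsing pattern below is derived from the sample log lines above:

```python
import re

# Matches EPM.cpu warnings of the form shown above, e.g.
# "... <Warn:EPM.cpu> CPU utilization monitor: process mcmgr consumes 64 % CPU"
PATTERN = re.compile(
    r"<Warn:EPM\.cpu> CPU utilization monitor: process (\S+) consumes (\d+) % CPU"
)

def mcmgr_cpu_samples(log_lines):
    """Return the CPU percentages reported for the mcmgr process."""
    samples = []
    for line in log_lines:
        m = PATTERN.search(line)
        if m and m.group(1) == "mcmgr":
            samples.append(int(m.group(2)))
    return samples

log = [
    "07/17/2015 09:52:28.74 <Warn:EPM.cpu> CPU utilization monitor: process mcmgr consumes 64 % CPU",
    "07/17/2015 09:47:08.69 <Warn:EPM.cpu> CPU utilization monitor: process mcmgr consumes 62 % CPU",
]
print(mcmgr_cpu_samples(log))  # -> [64, 62]
```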
Alternatively, the output of the show cpu-monitoring command, together with the top command, shows high CPU utilization for the mcmgr process:
CPU Utilization Statistics - Monitored every 20 seconds
-----------------------------------------------------------------------

Process      5    10   30   1    5    30   1    Max        Total
            secs secs secs min  mins mins hour           User/System
            util util util util util util util util      CPU Usage
            (%)  (%)  (%)  (%)  (%)  (%)  (%)  (%)        (secs)
-----------------------------------------------------------------------

System        n/a  n/a  5.2 24.4 25.7 26.3 26.3 99.9    53.09   186565.34
aaa           n/a  n/a  0.0  0.0  0.0  0.0  0.0  3.8    26.50      51.14
acl           n/a  n/a  0.0  0.1  0.1  0.1  0.1  3.2   230.22     762.09
bfd           n/a  n/a  0.0  0.0  0.0  0.0  0.0  3.8   131.53     199.46
mcmgr         n/a  n/a  9.6 55.0 55.6 54.5 54.1 87.4 67524.66   212234.91
mpls          n/a  n/a  0.0  0.0  0.0  0.0  0.0  0.0     0.00       0.00
mrp           n/a  n/a  0.0  0.0  0.0  0.0  0.0  2.8     7.84       9.91
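To flag offending processes automatically, the rows of this table can be scanned for a high 1-hour average. The following is a minimal sketch, assuming the whitespace-separated column layout shown above; the 50% threshold is an arbitrary example, not an EXOS default:

```python
def high_cpu_processes(table_rows, threshold=50.0):
    """Return (name, 1-hour util) for processes above the threshold.

    Each row follows the sample output above:
    name, 5s, 10s, 30s, 1min, 5min, 30min, 1hr, max, user, system.
    'n/a' fields are skipped.
    """
    offenders = []
    for row in table_rows:
        fields = row.split()
        name, one_hour = fields[0], fields[7]  # 8th field = 1-hour util
        if one_hour != "n/a" and float(one_hour) > threshold:
            offenders.append((name, float(one_hour)))
    return offenders

# Rows taken from the show cpu-monitoring sample above
rows = [
    "System  n/a n/a 5.2 24.4 25.7 26.3 26.3 99.9 53.09 186565.34",
    "mcmgr   n/a n/a 9.6 55.0 55.6 54.5 54.1 87.4 67524.66 212234.91",
]
print(high_cpu_processes(rows))  # -> [('mcmgr', 54.1)]
```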
Environment
  • Summit
  • BlackDiamond
Cause
  • Excess multicast streams are being handled by the CPU. These could be multicast reports from devices using particular multicast groups.
Resolution
The following command outputs help diagnose the source of the multicast traffic:
  • show ipstats shows statistics for all packets that have been handled by the CPU. If the InMcast counter accounts for the majority of the traffic logged, multicast is the likely cause.
* T77Node.5 # show ipstats
IP Global Statistics
InReceives =  954858862 InUnicast  =      88105 InBcast    =     450527
                        InMcast    =  954320230
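From these counters, the multicast share of CPU-bound traffic is simply InMcast divided by InReceives; for the sample above this works out to roughly 99.9%, confirming that multicast dominates. A quick check:

```python
def mcast_share(in_receives, in_mcast):
    """Fraction of CPU-handled packets that were multicast."""
    return in_mcast / in_receives

# Counters taken from the show ipstats sample above
print(round(mcast_share(954858862, 954320230) * 100, 1))  # -> 99.9
```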
If ipforwarding is enabled, identifying which router interface has the highest "Mcast pkts" count for IN traffic will further narrow down the source of the problem:
Router Interface pri-servoip-campus
     inet 172.24.64.252 netmask 255.255.255.0 broadcast 172.24.64.255
     Stats:  IN         OUT
      952214855        7389 packets
      861652293      240255 octets
      952068131        7372 Mcast pkts
         146724           0 Bcast pkts
              0           0 errors
              0           0 discards
              0             unknown protos
  • debug hal show congestion shows how many packets sent to the CPU were dropped because the CPU reached its handling limit. On stacks and chassis-based hardware, this output is shown for each slot in the stack or chassis.
    
T77Node.11 # debug hal show congestion
Congestion information for slot 1 type X460-48t since last query
  No switch fabric and CPU congestion present

Congestion information for slot 2 type X460-48t since last query
  No switch fabric and CPU congestion present

Congestion information for slot 3 type X460-24p since last query
  CPU congestion present: 48610669
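On a large stack or chassis, the per-slot blocks can be reduced to just the congested ones. The following sketch parses output in the format shown above; it is an illustration only, not an EXOS feature:

```python
import re

SLOT_RE = re.compile(r"Congestion information for slot (\d+) type (\S+)")
CONG_RE = re.compile(r"CPU congestion present: (\d+)")

def congested_slots(output):
    """Map slot number -> drop count for slots reporting CPU congestion.

    Lines like "No switch fabric and CPU congestion present" carry no
    counter and are ignored.
    """
    slots = {}
    current = None
    for line in output.splitlines():
        m = SLOT_RE.search(line)
        if m:
            current = int(m.group(1))
            continue
        m = CONG_RE.search(line)
        if m and current is not None:
            slots[current] = int(m.group(1))
    return slots

# Sample trimmed from the debug hal show congestion output above
sample = """Congestion information for slot 1 type X460-48t since last query
  No switch fabric and CPU congestion present

Congestion information for slot 3 type X460-24p since last query
  CPU congestion present: 48610669"""
print(congested_slots(sample))  # -> {3: 48610669}
```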
 


 