Kernel Panic Reporting

stutz
Contributor

Is there a command or something that can be run to report whether a computer has had a kernel panic, much like we can report on whether a computer's hard drive is almost full?

I would like to create a smart group that would notify me if a user experiences a kernel panic.

1 ACCEPTED SOLUTION

mm2270
Legendary Contributor III

Like @StoneMagnet said, they should be located there. Similarly though, I have none in there to verify that.

I'm not entirely sure, but I think panic logs end with a .panic extension. At least that's what I seem to remember. If I'm correct, you should be able to write an EA to capture the number of them within a certain timeframe (just so you aren't worrying about panics that happened weeks ago for example)

#!/bin/sh

PanicLogCount=$(find /Library/Logs/DiagnosticReports -Btime -7 -name "*.panic" | awk 'END{print NR}')

echo "<result>$PanicLogCount</result>"

The above would give you a count of panic logs found that were created within the last 7 days. Anything older won't get included. It will typically show "0" as the result. If you save the EA as an integer type, you can create a Smart Group that would show anything with more than 0 results so you can keep track of Macs that have any kind of panic logs within the last week.
You can change the -7 to something lower or higher of course.
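The same idea can be wrapped in a function for local testing. This is a sketch, not the EA itself: `count_recent_panics` is a hypothetical helper name, and it uses `-mtime` (modification time) so it also runs on GNU find — the EA above uses `-Btime` (file creation time), which is specific to BSD/macOS find.

```shell
# Sketch: count .panic logs in a directory modified within the last N days.
# NOTE: the EA above uses -Btime (creation time, BSD/macOS find only);
# -mtime is used here so the function also runs on GNU find.
count_recent_panics() {
    dir="$1"
    days="$2"
    find "$dir" -name '*.panic' -mtime "-$days" 2>/dev/null | awk 'END{print NR}'
}
```

For example, `count_recent_panics /Library/Logs/DiagnosticReports 7` would print the count for the last week.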

You might need to verify that the panic logs actually get created with the .panic extension on them. I can't verify that, so I'm just going on memory. I may be wrong about that though.



StoneMagnet
Contributor III

@stutz Kernel panics should in theory be logged to /Library/Logs/DiagnosticReports (I say in theory because I don't currently have any logged kernel panics to verify). You could write an EA that looks for the presence of a kernel panic log in that location and trigger your smart group on that.
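A minimal presence-check EA along those lines might look like this (a sketch: the Yes/No result strings and the overridable directory argument are my own additions, the latter so it can be tested against any folder):

```shell
# Sketch: report Yes/No depending on whether any .panic logs exist
# in the given directory (defaults to the macOS diagnostic reports path).
report_panic_presence() {
    dir="${1:-/Library/Logs/DiagnosticReports}"
    if find "$dir" -name '*.panic' 2>/dev/null | grep -q .; then
        echo "<result>Yes</result>"
    else
        echo "<result>No</result>"
    fi
}
```

A smart group could then trigger on the EA value being "Yes".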

jhbush
Valued Contributor II

I don't recall where I picked this up, but it works for us, so thank you to whoever wrote this.

#!/usr/bin/python
from datetime import datetime
import glob
import os

# Panic logs are named like Kernel_2017-05-12-101928_computername.panic;
# sort so the dates come out in chronological order.
panic_files = sorted(glob.glob("/Library/Logs/DiagnosticReports/*.panic"))
panic_dates = []
time_format = "%Y-%m-%d"
date_differences = []
total = 0.0

if not panic_files:
    print "<result>No Panics</result>"
elif len(panic_files) == 1:
    print "<result>One Panic</result>"
else:
    # The second underscore-delimited field starts with the YYYY-MM-DD date.
    for panic_file in panic_files:
        stamp = os.path.basename(panic_file).split('_')[1]
        panic_dates.append('-'.join(stamp.split('-')[:3]))

    # Collect the gaps between consecutive panics, skipping gaps of 60+ days
    # so long quiet stretches don't skew the average.
    for position, panic_date in enumerate(panic_dates[:-1]):
        gap = (datetime.strptime(panic_dates[position + 1], time_format) -
               datetime.strptime(panic_date, time_format)).days
        if gap < 60:
            date_differences.append(gap)
    # Also include the time since the most recent panic.
    date_differences.append((datetime.now() - datetime.strptime(panic_dates[-1], time_format)).days)

    for date_difference in date_differences:
        total += date_difference

    panic_average = int(round(total / len(date_differences), 0))

    if panic_average > 30:
        print "<result>Occasional Panics\nEvery " + str(panic_average) + " days</result>"
    elif panic_average > 13:
        print "<result>Regular Panics\nEvery " + str(panic_average) + " days</result>"
    else:
        print "<result>Frequent Panics\nEvery " + str(panic_average) + " days</result>"


stutz
Contributor

@mm2270 Your simple script did exactly what I needed it to, thx. You were all correct about the panic logs.

They are located in: /Library/Logs/DiagnosticReports

Filename: Kernel_2017-05-12-101928_computername.panic
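Given that naming scheme, the panic date can be pulled out of a filename like so (a sketch; `extract_panic_date` is a hypothetical helper name):

```shell
# Sketch: extract YYYY-MM-DD from a Kernel_YYYY-MM-DD-HHMMSS_host.panic name.
extract_panic_date() {
    # Field 2 (underscore-delimited) is the timestamp; its first 10
    # characters are the date.
    basename "$1" | awk -F'_' '{print substr($2, 1, 10)}'
}
```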

For those who want to test this EA and need to force a kernel panic, I found an Apple technote explaining how to do it:

https://developer.apple.com/library/content/technotes/tn2004/tn2118.html

sudo dtrace -w -n "BEGIN{ panic();}"

CasperSally
Valued Contributor II

I understand this isn't what the user was asking for, but in case anyone searches for help with kernel panics: I got this tip from Apple that can be helpful for gathering logs on them. These logs can be provided to Apple if you have an enterprise agreement.

When a user gets a kernel panic, ask them ASAP to press Control-Option-Command-Shift-Period to automatically generate a log file. Their screen will blink, and after a few minutes a folder will open containing a sysdiagnose_2016_date.tar.gz file. The helpdesk can gather the .tar.gz file, which resides in /private/var/tmp, and ask the user for the time of the crash for reference.

Last time I tested the above was on 10.10, when we had the kernel panic issues, but hopefully it still works.
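For the helpdesk step, a small sketch to list any sysdiagnose archives left in that location, newest name first (`list_sysdiagnose_archives` is a hypothetical helper; the path and filename pattern come from the tip above):

```shell
# Sketch: list sysdiagnose archives under a directory, newest name first
# (the timestamped names sort chronologically, so reverse-sorting works).
list_sysdiagnose_archives() {
    dir="${1:-/private/var/tmp}"
    find "$dir" -name 'sysdiagnose_*.tar.gz' 2>/dev/null | sort -r
}
```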