In-App Help was last updated in 8.6.4 (released 10/23/2019). If you are looking for help with a feature that has been added or enhanced since 8.6.4, check Online Help from the Help menu.
The Acuity Interrater Reliability Report
Interrater reliability measures the degree of consistency in acuity assessments. You can run this report to evaluate interrater reliability between a caregiver acuity assessment and an audit assessment.
The report calculates percent agreement between Likert ratings by outcomes using absolute and near agreement comparisons. The report supports the audit process, an integral part of the acuity methodology measuring the degree of consistency in acuity assessments. Consistently accurate acuity assessments result in accurate workload measurements.
The report uses a standard RSQ (r squared) correlation equation.
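As an illustrative sketch only (not part of the product), the RSQ (r squared) calculation the report describes is the squared Pearson correlation between paired assessment and audit acuity levels. The function and variable names below are hypothetical:

```python
# Hypothetical sketch of a standard RSQ (r squared) calculation: the squared
# Pearson correlation between paired acuity levels from assessments and
# their audits. Names are illustrative, not taken from the application.

def rsq(assessed, audited):
    """Return r squared for paired assessment/audit acuity levels."""
    n = len(assessed)
    mean_a = sum(assessed) / n
    mean_b = sum(audited) / n
    # Covariance and variances of the two rating series
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(assessed, audited))
    var_a = sum((a - mean_a) ** 2 for a in assessed)
    var_b = sum((b - mean_b) ** 2 for b in audited)
    r = cov / (var_a * var_b) ** 0.5  # Pearson correlation coefficient
    return r * r                      # RSQ (r squared)
```

An RSQ of 1.0 indicates the assessment and audit acuity levels move together perfectly; values near 0 indicate little consistency.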
You can sort this report by patient, assessor, or auditor and include the Likert rating by patient by outcome when the acuity level between the assessor and audit assessment is not identical. You can also select to flag which acuity assessments were generated through direct entry and which were generated through a clinical documentation interface.
Running the Acuity Interrater Reliability Report
Complete the following steps to run the Acuity Interrater Reliability report:
- From the Reports menu, select Acuity > Acuity Interrater Reliability. This opens the selection criteria page.
- On the Selection Criteria tab, select one or more facilities from the Facilities section. Use CTRL+click to select multiple items. Use SHIFT+click to select a range of items.
- Select one or more profiles from the Profiles section. Staff Manager automatically limits the profiles to those set up for Acuity.
- Enter a start date in the box or click the calendar icon to use the date menu.
- Enter an end date in the box or click the calendar icon to use the date menu.
- Select a Sort Patient Data by option: Name, Assessor, or Auditor.
- Select an Include Assessments/Audits by Patient? option:
- Yes, all outcomes assessed and audited
- Yes, only outcomes with different Likert ratings
- No
- Select a Flag Assessments that are: option:
- Direct Entry: Direct Entry assessments are marked with (DE) in the Acuity Level column.
- From Interface: Interface assessments are marked with (IF) in the Acuity Level column.
- No flags: When the no flags option is selected, all audited assessments are included, regardless of source.
- Select an Export Type.
- If you select PDF, Staff Manager opens the report results as a PDF.
- If you select Microsoft Office Excel, Staff Manager exports the report results as an Excel spreadsheet. You must have Microsoft Office Excel or Excel Viewer installed on your workstation to use this option.
- Click Run Report. If you selected the PDF export type, the report is displayed in the Report Output pane. If you selected the Excel export type, an alert window opens to let you select whether to open the report or to save the report as an Excel spreadsheet. It is recommended that you select Save so that you can review the report in Excel.
Be aware that report formatting is not retained when you export a report to Excel.
Viewing the Acuity Interrater Reliability Report
This report calculates percent agreement using absolute and near agreement comparisons. Absolute agreement indicates the Likert ratings selected for an outcome match. Near agreement indicates the Likert ratings for the outcome were within one value of each other. Interrater reliability is met for an outcome when the sum of the percent absolute agreement and percent near agreement is 85% or higher. You can sort the by-patient assessment and audit detail portion of the report by patient, assessor, or auditor.
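The agreement percentages and the 85% threshold described above can be sketched as follows. This is an illustrative example only, not the application's implementation; the function name and sample data are hypothetical, and each pair holds the assessor's and auditor's Likert ratings for one outcome:

```python
# Illustrative sketch of the percent agreement comparisons described above.
# Each pair is (assessor Likert rating, auditor Likert rating) for one outcome.

def agreement_summary(pairs):
    n = len(pairs)
    # Absolute agreement: the ratings match exactly
    absolute = sum(1 for a, b in pairs if a == b)
    # Near agreement: the ratings are within one value of each other
    near = sum(1 for a, b in pairs if abs(a - b) == 1)
    pct_absolute = 100.0 * absolute / n
    pct_near = 100.0 * near / n
    # Interrater reliability is met when the combined percentage is 85% or higher
    met = (pct_absolute + pct_near) >= 85.0
    return pct_absolute, pct_near, met
```

For example, four audited outcomes with ratings (3,3), (2,3), (4,4), (1,3) yield 50% absolute agreement and 25% near agreement, for a combined 75%, so interrater reliability would not be met.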
The first section of the report is a summary including the following information.
- Date Range
- Profile: Location within Profile: Hospital Service (if applicable)
For each profile you selected, the application creates a separate report for each location defined within the profile and for each outcome set defined by hospital service.
- Total Number Assessments Audited
The total number of assessments audited for your selected location and service.
- Total Number Outcomes Audited
The total number of outcomes audited for the selected location and service if applicable. Since the report is by outcome set, each assessment audited has an identical number of outcomes.
- Acuity Match (N/%)
The number and percentage of audited assessments in which the acuity levels generated by the assessment and the audit were identical.
- Assess Acuity > Audit Acuity (N/%)
The number and percentage of audited assessments in which the acuity level generated by the assessment was greater than the level generated by the audit.
- Audit Acuity > Assess Acuity (N/%)
The number and percentage of audited assessments in which the acuity level generated by the audit was greater than the level generated by the assessment.
- Acuity Level Correlation
The correlation between the acuity level generated by the assessment and audit using a standard RSQ (r squared) calculation.
The second section reviews Interrater Reliability for All NOC Outcomes in Audit, and includes the following information.
- NOC Outcomes
The list of NOC outcomes in the defined outcome set being reported.
- # Assess/Audit
The number of times each outcome was assessed and audited, with each assessment/audit combination equaling 1.
- Absolute Agreement (percentage)
The percentage of times the Likert rating selected by the assessor was identical to the Likert rating selected by the auditor.
- Near Agreement (percentage)
The percentage of times the Likert rating selected by the assessor was not identical to the Likert rating selected by the auditor but was within one Likert rating.
- Outcome Rating Correlation (percentage): The correlation between the outcome rating generated by the assessment and audit using a standard RSQ (r squared) calculation.
The third section reviews assessments and audits by patient. This section is included in the report only when you select a Yes option for Include Assessments/Audits by Patient? For each patient assessed and audited, the report includes the following information.
- Patient Name
- Assessed By (with or without Assessment Flag)
The user name of the person who completed the assessment. If you selected to include assessment flags in the report, (DE) follows the user name for assessments filed by direct entry and (IF) marks assessments filed by a clinical documentation interface. If you selected the no flags option, only the user name displays in this column.
- Audited By
The user name of the person who completed the audit assessment.
- # Outcomes
The number of outcomes included in this outcome set.
- Weighted Average
The average of the weighted raw acuity scores generated by the assessment and the average of the weighted raw acuity scores generated by the audit.
- RN Acuity
The acuity level calculated by the application from the assessment.
- Audit Acuity
The acuity level calculated by the application from the audit.
- Detailed Summary
The assessment and audit dates/times and acuity levels.
Identical Likert ratings display in black, values that differ by one on the Likert scale (near agreement) display in blue, and values that differ by more than one (non-agreement) display in red. This makes it easier for you to see where there are differences between the assessor and the auditor in making acuity assessments.
For example, assume the outcome is Respiratory Status: Gas Exchange and the available Likert ratings are:
Severely Compromised, Substantially Compromised, Moderately Compromised, Mildly Compromised, and Not Compromised.
- If both the assessor and the auditor select Substantially Compromised, the Likert ratings display in black on the report, indicating absolute agreement.
- If the assessor selects Substantially Compromised and the auditor selects Moderately Compromised, the Likert ratings display in blue since the values are one value apart, indicating near agreement.
- If the assessor selects Substantially Compromised and the auditor selects Mildly Compromised, the Likert ratings display in red since the values are more than one value apart, indicating non-agreement.
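The color coding in the example above follows directly from how far apart the two Likert ratings fall. As an illustrative sketch only (the helper name is hypothetical, not part of the product), the classification could be expressed as:

```python
# Hypothetical helper mirroring the color coding described above: black for
# identical Likert ratings (absolute agreement), blue for ratings one value
# apart (near agreement), red for ratings more than one value apart.

LIKERT = [
    "Severely Compromised",
    "Substantially Compromised",
    "Moderately Compromised",
    "Mildly Compromised",
    "Not Compromised",
]

def rating_color(assessor_rating, auditor_rating):
    diff = abs(LIKERT.index(assessor_rating) - LIKERT.index(auditor_rating))
    if diff == 0:
        return "black"  # absolute agreement
    if diff == 1:
        return "blue"   # near agreement
    return "red"        # non-agreement
```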
Sample Report
Related Topics