
Tracking Performance Trends

On December 24, 2003, a woman broke into an exhibit case in Kentucky’s Owensboro Medical Health System and stole a case of 50 antique glass eyes. The theft was an unwelcome Christmas present that could’ve been a black eye for the hospital, but fortunately, the security team had the right detection measures in place. The woman, who had the unlikely but appropriate name of Wink, was recorded stealing the goods by the hospital’s CCTV cameras and was quickly caught.

Apprehensions are one mark of the security department’s effectiveness. But the security department at the Owensboro Medical Health System—which has some 447 beds and which handled more than 60,000 emergency-room visits last year—wanted a more comprehensive way to measure its performance on a day-to-day basis. It chose as its metric average hours per incident.

Selecting an indicator. In developing a system for looking at how well security resources are deployed and how effective they are, the first challenge was identifying what exactly should be monitored. While security incidents are easy to count, we wanted to go beyond whether incidents were trending up or down. We also wanted to go beyond simply looking at whether costs per square foot were up or down.

The goal was to select and define an indicator that could be used to measure the level of security and the effectiveness of preventive activities. The indicator chosen was time per incident.

The first step was to quantify the time devoted to each reported incident as a way to establish a baseline for security coverage. As the security supervisor, I planned to correlate each new measurement against this baseline as a workable measure of security performance.

Measurement components. There are two components of the performance measurement. First is the hours devoted to security. This factor only includes regular and overtime hours that the security staff is actually working—it doesn’t include any other hours, such as vacation or sick time.

Second are the incidents and activities themselves. In a healthcare setting, incidents might include disturbances caused by visitors or patients, medical detentions, or safety-related occurrences such as fire drills. A comprehensive risk assessment will help define the types of incidents a facility will need to track.

Activities may encompass routine duties that security staff carry out, such as patrolling the grounds, escorting visitors, or bringing articles to or from the safe. All of these specific incident responses and routine activities are collectively called incidents for simplicity’s sake throughout this article.

To determine a measure of performance, the total number of security hours was correlated to the number of incidents to provide a ratio of hours to the total number of tasks completed. This is not a measure of the amount of time devoted to each security assignment—which can range from a few minutes for a safe run to a full shift for an officer sitting with a detained patient—rather, it is a global statistical ratio of total hours worked to total security actions handled.
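The ratio described above can be sketched in a few lines of code. The figures here are hypothetical illustrations, not the facility's actual data:

```python
# Hours-per-incident ratio: total security hours worked (regular plus
# overtime, excluding vacation and sick time) divided by the total number
# of incidents and activities handled in the same period.
def hours_per_incident(hours_worked: float, incidents: int) -> float:
    """Global statistical ratio of staff hours to security actions handled."""
    if incidents == 0:
        raise ValueError("no incidents recorded for the period")
    return hours_worked / incidents

# Hypothetical month: roughly 10 full-time officers at 40 hours per week
# (about 1,733 hours in a month), with 150 incidents logged.
ratio = hours_per_incident(1733, 150)  # about 11.6 hours per incident
```

Note that this deliberately ignores how long any single assignment took; it is a coverage-level indicator, not a time study.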

Graphing results. By graphing this relationship of total hours to total incidents each month, we developed a curve that represented a level of performance for the facility. While I can't go into the specifics from my own organization for confidentiality reasons, the point can be illustrated with two years of hypothetical numbers.

Year 1 (see chart) shows typical statistics for a facility with a security staff of about 10 full-time officers with a representative number of incidents recorded each month during the year. You can see that towards the end of the year there is an alarming downward trend in the curve; that is, there were fewer hours spent on each incident. 

There were several possible explanations for this. For example, the fictional organization might have been expanding, such as by adding a new medical office building. As a result, officers would have had more areas to patrol.

Perhaps the hours of outpatient services were extended as well, meaning that there were more people in the building than in earlier months. Since the number of security officers remained the same despite the larger facility and the extended hours, there would have been more incidents to respond to within the same time frame, thus causing the downward trend.

Benchmark. At Owensboro, we chose a baseline of 12 hours per incident. Because the system was still under development, this number was chosen provisionally after reviewing the existing data. It served as a benchmark against which future data could be analyzed.

If this number proved to be off the mark as a reasonable baseline, we could adjust it later. But as long as it was the baseline, the goal would be to track trends against this number, and where the results rose or fell, to find out why and to take steps to reallocate resources so that the average hours per incident would stay in the range of 12.
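The trend-tracking step above can be sketched as a simple screening pass over the monthly ratios. The 12-hour baseline comes from the article; the tolerance band and the monthly figures are hypothetical assumptions for illustration:

```python
# Flag months whose hours-per-incident ratio drifts from the provisional
# 12-hour benchmark by more than a tolerance, so the cause can be investigated.
BASELINE = 12.0   # hours per incident (the provisional benchmark)
TOLERANCE = 1.5   # hypothetical acceptable band around the baseline

def flag_months(monthly_ratios):
    """Return (month_number, ratio) pairs that fall outside the band."""
    return [(m, r) for m, r in enumerate(monthly_ratios, start=1)
            if abs(r - BASELINE) > TOLERANCE]

# Hypothetical year: a downward drift appears late in the year, as in the
# Year 1 chart described above.
ratios = [12.1, 11.8, 12.4, 12.0, 11.9, 12.2, 11.7, 12.3, 11.1, 10.4, 9.8, 9.5]
outliers = flag_months(ratios)  # months 10, 11, and 12 fall below the band
```

A flagged month is only a prompt for investigation; as the article notes, the same shift can come from more incidents, fewer staffed hours, or both.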

If the number of hours per incident rose, that might indicate a reduction in the number of incidents. Alternatively, it might simply mean that more hours were available thanks to overtime or fewer sick days. We analyzed the data each month to determine the underlying cause of the shift and to put the findings into proper context for our own use and for management.

When hours per incident are up, the security department can reallocate resources to improve overall performance. For example, security officers could be directed to devote more time to making rounds, thus providing a more visible presence to deter crime. Additionally, they could be more available to defuse potentially volatile situations before they could escalate, and to work closely with the public, patients, families, and visitors to increase customer satisfaction by attending to their needs, such as escorting visitors or staff to parking areas.

Conversely, if security hours decrease or incidents increase, the number of hours per incident will decline, as happens in the example chart. By examining the underlying data about incidents and staff time, the security department can assess the cause and take corrective action or use the numbers to justify a request for more staff.

In our case, we were expanding the facility, and our analysis showed that the addition of one-half of a full-time equivalent (FTE) to patrol the added space would bring our hours per incident back into compliance. This calculation showed a whole FTE was not necessary, particularly when an adjustment in fixed factors was made, such as a revision of lockdown procedures and the installation of new cameras and signage in the new medical office building. Not having to hire a full FTE would save the department money, but because the metrics showed that we were maintaining our benchmark goal, we knew that we were not sacrificing the level of security in the process.

It’s interesting to note that if we had used a more traditional indicator such as hours per square foot, we could have argued that the facility needed a whole FTE as opposed to one-half FTE. By using the performance measurement formula, and making improvements in fixed security factors, our goal was attainable while still keeping within budget constraints.
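The staffing arithmetic behind this kind of decision can be sketched as follows. All figures here, including the hours-per-FTE conversion, are hypothetical assumptions, not the facility's actual numbers:

```python
# Back-of-envelope staffing check: given a forecast incident count and the
# 12-hour benchmark, how many additional FTEs would hold the ratio at target?
HOURS_PER_FTE_MONTH = 173.3  # ~40 hrs/week * 52 weeks / 12 months (approximation)
TARGET = 12.0                # benchmark hours per incident

def added_ftes_needed(current_hours: float, forecast_incidents: int) -> float:
    """FTEs to add so that monthly hours / incidents returns to the target."""
    required_hours = TARGET * forecast_incidents
    shortfall = max(0.0, required_hours - current_hours)
    return shortfall / HOURS_PER_FTE_MONTH

# Hypothetical expansion month: 1,733 staff hours on hand, 152 incidents
# forecast after the new building opens.
delta = added_ftes_needed(1733, 152)  # roughly half an FTE
```

The point of the sketch is that the metric sizes staffing against workload rather than against square footage alone, which is how the half-FTE conclusion can emerge where a space-based indicator would suggest a full position.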

The increase in security coverage raised the curve back to the desired security level even though there were actually more incidents reported in some months. The Year 2 graph shows how implementing this type of improvement plan could affect the numbers.

Working with this model over the past couple of years has helped us to establish the appropriate staffing levels for the area we presently cover. As we expand our medical office areas and build a new cancer treatment center, we will continually reevaluate our staffing requirements.

PDCA. Creating a system to benchmark security performance was an important element, but it was only part of our overall solution. Our facility uses the Plan-Do-Check-Act (PDCA) cycle for performance improvement to comply with Joint Commission on Accreditation of Healthcare Organizations’ performance standards. Our PDCA performance improvement model was developed as follows.

Plan. Our plan was to monitor the level of our security by trending the number of hours as a function of total security responses to determine a level of security performance, with a goal of maintaining an average of 12 hours per incident.

Do. Officers fill out an incident report for each security incident. The report describes the incident, the actions taken by officers, and the results of those actions. This security log is kept in a box in the security office, and subsequent security shifts review it to see what’s going on in the facility.

We expanded our camera system and redirected several cameras. We enhanced security by securing access to the building after hours, and we are reviewing our lockdown procedures as they apply to both staff and visitors. We are currently upgrading our badge-access entry points to the building to limit access to the building during off hours.

We created a dedicated security office near the ER from which to centralize security operations. And a security officer now makes a proactive effort to reduce security incidents by making a presentation at new employee orientation about parking and personal security habits.

Check. We checked our progress by using the security incident reports as source documents for reporting all incident statistics to the Environment of Care committee each month and at year-end. This information is graphed along with hourly payroll statistics to allow us to see our progress.

Act. We acted on the results by changing coverage and modifying protocols as required to address these issues. We adjusted our staffing levels to accommodate our new service offerings and expanded facilities.

The final piece consisted of reporting our performance to the Environment of Care Committee and including the performance results in the annual security evaluation submitted to the hospital’s governing body each year.

What’s ahead. Despite the benchmarking tool’s effectiveness so far, it’s still in its formative stages. One thing that has become clear is that not all incidents are the same, so there needs to be a way to weight each one and to add those weighted values to the mix. This is an effort I am working on presently.
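One possible shape for that weighting effort is sketched below. The incident types echo those mentioned earlier in the article, but the weight values themselves are illustrative assumptions, not the facility's actual figures:

```python
# Sketch of weighting incidents by type before computing the ratio, since a
# quick safe run and a full-shift patient detention are not equal workloads.
# Weight values here are hypothetical placeholders.
WEIGHTS = {
    "safe_run": 0.25,          # a few minutes
    "visitor_escort": 0.5,
    "disturbance": 1.0,
    "medical_detention": 4.0,  # can tie up an officer for much of a shift
}

def weighted_incident_count(counts: dict) -> float:
    """Sum of incident counts scaled by their per-type weight."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical month of incident counts by type.
month = {"safe_run": 60, "visitor_escort": 40,
         "disturbance": 30, "medical_detention": 5}
weighted = weighted_incident_count(month)
# The ratio would then be hours_worked / weighted rather than hours / raw count.
```

Dividing hours by a weighted total rather than a raw count would keep a month heavy in trivial tasks from masking a month heavy in demanding ones.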

For now the tool allows us to benchmark our security performance, and it gives us a way of communicating to management what level of security is being provided. It also provides a basis for funding requests in an era of increased competition for available resources.

The net outcome is that we now have a much better confidence level in our security coverage because we have a simple method of visually presenting our level of security that management and security staff can identify with, and one that helps justify requests for security enhancements when new security challenges arise. 

Stephen Wall supervises security and communications at Owensboro Medical Health System in Owensboro, Kentucky, which services western Kentucky and southern Indiana. He has nine years of experience in coordinating environment-of-care issues for their facility.
