
How to Improve Your Image

The great outdoors can be not-so-great for those who are setting up outdoor surveillance systems and attempting efficient perimeter detection. The equipment has to deal with weather, wind, possible tampering, and even the occasional wild animal or two. Additionally, there are trees, brush, pedestrians, and the toughest opponent of all—the darkness. Here is a look at recent trends in outdoor surveillance and perimeter protection and some best practices that can help to make the systems more effective.

Thermal Cameras
 
The lack of light outdoors at night is a thorn in the side of integrators. It isn’t always possible to install more lighting to make visible-light cameras work better in the dark. So security managers often turn to other tools.
 
Consultants, vendors, and integrators interviewed for this article kept coming back to one major factor: the dropping price of thermal imaging cameras. Thermal imagers read the heat energy radiating off objects and render an image of the differences in that energy across the field of view.

Thermal imagers are increasingly being used in conjunction with analytics and other sensors to reduce false-alarm rates. “Deploying analytics is no substitute for having lighting or infrared illumination. If you’ve got zero lighting at night, then the best use of analytics is linked to a thermal imager,” says John Whiteman, DVTel’s vice president of strategic programs.
 
In addition, some thermal cameras are used in conjunction with visible-light cameras, although it should be noted that visible-light cameras do need some amount of visible or infrared illumination to produce an image.
 
Thermal cameras are best at the initial detection, especially when the lack of light isn’t the only impediment to visibility. “There are some scenarios where a thermal camera will always be superior. You can see through fog, through rain, behind leaves of a tree. You can never do that with a regular camera, doesn’t matter how light sensitive it is,” says Fredrik Nilsson of Axis Communications.
 
While other companies are jumping into the thermal market, FLIR Systems, long the dominant market leader, is attempting to stand its ground by improving its products. FLIR is now offering a higher resolution in its thermal cameras throughout its full portfolio. Bill Klink, FLIR’s vice president for security and surveillance, says that the resolution has gone up to a 640 by 480 pixel matrix. “When compared to a visible-light camera, that’s not exciting, because they’re obviously in megapixel land. For a thermal camera, that’s a big jump. It’s four times the number of picture elements [that] had been the standard in the industry, which was 320 by 240,” says Klink.
 
The increase in the number of pixels means that you can start with a wider field-of-view lens. (This might not be the best approach, however, if analytics are being used, as discussed later.)
 
“Basically it’s more economical. You can use fewer cameras to cover the same area,” explains Klink. He adds that the higher resolution decreases the need for longer-focal-length lenses. This is a plus because long lenses can get very expensive in thermal imaging, since they are made from exotic materials such as germanium rather than ordinary glass.
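 
As a back-of-the-envelope illustration of Klink’s point (the arithmetic below uses only the two resolutions quoted above and is not drawn from FLIR specifications), doubling both dimensions quadruples the number of picture elements, which is what lets one higher-resolution camera cover roughly the ground of four older ones at the same pixel density on target:

```python
# Illustrative arithmetic only; the two resolutions are the figures
# quoted above, not FLIR product specifications.
old_w, old_h = 320, 240
new_w, new_h = 640, 480

old_pixels = old_w * old_h          # 76,800 picture elements
new_pixels = new_w * new_h          # 307,200 picture elements
print(new_pixels / old_pixels)      # 4.0 -- "four times the number of picture elements"

# At the same pixels-on-target density, doubling both the width and the
# height of the sensor lets one camera take in twice the field-of-view
# width and height -- roughly the coverage area of four 320x240 cameras.
```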
 
Color Night Vision
 
Advances are also being made in visible-light cameras that work in very low light situations. In the past, these cameras have primarily been able to yield only black and white images. Now that is changing.
 
FLIR recently brought one such camera to market based on technology developed for military use and obtained through the acquisition of a company called Salvador Imaging.
 
The camera uses a sensor chip known as an EMCCD (electron-multiplying charge-coupled device). “If you multiply the electrons, you boost the camera’s ability to see in very low light conditions, and with color. And that’s really the key point here. You get good color video in almost darkness,” says Klink.
 
However, the cameras do need a bit of light in order to work, since they are still visible-light cameras, and that can still be an issue in certain areas. Klink says that these cameras might sometimes be used in conjunction with a thermal imager or some type of analytic or sensor. The cameras will range from “under $20,000” up to about $50,000, depending on the type of camera.
 
The image chips inside cameras are getting much more sensitive overall. “The good thing, at the end of the day, is that technology in general is improving,” says Nilsson. “It used to be that you had to use a CCD, the more traditional sensor for an outdoor surveillance because they’re more light sensitive.
 
“But with very fast development of the CMOS [complementary metal oxide semiconductor] sensors,” he notes, “they are increasingly becoming very good in low-light scenarios.” That advancement will continue as well, according to Nilsson. “So even regular cameras will continue to be better and better and more and more useful in outdoor environments as well.”
 
Wireless Integration
 
Outdoor surveillance, including perimeter detection, is also benefiting from advances in wireless technology.
 
“A lot of remote outdoor sites don’t lend themselves to physical infrastructure,” says Holly Tsourides, vice president of global sales for VideoIQ, who points out that if you’re installing miles down a fence line, it might be difficult to run traditional network cable. “You’ve had a lot of historical challenges there,” she notes.
 
But with the advent of digital cameras, she says, and with wireless subscription services becoming more affordable, it’s no longer necessary to run cables to or from the cameras.
 
To complete a robust surveillance system, “you’ve got to have video analytics, and you’ve got to have storage at the edge to take advantage of a cellular system,” says Tsourides.
 
According to Tsourides, surveillance cameras need the ability to store video at the edge so that cellular data costs don’t run too high from sending excessive amounts of video over the network. (More on edge processing later.)

Geographic Positioning

Outside surveillance is being improved further through advances in location technology, such as geotracking.
 
Paul Brewer, cofounder and vice president of technology at ObjectVideo, says these geographic tools provide a new approach to analytics. “If we have a camera, and we tell it its place in the world, now we have the ability to look at an area or street corner, so that we can make the data...searchable.... It’s not just camera number 32 anymore, it’s camera at the corner of Constitution and 23rd street, for example. And there are a number of ways we exploit that. But I think you’re going to start to see the GIS [geographic information system] data become more important in this world.”
 
Brewer adds that the use of geographic information can help decrease the amount of data transmitted to the back end. That’s because the system can display detections as icons on a map rather than sending every pixel over the network.
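 
A minimal sketch of what that looks like in practice, assuming the camera sends a small, geo-tagged detection record instead of raw video (the field names and coordinates below are illustrative, not ObjectVideo’s format):

```python
import json

# Hypothetical geo-tagged detection event. Instead of streaming every
# pixel to the back end, the camera sends a record like this and the
# map client draws an icon at the object's location. Field names,
# coordinates, and timestamp are made up for illustration.
event = {
    "camera_id": 32,
    "camera_label": "Constitution Ave & 23rd St",
    "object_class": "person",
    "object_location": {"lat": 38.8923, "lon": -77.0499},  # approximate, illustrative
    "timestamp": "2013-06-01T02:14:07Z",
}

payload = json.dumps(event).encode("utf-8")
one_raw_frame = 1920 * 1080 * 3   # bytes in a single uncompressed 1080p color frame

print(len(payload), "bytes of metadata vs.", f"{one_raw_frame:,}", "bytes per raw frame")
```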
 
Analytics
 
Intelligent video analytics added to outdoor surveillance cameras are also improving as the algorithms continue to develop. They were perhaps over-promoted in the last decade, but now the technology is helping analytics providers make good on their promises. Another positive change is that providers have learned to promise less and be more realistic. And in addition to better algorithms, other technological advances are improving surveillance and analytics options outdoors.
 
GeoRegistering. SightLogix offers what the company calls geoRegistering. According to John Romanowich, CEO of SightLogix, this means the software uses the camera’s mounting height and angle to interpret the scene geometrically, so the camera knows the location of each object it sees. “In knowing that, we can actually infer its size very precisely. And by inferring its size very precisely, we can provide very accurate filters based upon size,” says Romanowich.
 
“So, for example, human beings are usually somewhere between three feet and seven feet tall. So if you were to say, ‘well, it’s not likely that the human being is going to be less than a foot tall’…we put a filter that says anything smaller than a foot, we’ll ignore, which means that we’ve now eliminated 90 percent of the small animals that are likely to be a problem. You might say, ‘Well, gee, what about the bigger animals?’ and my comment about the bigger animals is, you’re kind of stuck with detecting them…. You better let the person watching for awhile alert them to it and let them decide intelligently whether or not they think that’s an animal or a potential likely intruder.” 
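 
A minimal sketch of the size filter Romanowich describes, assuming geo-registration has already converted each detection into an estimated real-world height (the threshold value and function names are illustrative, not SightLogix code):

```python
# Illustrative size filter; assumes geo-registration has already turned
# each detection into an estimated real-world height in feet.
MIN_HEIGHT_FT = 1.0   # "anything smaller than a foot, we'll ignore"

def passes_size_filter(estimated_height_ft: float) -> bool:
    """Keep detections tall enough to plausibly be a person; small
    animals fall below the one-foot threshold and are dropped."""
    return estimated_height_ft >= MIN_HEIGHT_FT

detections_ft = [0.4, 0.8, 3.2, 5.9]   # estimated object heights
candidates = [h for h in detections_ft if passes_size_filter(h)]
print(candidates)   # [3.2, 5.9] -- the small animals are filtered out
```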

Edge processing. Edge processing means processing video analytics and other data in the camera rather than only after it is sent over the network to a central server. There are pros and cons to doing this, and it is not optimal for all analytics applications.
 
One factor is that it adds to the cost of the camera. However, many of the sources interviewed advocate edge processing because it allows the analytics to work on the highest quality picture (rather than a compressed version sent over the network). Analytics are often only as good as the video they are using. Edge processing also saves on bandwidth and storage costs.
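 
In rough outline (placeholder functions only, not any camera vendor’s API), an edge-processing camera runs the analytic against the raw sensor frames and lets only events, rather than continuous full-rate video, cross the network:

```python
# Skeleton of an edge-processing loop; every function here is a stub
# standing in for camera firmware, not a real vendor API.

def capture_raw_frame():
    """Grab the next uncompressed frame from the sensor (stub)."""

def run_analytics(frame):
    """Run detection on the full-quality frame and return any hits (stub)."""
    return []

def store_at_edge(frame):
    """Append the frame to on-camera storage (stub)."""

def send_event_over_network(detections):
    """Transmit a small alarm/metadata message, e.g. over cellular (stub)."""

def edge_loop():
    while True:
        frame = capture_raw_frame()
        detections = run_analytics(frame)        # analytics see uncompressed pixels
        store_at_edge(frame)                     # video stays local; bandwidth stays low
        if detections:
            send_event_over_network(detections)  # only events travel over the link
```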
 
Self-calibration. One of the original problems with analytics was the time it took the end user to program the system to recognize what was a normal state of affairs and what should cause an alarm. Newer systems can learn the scene on their own and don’t need all the rules programmed into them by an integrator or security manager. This capability is called self-calibration.
 
This feature also allows the system to self-correct. Thus, if the camera gets moved around a bit because of the weather, a technician doesn’t have to come back out and recalibrate it.
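 
As a generic stand-in for the idea (OpenCV’s off-the-shelf adaptive background subtractor, not any vendor’s self-calibration algorithm), a model that keeps re-learning the scene on every frame absorbs gradual changes without anyone reprogramming rules:

```python
import cv2

# Generic illustration using OpenCV's adaptive background subtractor --
# a stand-in for the self-learning idea, not a vendor's implementation.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

cap = cv2.VideoCapture("perimeter.mp4")   # illustrative file name
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Each call both flags foreground pixels and updates the background
    # model, so the "normal" scene is learned and re-learned continuously
    # rather than programmed in as fixed rules by an installer.
    foreground_mask = subtractor.apply(frame)
cap.release()
```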
 
Electronic stabilization. Another issue with outdoor surveillance is that wind and the other elements can cause camera vibration. That can confuse analytics that are trying to pick up motion.
 
To combat this issue, SightLogix electronically stabilizes its cameras. “By electronically stabilizing, you now don’t have to tune back the sensitivity. You can electronically stabilize and get all that information directly from the image of targets that are moving within the scene,” says Romanowich.
 
The reason the SightLogix cameras can electronically stabilize is that they have very high processing power on the cameras. “The processors are so powerful and so fast that they are looking at every single pixel of every frame, and they are studying the global motion of the entire image. Is it rotating? Is it moving up? Is it moving down? And it can basically then stitch it all together in real time in what they call the optical flow,” says Romanowich. From frame to frame it can actually line up the images, he notes. “It would be almost like if you had a deck of cards, and you were to throw them down, they’d be all scattered apart, but if you grab them all and...put them all together, and line them back up with your hands, it’s effectively the same idea,” he explains.
 
It does cost more to have very high processing power on a camera. Romanowich says the SightLogix cameras have the equivalent of five servers’ worth of processing power.
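 
A much-simplified sketch of electronic stabilization (standard OpenCV phase correlation to estimate the global frame-to-frame shift, not SightLogix’s optical-flow implementation, which also handles rotation):

```python
import cv2
import numpy as np

# Simplified electronic stabilization: measure the global translation
# between consecutive grayscale frames and shift the new frame back into
# alignment. The system described above also estimates rotation and uses
# dense optical flow; this handles only side-to-side and up-and-down shake.

def stabilize(prev_gray: np.ndarray, curr_gray: np.ndarray) -> np.ndarray:
    (dx, dy), _response = cv2.phaseCorrelate(
        prev_gray.astype(np.float32), curr_gray.astype(np.float32)
    )
    h, w = curr_gray.shape
    undo_shift = np.float32([[1, 0, -dx], [0, 1, -dy]])   # translate back by the measured shake
    return cv2.warpAffine(curr_gray, undo_shift, (w, h))
```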
 
Similarly, cloud movement, snow, and rain can make the analytic think there are changes in the scene, which can make it hard to detect a person amid those changes. VideoIQ deals with this issue by including about 250,000 algorithms of what people look like so that the technology can identify an individual in the frame.

Best Practices

Intelligent video analytics purveyors have been on a public relations mission over the last few years. After much overhype and customer disappointment, companies that offer analytics are getting back to basics and trying to provide customers with the analytics they need (not necessarily the analytics they think they want) and to help them use the tools more effectively.
 
While it may not be possible to eliminate all false alarms, there are ways to decrease the number of alarms and to get more out of the analytics. In many cases, the most effective way to use analytics is in conjunction with other tools. For example, analytics can be combined with thermal imagers, as mentioned earlier, or other sensors like fence-shake alarms.
 
When the budget allows, redundancy can be set up wherein several triggers would have to go off for a video management system to consider a specific event an alarm, says Jessica Clark, executive vice president of Sigma Surveillance, which designs systems that combine various types of sensors and analytic tools to keep false alarms low. For example, a system might combine thermal cameras with sensors and analytics. If the sensor detects something, the analytic can provide further data about what it might be.
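 
A minimal sketch of that kind of correlation rule, assuming (purely for illustration) that an alarm requires a fence-shake hit and an analytic detection within 30 seconds of each other; the window and the sensor pairing are not drawn from any particular video management system:

```python
from datetime import datetime, timedelta

# Illustrative multi-trigger rule: raise an alarm only when a fence-shake
# sensor and a video-analytic detection fire close together in time.
# The 30-second window is a made-up example value.
CORRELATION_WINDOW = timedelta(seconds=30)

def is_alarm(fence_hits: list[datetime], analytic_hits: list[datetime]) -> bool:
    return any(
        abs(f - a) <= CORRELATION_WINDOW
        for f in fence_hits
        for a in analytic_hits
    )

fence = [datetime(2013, 6, 1, 2, 14, 5)]
video = [datetime(2013, 6, 1, 2, 14, 20)]
print(is_alarm(fence, video))   # True -- both triggers fired within the window
```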
 
“In many high-risk critical-infrastructure power-generating utility-type of environments, we’re seeing a layered approach to security. So whether it’s ground sensors or fence-shake sensors, thermal-imaging cameras and video analytics, you’re typically seeing in those deployments multiple types of sensors as fail-safes,” concurs Whiteman.
 
Field of view. With the higher pixel counts in newer high-definition cameras, installations can be designed to use fewer cameras that take in a wider field of view, the theory being that the user can zoom in more given the higher-resolution image. Not everyone thinks that this is the best way to use analytics, however, because it sometimes forces the analytic to try to interpret too large a scene.
 
Sean Ahrens, CPP, project manager with Aon Corporation, recommends having a more narrowly defined field of view to make analytics more effective. “The wider the field of view, the more objects, the less the video analytics is going to be effective. The more defined the field of view, now we’ve got opportunities to detect patterns,” says Ahrens.
 
Ahrens and others also advise that if there is a particular area you are concerned about, such as a fence line, you should do what you can to clear obstructions away from it. “The key thing is about identifying fields of view that are clear of obstructions, that are clear of opportunities for false alarm, such as...a flag blowing,” says Ahrens. He notes that the system can be trained to filter some things out, but that takes time.
 
Whiteman agrees. “If you want to detect an intruder hopping a fence, you typically would want to have a clear shot of five feet roughly on each side of the fence. Where an analytic can get into trouble is if that fence is crowded by trees and brush, and an intruder hops the fence, the algorithm really didn’t have time to detect that individual upon the approach.”
 
Good image capture. Another important point with analytics is that the program can’t interpret information it does not get. That means the system first has to be properly configured with a good camera and the proper light so that a clear, sharp image of the scene is captured. It doesn’t matter what the analytic is if the camera image is not good.
 
Positioning. When designing the system, it’s also important to consider placement both with regard to lighting and how the analytic will function. Romanowich says that one good way to measure the real-world efficiency of a camera’s analytics is to see how it works when a target is moving toward the camera. It is easier for the system to detect an intruder walking across the path of a camera due to all of the motion. He says it’s important for manufacturers and integrators to be honest and specific about what the camera’s analytics can detect and what the range of detection will be. And that data can then be factored into how the system is designed for maximum performance in the field.
 
Height and distance are also factors to consider when placing cameras outside, and that’s especially important for analytics, says Whiteman, who explains that the system designer must consider the angle of view that will give the camera the proper perspective for viewing, say, a person approaching from 50 feet away.
 
Whiteman says that general surveillance cameras might be installed anywhere from eight to 10 feet up, but his company’s analytics use a three-dimensional depth perspective and require a higher mount for a comparable distance to differentiate between people, animals, vehicles, and other objects. Whiteman recommends a minimum mounting height of 12 to 15 feet for DVTel’s analytics camera to get a field-of-view perspective that might go out to 250 feet.
 
“We typically don’t want to be looking straight down at an object. And that’s what we mean by perspective,” says Whiteman. “So we typically like about a 20 degree camera angle to the target.”
 
Users must be familiar with the camera’s abilities so that they can determine the proper height and angle to achieve their objectives. However, Romanowich stresses that a large space directly underneath the camera can end up remaining unsurveilled. The system’s designer and installer should test different angles to decrease the space that is not visible to the camera, and they may need another camera positioned elsewhere to capture that space.
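 
The height-and-angle trade-offs above, including the blind spot directly beneath the camera, come down to simple trigonometry. A rough sketch (the 15-foot mounting height and 50-foot target distance are figures quoted above; the 20-degree lower field-of-view edge is an illustrative input, not a vendor specification):

```python
import math

# Plain trigonometry for mounting-height decisions; illustrative only.

def angle_to_target_deg(mount_height_ft: float, distance_ft: float) -> float:
    """Angle below horizontal from the camera down to a ground-level target."""
    return math.degrees(math.atan2(mount_height_ft, distance_ft))

def blind_spot_radius_ft(mount_height_ft: float, lower_fov_edge_deg: float) -> float:
    """Radius of ground directly under the camera that falls below the
    bottom edge of the field of view (a steeper edge shrinks the blind spot)."""
    return mount_height_ft / math.tan(math.radians(lower_fov_edge_deg))

print(round(angle_to_target_deg(15, 50), 1))     # ~16.7 degrees to a person 50 feet out
print(round(blind_spot_radius_ft(15, 20.0), 1))  # ~41.2 feet unseen if the view bottoms
                                                 # out 20 degrees below horizontal
```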
 
By following these types of best practices in designing and installing outdoor systems, companies are getting better performance with analytics, notes Bill Bozeman, CEO of PSA Security Network. “Returns were as high as 60 percent two or three years ago. Our returns right now are minuscule; they’re approaching every other product we sell,” he says.
 
As various tools and offerings for outdoor surveillance improve, it’s still most important for integrators and end users to work together in finding the best offering for each specific installation. “There’s nothing like collaboration, direct collaboration with your customers,” says Clark. “That will always make you successful, if you have a good partnership with your customers, if you go into it not as a job but as a partnership in security.”
 

Laura Spadanuta is an associate editor at Security Management.