
Automated Observer Siting on Terrain, Showing Intervisibility

Intro

Note: This page is somewhat out of date; the cited papers and talks are more recent.

This page demonstrates some results from my suite of programs for selecting a group of observers to cover as much as possible of an elevation cell. The unique feature of this work is the speed at which it can process large cells while maintaining intervisibility. We are now building on this efficient base to analyze the effects of algorithm and data errors on the correctness of the computed joint observer viewsheds.

Consider a terrain elevation database and an observer, O. Define the viewshed as the terrain visible from O within some distance R of O. The observer might be situated at a certain height above ground level, and might also be looking for targets at a certain height above the local ground. Also, define the visibility index of O as the fraction of the points within distance R of O that are visible from O. Our work combines a fast viewshed algorithm with an approximate visibility index algorithm to site multiple observers so as to jointly cover as much terrain as possible.
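To make these definitions concrete, here is a minimal brute-force sketch (not our production code, which is far more efficient): a straight-line line-of-sight test on a square elevation grid, and the exact visibility index it induces for one observer. The Grid type, the function names, and the step-by-step sight-line walk are illustrative assumptions, not our actual data structures or algorithms.

    #include <algorithm>
    #include <cmath>
    #include <cstdlib>
    #include <vector>

    struct Grid {
        int n;                      // the cell is n x n postings
        std::vector<float> elev;    // row-major elevations
        float at(int r, int c) const { return elev[r * n + c]; }
    };

    // Walk the sight line from the observer (orow,ocol) to the target (trow,tcol)
    // in small steps; the target is visible if no intermediate posting rises
    // above the line joining observer and target (each raised by its height).
    bool visible(const Grid& g, int orow, int ocol, int trow, int tcol,
                 float hObs, float hTgt) {
        float z0 = g.at(orow, ocol) + hObs;
        float z1 = g.at(trow, tcol) + hTgt;
        int steps = std::max(std::abs(trow - orow), std::abs(tcol - ocol));
        for (int s = 1; s < steps; ++s) {
            float t = float(s) / float(steps);
            int r = int(std::lround(orow + t * (trow - orow)));
            int c = int(std::lround(ocol + t * (tcol - ocol)));
            if (g.at(r, c) > z0 + t * (z1 - z0)) return false;  // terrain blocks the line
        }
        return true;
    }

    // Exact visibility index: the fraction of postings within distance R
    // (in postings) of the observer that are visible from it.
    double visibilityIndex(const Grid& g, int orow, int ocol, int R,
                           float hObs, float hTgt) {
        long seen = 0, total = 0;
        for (int r = std::max(0, orow - R); r <= std::min(g.n - 1, orow + R); ++r)
            for (int c = std::max(0, ocol - R); c <= std::min(g.n - 1, ocol + R); ++c) {
                long dr = r - orow, dc = c - ocol;
                if (dr * dr + dc * dc > long(R) * R) continue;  // outside the disk of radius R
                ++total;
                if (visible(g, orow, ocol, r, c, hObs, hTgt)) ++seen;
            }
        return total ? double(seen) / double(total) : 0.0;
    }

A viewshed is simply the set of postings for which visible() returns true; our programs compute it much faster than this posting-by-posting test, but the definition is the same.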

This multiple-observer case is particularly interesting and complex, and has many applications. A cell phone provider wishes to install multiple towers so that at least one tower is visible (in a radio sense) from every place a customer's cellphone might be. Here, the identities of the observers with the highest visibility indices are of more interest than their exact visibility indices, or than the visibility indices of all observers. One novel future application of siting radio transmitters will occur when the moon is settled. The moon has no ionosphere to reflect signals, and no stable satellite orbits. The choices for long-range communication would seem to be either a lot of fiber optic cable or many relay towers. Siting the latter is the multiple-observer visibility problem.

As another example, a military planner needs to place observers so that there is nowhere to hide that is not visible from at least one of them. This leads to a corollary application, where the other side's planner may want to analyze the first side's observers to find places to hide. In that case, the problem is to optimize the targets' locations instead of the observers'.

Again, a planner for a scenic area may treat each place where a tourist might stand as an observer, and then want to locate ugly infrastructure, such as work yards, at relatively hidden sites. We may wish to site a forest clearcut so that it is invisible to observers driving on a highway sited to give a good view. Finally, an architect may be trying to site a new house while following the planning board's instruction: "You can have a view, but you can't be the view."

Speed of execution on large datasets is more important than may be apparent. Many prototype implementations, altho demonstrated on small datasets, do not scale up well. Some preliminary published algorithms may even take exponential time if they perform a naive search. Therefore, we strive for the best time possible.

In addition, large datasets may contain cases that did not occur in the small test sets and that require tedious special-case programming by the designer. In a perfect software development process, all such cases would have been theoretically analyzed and treated. However, in the real world, testing on the largest available datasets increases our confidence in the program's correctness.

Next, a large enough quantitative increase in execution speed leads to a qualitative increase in what we can do. Only if visibility can be computed efficiently can it be used as a subroutine that is called many times, perhaps as part of a search, to optimize the number of observers. This becomes more important when a more realistic function is being optimized, such as the total cost. E.g., for radio towers, there may be a tradeoff between a few tall and expensive towers and many short and cheap ones. Alternatively, certain tower locations may be more expensive because of the need to build a road. We may even wish to add redundancy so that every possible target is visible from at least two observers. In all these cases, where a massive search of the solution space is required, success depends on each query being as fast as possible.

Finally, altho the size of available data is growing quickly, it is not necessarily true that available computing power is keeping pace. There is a military need to offload computations to small portable devices, such as a Personal Digital Assistant (PDA). A PDA's computation power is limited by its battery, since, approximately, for a given silicon technology, each elemental computation consumes a fixed amount of energy. Batteries are not getting better very quickly; increasing the processor's cycle speed just runs down the battery faster.

There is also a compounding effect between efficient time and efficient space. Smaller data structures fit into cache better, and so page less, which reduces time. The point of all this is that efficient software is at least as important now as ever.

Our computations are performed on a full-resolution cell. That is, on a 1201x1201 cell, the visibility indices of all 1201² points are estimated. For each tentative observer, as accurate a viewshed as possible is computed. This point is important since some other systems appear to compute low-resolution approximations to viewsheds.
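For intuition only, here is one way that estimation step could be approximated: test a fixed number of random targets inside the disk of radius R and report the hit fraction. This reuses the Grid and visible() helpers from the earlier sketch; the sampling scheme and sample count are illustrative assumptions, not the algorithm our programs actually use.

    #include <random>

    // Approximate visibility index of one observer, estimated from a fixed
    // number of random target postings inside the disk of radius R.
    double approxVisibilityIndex(const Grid& g, int orow, int ocol, int R,
                                 float hObs, float hTgt, int samples = 32) {
        std::mt19937 rng(1);                           // fixed seed for repeatability
        std::uniform_int_distribution<int> offset(-R, R);
        int seen = 0, tried = 0;
        while (tried < samples) {
            int dr = offset(rng), dc = offset(rng);
            if (dr * dr + dc * dc > R * R) continue;   // rejected: outside the disk
            int r = orow + dr, c = ocol + dc;
            if (r < 0 || r >= g.n || c < 0 || c >= g.n) continue;
            ++tried;
            if (visible(g, orow, ocol, r, c, hObs, hTgt)) ++seen;
        }
        return double(seen) / double(samples);
    }

An estimate of this kind is cheap enough to run for every one of the 1201² postings; full viewsheds are then computed only for the most promising tentative observers (the 1000 viewsheds mentioned in the next section).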

Our system scales up well. Elevation cells of resolution over 2000x2000 are no problem.

Purpose of This Page

The purpose of this demo page is to demonstrate our capability in observer siting. This code is compact, under 1000 lines of executable C++. It is also very efficient, using about 2 CPU minutes to site the observers in each of the two examples below (with or without intervisibility). The execution time increases with the radius of interest. That time includes computing the 1201² estimated visibility indices, computing 1000 viewsheds, and greedily selecting the best observers by unioning many thousands of viewshed combinations and computing their areas.
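The greedy step just mentioned can be pictured with a short sketch: treat each tentative observer's viewshed as a bitmap over the cell and, in each round, keep the candidate that newly covers the most postings. This is a plain, unoptimized illustration with made-up names, not the union-and-count machinery our program actually uses.

    #include <cstdint>
    #include <vector>

    using Viewshed = std::vector<uint8_t>;   // one entry per posting; 1 = visible

    // Greedily pick up to numWanted observers from the candidate viewsheds,
    // each round taking the one that adds the most not-yet-covered postings.
    std::vector<int> greedySite(const std::vector<Viewshed>& cand, int numWanted) {
        if (cand.empty()) return {};
        Viewshed covered(cand[0].size(), 0);
        std::vector<int> chosen;
        while ((int)chosen.size() < numWanted) {
            int best = -1;
            long bestGain = 0;
            for (int i = 0; i < (int)cand.size(); ++i) {
                long gain = 0;
                for (size_t p = 0; p < covered.size(); ++p)
                    if (cand[i][p] && !covered[p]) ++gain;   // newly covered posting
                if (gain > bestGain) { bestGain = gain; best = i; }
            }
            if (best < 0) break;                             // no candidate adds anything
            for (size_t p = 0; p < covered.size(); ++p)
                covered[p] |= cand[best][p];                 // union in the winner's viewshed
            chosen.push_back(best);
        }
        return chosen;
    }

The area of the cumulative viewshed at any stage is just the count of set entries in covered, which corresponds to the growing visible area shown in the montages below.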

To put this efficiency in perspective: producing the images shown here from the cumulative viewshed files, using the Netpbm and ImageMagick image processing packages, takes ten times as long as it takes us to compute those cumulative viewshed files in the first place.

Lake Champlain W Example

The sample cell used here is the USGS level-1 Lake Champlain West DEM. Its elevations range from the highest point in New York State down to a large lake only about 100 feet above sea level. Here is the cell (the image is scaled down by half to fit the display). North is to the right. The shading was done with POV-Ray.

This test uses a radius of interest of 100 postings, and an observer and target height of 30 meters.

The time on a dual Xeon to site the observers was about 2 minutes. Actually, I do most computations on my laptop.

These images are a montage of some of the viewsheds of tentative observers. Observe the level of detail. Compare this to the viewsheds as determined by other systems.

Since the montage images are scaled down, here is one viewshed enlarged to twice full resolution, to show the detail.

Successively Adding Observers, with Intervisibility

The following is a montage of the increasing visible area in the cell as more and more observers are sited. To save space, we generally show a new image only after every 3 new observers.

We added a new constraint: each new observer must be visible to at least one observer that has already been added. Therefore all the observers can communicate with line-of-sight radio.
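As a sketch of how this constraint might be layered onto a greedy loop like the one above: after the first observer, a candidate is eligible only if some already-sited observer has line of sight to it. The pairwise intervisible matrix here is an assumed precomputed point-to-point test between tower tops (e.g. visible() from the first sketch with both heights set to the tower height), not a structure taken from our code.

    #include <cstdint>
    #include <vector>

    // A candidate is eligible if it is the first observer sited, or if some
    // already-sited observer can see it. intervisible[i][j] is an assumed
    // precomputed pairwise line-of-sight matrix over the candidate sites.
    bool eligible(int candIdx, const std::vector<int>& chosen,
                  const std::vector<std::vector<uint8_t>>& intervisible) {
        if (chosen.empty()) return true;                  // first observer: unconstrained
        for (int sited : chosen)
            if (intervisible[sited][candIdx]) return true;
        return false;
    }

Inside the greedy loop, a candidate that fails this test is simply skipped for that round; it may become eligible later, once a nearby observer has been sited. This is why the constrained runs need somewhat more observers to cover the same area.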

The images also show black dots at the observer positions, circles at the radius of interest around each included observer, and heavy black lines joining each pair of mutually visible observers.

Next, here is one cumulative viewshed at its original resolution.

Next, here is a video showing the observers being added.

Successively Adding Observers, Without Intervisibility

If we don't want to enforce the intervisibility constraint, then fewer observers are necessary to cover the same area. Here is a montage showing how the observers are now added.

We see that there is still a little incidental intervisibility; indeed, the image gets darker toward the end because of the lines showing the intervisibility relations. Next, here is a video showing the observers being added.

Efficiency

The system is very fast (a few minutes for a level-1 DEM). Indeed, generating the images took several times longer than siting the observers.

The system takes advantage of multiple processors. On a dual processor Xeon system with hyperthreading, 4-way parallelism is used for most of the computations (altho a hyperthreaded processor runs only about 30% faster than a non-hyperthreaded one.)

Efficiency is more important than some people realize. A sufficient quantitative improvement in speed leads to a qualitative change in what can be done. More experiments become possible. The efficient system can become a base on which to build new, higher level, tools.

Current Work

We are now studying the effect of data and algorithm errors on the correctness of the computed observers' joint viewshed. Let C be the cumulative viewshed that can be seen by O, the optimal set of observers chosen using perfect data and algorithms. However, in the real world, because of the algorithm and data approximations, our program has selected O', a suboptimal set of observers. The important question is, How much worse is O' than O? That is, let C' be the cumulative viewshed of O'. How much smaller is C' than C?

Correctly answering this question requires careful attention to definitions. Altho O' has been computed using the approximate data and algorithms, the proper approach is to compute C' from O' using the best available data and algorithms.
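The comparison itself then reduces to an area ratio over the two cumulative viewsheds, both rendered with the best available data and algorithms. A trivial sketch, with C and C' stored as bitmaps as above:

    #include <cstdint>
    #include <vector>

    // Fraction of the ideal coverage C that the approximately chosen
    // observers O' actually achieve (their cumulative viewshed C').
    double coverageRatio(const std::vector<uint8_t>& C,
                         const std::vector<uint8_t>& Cprime) {
        long areaC = 0, areaCprime = 0;
        for (size_t p = 0; p < C.size(); ++p) {
            if (C[p]) ++areaC;
            if (Cprime[p]) ++areaCprime;
        }
        return areaC ? double(areaCprime) / double(areaC) : 1.0;
    }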

A related question is, How good is a completely random set of observers? The preliminary answer is, Surprisingly good. This sounds bad, but is, in fact, good. It means that good and fast approximate algorithms are possible.

Uncertainty and Path Planning are Other Possible Extensions

Various extensions are feasible. They include these:

  1. Uncertainty determination, and sensitivity analysis to errors in the input and algorithm.
  2. Planning a path between two points, either to maximize or to minimize the visibility.
  3. Determination of the integrated visibility, summed over the observer's traversal of that path.
  4. Connectivity analysis, since many small and separate hidden areas may be less serious than one large one.

For More Information