Invited paper for the Journal of Robotic Systems, Special Issue on Mobile Robots, Vol. 14, No. 4, pp. 231-249.

Mobile Robot Positioning: Sensors and Techniques

by J. Borenstein (1), H.R. Everett (2), L. Feng (3), and D. Wehe (4)

ABSTRACT

Exact knowledge of the position of a vehicle is a fundamental problem in mobile robot applications. In search of a solution, researchers and engineers have developed a variety of systems, sensors, and techniques for mobile robot positioning. This paper provides a review of relevant mobile robot positioning technologies. The paper defines seven categories for positioning systems: 1. Odometry; 2. Inertial Navigation; 3. Magnetic Compasses; 4. Active Beacons; 5. Global Positioning Systems; 6. Landmark Navigation; and 7. Model Matching. The characteristics of each category are discussed, and examples of existing technologies are given for each category. The field of mobile robot navigation is active and vibrant, with more great systems and ideas being developed continuously. For this reason the examples presented in this paper serve only to represent their respective categories; they do not represent a judgment by the authors. Many ingenious approaches can be found in the literature, although, for reasons of brevity, not all could be cited in this paper.

1) (Corresponding Author) The University of Michigan, Advanced Technologies Lab, 1101 Beal Avenue, Ann Arbor, MI 48109-2110, Ph.: 313-763-1560, Fax: 313-944-1113, Email: johannb@umich.edu
2) Naval Command, Control, and Ocean Surveillance Center, RDT&E Division 5303, 271 Catalina Boulevard, San Diego, CA 92152-5001, Email: Everett@NOSC.MIL
3) The University of Michigan, Advanced Technologies Lab, 1101 Beal Avenue, Ann Arbor, MI 48109-2110, Email: Feng@engin.umich.edu
4) The University of Michigan, Dept. of Nuclear Engineering and Radiological Sciences, 239 Cooley Bldg., Ann Arbor, MI 48109, Email: dkw@umich.edu
1. INTRODUCTION

This paper surveys the state of the art in sensors, systems, methods, and technologies that aim at finding a mobile robot's position in its environment. In surveying the literature on this subject, it became evident that a benchmark-like comparison of different approaches is difficult because of the lack of commonly accepted test standards and procedures. The research platforms used differ greatly, and so do the key assumptions used in different approaches. Further challenges arise from the fact that different systems are at different stages in their development. For example, one system may be commercially available, while another system, perhaps with better performance, has been tested only under a limited set of laboratory conditions. For these reasons we generally refrain from comparing or even judging the performance of different systems or techniques. Furthermore, we have not tested most of the systems and techniques, so the results and specifications given in this paper are derived from the literature.

Finally, we should point out that a large body of literature related to navigation of aircraft, spacecraft, or even artillery addresses some of the problems found in mobile robot navigation (e.g., [Farrell, 1976; Battin, 1987]). However, we have focused our survey only on literature pertaining directly to mobile robots, because sensor systems for mobile robots must usually be relatively small, lightweight, and inexpensive. Similarly, we are not considering Automated Guided Vehicles (AGVs) in this article. AGVs use magnetic tape, buried guide wires, or painted stripes on the ground for guidance. These vehicles are thus not freely programmable and they cannot alter their path in response to external sensory input (e.g., obstacle avoidance). However, the interested reader may find a survey of guidance techniques for AGVs in [Everett, 1995].

Perhaps the most important result from surveying the literature on mobile robot positioning is that, to date, there is no truly elegant solution for the problem. The many partial solutions can roughly be categorized into two groups: relative and absolute position measurements. Because of the lack of a single good method, developers of mobile robots usually combine two methods, one from each group. The two groups can be further divided into the following seven categories:

I: Relative Position Measurements (also called Dead-reckoning)
1. Odometry
2. Inertial Navigation

II: Absolute Position Measurements (Reference-based systems)
3. Magnetic Compasses
4. Active Beacons
5. Global Positioning Systems
6. Landmark Navigation
7. Model Matching

2. REVIEW OF SENSORS AND TECHNIQUES

In this section we review some of the sensors and techniques used in mobile robot positioning. Examples of commercially available systems or well-documented research results are also given.
2.1 Odometry

Odometry is the most widely used navigation method for mobile robot positioning; it provides good short-term accuracy, is inexpensive, and allows very high sampling rates. However, the fundamental idea of odometry is the integration of incremental motion information over time, which leads inevitably to the unbounded accumulation of errors. Specifically, orientation errors will cause large lateral position errors, which increase proportionally with the distance traveled by the robot. Despite these limitations, most researchers agree that odometry is an important part of a robot navigation system and that navigation tasks will be simplified if odometric accuracy can be improved. For example, Cox [1991], Byrne et al. [1992], and Chenavier and Crowley [1992] propose methods for fusing odometric data with absolute position measurements to obtain more reliable position estimation.

Odometry is based on simple equations (see [Borenstein et al., 1996a]), which hold true when wheel revolutions can be translated accurately into linear displacement relative to the floor. However, in case of wheel slippage and some other more subtle causes, wheel rotations may not translate proportionally into linear motion. The resulting errors can be categorized into one of two groups: systematic errors and non-systematic errors [Borenstein and Feng, 1996]. Systematic errors are those resulting from kinematic imperfections of the robot, for example, unequal wheel diameters or uncertainty about the exact wheelbase. Non-systematic errors are those that result from the interaction of the floor with the wheels, e.g., wheel slippage or bumps and cracks. Typically, when a mobile robot system is installed with a hybrid odometry/landmark navigation system, the density in which the landmarks must be placed in the environment is determined empirically, based on the worst-case systematic errors. Such systems are likely to fail when one or more large non-systematic errors occur.
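To make this concrete, the sketch below shows the basic dead-reckoning update for a differential-drive robot. It is a minimal illustration of the kind of "simple equations" referred to above, not the exact formulation of [Borenstein et al., 1996a]; all parameter values and variable names are assumptions made for the example.

```python
import math

# Assumed example parameters (not from the paper): a hypothetical
# differential-drive robot with encoders on both drive wheels.
MM_PER_PULSE = 0.012   # linear travel per encoder pulse [mm]
WHEELBASE_MM = 340.0   # distance between the two drive wheels [mm]

def odometry_update(x, y, theta, pulses_left, pulses_right):
    """Dead-reckoning update from one sampling interval's encoder counts.

    Returns the new pose (x, y in mm; theta in radians). Errors in
    MM_PER_PULSE or WHEELBASE_MM act as *systematic* odometry errors;
    wheel slippage corrupts the pulse counts and acts as a
    *non-systematic* error.
    """
    d_left = pulses_left * MM_PER_PULSE    # left wheel travel [mm]
    d_right = pulses_right * MM_PER_PULSE  # right wheel travel [mm]
    d_center = (d_left + d_right) / 2.0            # robot travel [mm]
    d_theta = (d_right - d_left) / WHEELBASE_MM    # heading change [rad]
    # First-order integration; adequate for small per-sample motions.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta
```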
2.1.1 Measurement of Odometry Errors

One important but rarely addressed difficulty in mobile robotics is the quantitative measurement of odometry errors. The lack of well-defined measuring procedures for the quantification of odometry errors results in poor calibration of mobile platforms and incomparable reports on odometric accuracy in scientific communications. To overcome this problem, Borenstein and Feng [1995] developed a method for quantitatively measuring systematic odometry errors and, to a limited degree, non-systematic odometry errors. This method, called University of Michigan Benchmark (UMBmark), requires that the mobile robot be programmed to follow a pre-programmed square path of 4×4 m side length with four on-the-spot 90-degree turns. This run is to be performed five times in clockwise (cw) and five times in counter-clockwise (ccw) direction.

When the return position of the robot as computed by odometry is compared to the actual return position, an error plot similar to the one shown in Figure 1 will result. The results of Figure 1 can be interpreted as follows:

- The stopping positions after cw and ccw runs are clustered in two distinct areas.
- The distribution within the cw and ccw clusters is the result of non-systematic errors. However, Figure 1 shows that in an uncalibrated vehicle, traveling over a reasonably smooth concrete floor, the contribution of systematic errors to the total odometry error can be notably larger than the contribution of non-systematic errors.

[Figure 1: Typical results from running UMBmark (a square path run five times in cw and five times in ccw directions) with an uncalibrated TRC LabMate robot.]

The asymmetry of the centers of gravity in cw and ccw results from the dominance of two types of systematic errors, collectively called Type A and Type B [Borenstein and Feng, 1996]. Type A errors are defined as orientation errors that reduce (or increase) the amount of rotation of the robot during the square path experiment in both cw and ccw direction. By contrast, Type B errors reduce (or increase) the amount of rotation when traveling in cw but have the opposite effect when traveling in ccw direction. One typical source for Type A errors is the uncertainty about the effective wheelbase; a typical source for Type B errors is unequal wheel diameters.

After conducting the UMBmark experiment, a single numeric value that expresses the odometric accuracy (with respect to systematic errors) of the tested vehicle can be found from [Borenstein and Feng, 1996]:

E_{max,syst} = \max(r_{c.g.,cw}\,;\; r_{c.g.,ccw})    (1)

where

r_{c.g.,cw} = \sqrt{(x_{c.g.,cw})^2 + (y_{c.g.,cw})^2}

and

r_{c.g.,ccw} = \sqrt{(x_{c.g.,ccw})^2 + (y_{c.g.,ccw})^2} .

Here (x_{c.g.,cw}, y_{c.g.,cw}) and (x_{c.g.,ccw}, y_{c.g.,ccw}) are the centers of gravity of the cw and ccw clusters of return-position errors.

Based on the UMBmark test, Borenstein and Feng [1995; 1996] developed a calibration procedure for reducing systematic odometry errors in differential-drive vehicles. In this procedure the UMBmark test is performed five times in cw and ccw direction to find x_{c.g.,cw} and x_{c.g.,ccw}. From a set of equations defined in [Borenstein and Feng, 1995; 1996], two calibration constants are found that can be included in the basic odometry computation of the robot. Application of this procedure to several differential-drive platforms resulted consistently in a 10- to 20-fold reduction in systematic errors. Figure 2 shows the result of a typical calibration session. The results of many calibration sessions with TRC's LabMate robots averaged E_{max,syst} = 330 mm for uncalibrated vehicles and E_{max,syst} = 24 mm after calibration.

[Figure 2: Position errors after completion of the bi-directional square-path experiment (4×4 m), before and after correction.]
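Equation (1) translates directly into code: average the cw and ccw return-position errors separately, then take the larger of the two cluster centers' distances from the origin. The sketch below is a minimal transcription; the input format is our assumption.

```python
import math

def umbmark_emax_syst(errors_cw, errors_ccw):
    """Compute E_max,syst per Eq. (1).

    errors_cw, errors_ccw: lists of (x, y) return-position errors [mm]
    (odometry-computed return position minus actual return position),
    one per run, e.g. five cw runs and five ccw runs.
    """
    def r_cg(errors):
        # Distance from the origin to one cluster's center of gravity.
        x_cg = sum(e[0] for e in errors) / len(errors)
        y_cg = sum(e[1] for e in errors) / len(errors)
        return math.hypot(x_cg, y_cg)

    return max(r_cg(errors_cw), r_cg(errors_ccw))
```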
2.1.2 Measurement of Non-Systematic Errors

Borenstein and Feng [1995] also propose a method for measuring non-systematic errors. This method, called extended UMBmark, can be used for comparison of different robots under similar conditions, although the measurement of non-systematic errors is less useful because it depends strongly on the floor characteristics. However, using a set of well-defined floor irregularities and the UMBmark procedure, the susceptibility of a differential-drive platform to non-systematic errors can be expressed. Experimental results from six different vehicles, which were tested for their susceptibility to non-systematic errors by means of the extended UMBmark test, are presented in Borenstein and Feng [1994].

Borenstein [1995] developed a method for detecting and rejecting non-systematic odometry errors in mobile robots. With this method, two collaborating platforms continuously and mutually correct their non-systematic (and certain systematic) odometry errors, even while both platforms are in motion. A video entitled "CLAPPER" showing this system in operation is included in [Borenstein et al., 1996b] and in [Borenstein, 1995v]. A commercial version of this robot, shown in Figure 3, is now available from [TRC] under the name "OmniMate." Because of its internal odometry error correction, the OmniMate is almost completely insensitive to bumps, cracks, or other irregularities on the floor [Borenstein, 1995; 1996].

[Figure 3: The OmniMate is a commercially available fully omnidirectional platform. The two linked "trucks" mutually correct their odometry errors.]

2.2 Inertial Navigation

Inertial navigation uses gyroscopes and accelerometers to measure rate of rotation and acceleration, respectively. Measurements are integrated once (or twice, for accelerometers) to yield position. Inertial navigation systems have the advantage that they are self-contained, that is, they don't need external references. However, inertial sensor data drift with time because of the need to integrate rate data to yield position; any small constant error increases without bound after integration. Inertial sensors are thus mostly unsuitable for accurate positioning over an extended period of time.
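This unbounded error growth can be illustrated with a few lines of arithmetic: a constant rate-gyro bias integrates into a linearly growing heading error, while a constant accelerometer bias double-integrates into a quadratically growing position error. The bias values below are illustrative assumptions (the gyro figure matches the bias drift quoted for the fiber-optic gyro in Table I below).

```python
# Sketch: growth of inertial errors from constant sensor biases
# (illustrative values, not measured data).
GYRO_BIAS = 0.005          # rate-gyro bias [deg/s]
ACCEL_BIAS = 0.01 * 9.81   # accelerometer bias [m/s^2], e.g. 10 mg

for t in (10.0, 60.0, 600.0):                  # elapsed time [s]
    heading_err = GYRO_BIAS * t                # one integration -> linear growth
    position_err = 0.5 * ACCEL_BIAS * t * t    # two integrations -> quadratic growth
    print(f"t={t:6.0f} s: heading error {heading_err:7.2f} deg, "
          f"position error {position_err:10.1f} m")
```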
2.2.1 Accelerometers

Test results from the use of accelerometers for mobile robot navigation have been generally poor. In an informal study at the University of Michigan it was found that there is a very poor signal-to-noise ratio at lower accelerations (i.e., during low-speed turns). Accelerometers also suffer from extensive drift, and they are sensitive to uneven ground, because any disturbance from a perfectly horizontal position will cause the sensor to detect a component of the gravitational acceleration g. One low-cost inertial navigation system aimed at overcoming the latter problem included a tilt sensor [Barshan and Durrant-Whyte, 1993; 1995]. The tilt information provided by the tilt sensor was supplied to the accelerometer to cancel the gravity component projecting on each axis of the accelerometer. Nonetheless, the results obtained from the tilt-compensated system indicate a position drift rate of 1 to 8 cm/s (0.4 to 3.1 in/s), depending on the frequency of acceleration changes. This is an unacceptable error rate for most mobile robot applications.

Table I: Selected specifications for the Andrew Autogyro Navigator. (Courtesy of [Andrew Corp].)

  Parameter                                    Value            Units
  Input rotation rate                          ±100             °/s
  Instantaneous bandwidth                      100              Hz
  Bias drift (at stabilized temperature), RMS  0.005            °/s rms
                                               18               °/hr rms
  Temperature range, operating                 -40 to +75       °C
  Temperature range, storage                   -50 to +85       °C
  Warm-up time                                 1                s
  Size (excluding connector)                   115 × 90 × 41    mm
                                               4.5 × 3.5 × 1.6  in
  Weight (total)                               0.25             kg
                                               0.55             lb
  Power, analog                                <2               W
  Power, digital                               <3               W

2.2.2 Gyroscopes

Gyroscopes (also known as "rate gyros" or just "gyros") are of particular importance to mobile robot positioning because they can help compensate for the foremost weakness of odometry: in an odometry-based positioning method, any small momentary orientation error will cause a constantly growing lateral position error. For this reason it would be of great benefit if orientation errors could be detected and corrected immediately.

Until recently, highly accurate gyros were too expensive for mobile robot applications. For example, a high-quality inertial navigation system (INS) such as those found in a commercial airliner would have a typical drift of about 1850 meters (1 nautical mile) per hour of operation, and cost between $50K and $70K [Byrne et al., 1992]. High-end INS packages used in ground applications have shown performance of better than 0.1 percent of distance traveled, but cost in the neighborhood of $100K to $200K, while lower performance versions (i.e., one percent of distance traveled) run between $20K and $50K [Dahlin and Krantz, 1988]. However, very recently fiber-optic gyros (also called "laser gyros"), which are known to be very accurate, have fallen dramatically in price and have become a very attractive solution for mobile robot navigation.

One commercially available laser gyro is the "Autogyro Navigator" from Andrew Corp. [ANDREW], shown in Figure 4. It is a single-axis interferometric fiber-optic gyroscope (see [Everett, 1995] for technical details) based on polarization-maintaining fiber and precision fiber-optic gyroscope technology.
Technical specifications for Andrew's most recent model, the Autogyro Navigator, are shown in Table I. This laser gyro costs under $1,000 and is well suited for mobile robot navigation.

[Figure 4: The Andrew AUTOGYRO Navigator. (Courtesy of [Andrew Corp].)]

2.3 Magnetic Compasses

Vehicle heading is the most significant of the navigation parameters (x, y, and θ) in terms of its influence on accumulated dead-reckoning errors. For this reason, sensors that provide a measure of absolute heading are extremely important in solving the navigation needs of autonomous platforms. The magnetic compass is such a sensor. One disadvantage of any magnetic compass, however, is that the earth's magnetic field is often distorted near power lines or steel structures [Byrne et al., 1992]. This makes the straightforward use of geomagnetic sensors difficult for indoor applications. Based on a variety of physical effects related to the earth's magnetic field, different sensor systems are available:

- Mechanical magnetic compasses.
- Fluxgate compasses.
- Hall-effect compasses.
- Magnetoresistive compasses.
- Magnetoelastic compasses.

The compass best suited for use with mobile robot applications is the fluxgate compass. When maintained in a level attitude, the fluxgate compass will measure the horizontal component of the earth's magnetic field, with the decided advantages of low power consumption, no moving parts, tolerance to shock and vibration, rapid start-up, and relatively low cost. If the vehicle is expected to operate over uneven terrain, the sensor coil should be gimbal-mounted and mechanically dampened to prevent serious errors introduced by the vertical component of the geomagnetic field.
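When the sensor is level, heading follows from the two horizontal field components with a single arctangent. The sketch below is a minimal illustration of this computation, not any vendor's algorithm; the axis convention (x forward, y right, heading clockwise from magnetic north) and the declination handling are our assumptions.

```python
import math

def fluxgate_heading_deg(b_x, b_y, declination_deg=0.0):
    """Heading from a level two-axis fluxgate compass.

    b_x, b_y: horizontal geomagnetic field components along the
    vehicle's forward and right axes. The sensor must be level (or
    gimballed), since any tilt mixes in the vertical field component,
    which is the error source discussed above.
    declination_deg: local magnetic declination (assumed known),
    converting magnetic heading to true heading.
    Returns heading in degrees, 0..360, clockwise from north.
    """
    heading = math.degrees(math.atan2(-b_y, b_x)) + declination_deg
    return heading % 360.0
```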
Example: KVH Fluxgate Compasses

KVH Industries, Inc., Middletown, RI, offers a complete line of fluxgate compasses and related accessories, ranging from inexpensive units targeted at the individual consumer up through sophisticated systems intended for military applications [KVH]. The C100 COMPASS ENGINE shown in Figure 5 is a versatile, low-cost (less than $700) developer's kit that includes a microprocessor-controlled stand-alone fluxgate sensor subsystem based on a two-axis toroidal ring-core sensor.

[Figure 5: The C-100 fluxgate compass engine. (Courtesy of [KVH].)]

Two different sensor options are offered with the C100: (1) the SE-25 sensor, recommended for applications with a tilt range of 16 degrees, and (2) the SE-10 sensor, for applications anticipating a tilt angle of up to 45 degrees. The SE-25 sensor provides internal gimballing by floating the sensor coil in an inert fluid inside the lexan housing. The SE-10 sensor provides a two-degree-of-freedom pendulous gimbal in addition to the internal fluid suspension. The SE-25 sensor mounts on top of the sensor PC board, while the SE-10 is suspended beneath it. The sensor PC board can be separated by as much as 122 centimeters (48 in) from the detachable electronics PC board with an optional cable. Additional technical specifications are given in Table II.

Table II: Technical specifications for the KVH C-100 fluxgate compass. (Courtesy of [KVH].)

  Parameter              Value          Units
  Resolution             ±0.1           °
  Accuracy               ±0.5           °
  Repeatability          ±0.2           °
  Size                   46 × 110       mm
                         1.8 × 4.5      in
  Weight (total)         62             g
                         2.25           oz
  Power, current drain   0.04           A
  Power, supply voltage  8-18 or 18-28  V

2.4 Active Beacons

Active beacon navigation systems are the most common navigation aids on ships and airplanes, as well as on commercial mobile robot systems. Active beacons can be detected reliably and provide accurate positioning information with minimal processing. As a result, this approach allows high sampling rates and yields high reliability, but it also incurs high cost in installation and maintenance. Accurate mounting of beacons is required for accurate positioning. Two different types of active beacon systems can be distinguished: trilateration and triangulation.

2.4.1 Trilateration

Trilateration is the determination of a vehicle's position based on distance measurements to known beacon sources. In trilateration navigation systems there are usually three or more transmitters mounted at known locations in the environment and one receiver on board the robot. Conversely, there may be one transmitter on board and the receivers mounted on the walls. Using time-of-flight information, the system computes the distance between the stationary transmitters and the onboard receiver. Global Positioning Systems (GPS), discussed in Section 2.5, are an example of trilateration.
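To make the principle concrete: with three beacons at known 2-D positions and three measured ranges, subtracting the first circle equation from the other two yields a linear system in the unknown (x, y). The sketch below is a minimal, noise-free illustration of this geometry, not any particular commercial system's algorithm.

```python
import numpy as np

def trilaterate_2d(beacons, ranges):
    """Position from ranges to three known beacons (2-D, noise-free).

    beacons: array of shape (3, 2) with known beacon coordinates.
    ranges:  array of shape (3,) with measured distances to each beacon.
    Subtracting the first circle equation from the other two linearizes
    the problem; the beacons must not be collinear.
    """
    (x1, y1), (x2, y2), (x3, y3) = beacons
    r1, r2, r3 = ranges
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)   # (x, y)

# Example: beacons at (0,0), (10,0), (0,10); true position (2, 1).
# trilaterate_2d(np.array([[0, 0], [10, 0], [0, 10]]),
#                np.array([np.hypot(2, 1), np.hypot(8, 1), np.hypot(2, 9)]))
```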
2.4.2 Triangulation

In this configuration there are three or more active transmitters mounted at known locations, as shown in Figure 6. A rotating sensor on board the robot registers the angles λ1, λ2, and λ3 at which it "sees" the transmitter beacons relative to the vehicle's longitudinal axis. From these three measurements the unknown x- and y-coordinates and the unknown vehicle orientation can be computed. One problem with this configuration is that in order to be seen at distances of, say, 20 meters or more, the active beacons must be focused within a cone-shaped propagation pattern. As a result, beacons are not visible in many areas, a problem that is particularly grave because at least three beacons must be visible for triangulation.

[Figure 6: The basic triangulation problem: a rotating sensor head measures the three angles λ1, λ2, and λ3 between the vehicle's longitudinal axis and the three sources S1, S2, and S3.]

Cohen and Koss [1992] performed a detailed analysis on three-point triangulation algorithms and ran computer simulations to verify the performance of different algorithms. The results are summarized as follows:

- The Geometric Triangulation method works consistently only when the robot is within the triangle formed by the three beacons. There are areas outside the beacon triangle where the geometric approach works, but these areas are difficult to determine and are highly dependent on how the angles are defined.
- The Geometric Circle Intersection method has large errors when the three beacons and the robot all lie on, or close to, the same circle.
- The Newton-Raphson method fails when the initial guess of the robot's position and orientation is beyond a certain bound.
- The heading of at least two of the beacons was required to be greater than 90 degrees. The angular separation between any pair of beacons was required to be greater than 45 degrees.

In summary, it appears that none of the above methods alone is always suitable, but an intelligent combination of two or more methods helps overcome the individual weaknesses.
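As one illustration of the methods analyzed by Cohen and Koss, the sketch below implements a Newton-Raphson solution of the three-point triangulation problem of Figure 6. This is our minimal formulation, not their code, and, as noted above, it diverges if the initial guess is too far from the true pose.

```python
import numpy as np

def triangulate_newton(beacons, bearings, guess, iters=20):
    """Solve the three-point triangulation problem by Newton-Raphson.

    beacons:  (3, 2) array of known beacon positions S1..S3.
    bearings: (3,) array of angles lambda_i [rad] at which each beacon
              is seen, measured from the vehicle's longitudinal axis.
    guess:    initial (x, y, theta) estimate.
    """
    def wrap(a):  # wrap angle differences into (-pi, pi]
        return (a + np.pi) % (2 * np.pi) - np.pi

    x, y, theta = guess
    for _ in range(iters):
        dx, dy = beacons[:, 0] - x, beacons[:, 1] - y
        d2 = dx**2 + dy**2
        # Residuals: predicted minus measured bearing to each beacon.
        f = wrap(np.arctan2(dy, dx) - theta - bearings)
        # Jacobian of the residuals with respect to (x, y, theta).
        J = np.column_stack((dy / d2, -dx / d2, -np.ones(3)))
        dx_step = np.linalg.solve(J, -f)
        x, y, theta = x + dx_step[0], y + dx_step[1], theta + dx_step[2]
    return x, y, theta
```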
2.4.3 Specific Triangulation Systems

Because of their technical maturity and commercial availability, optical triangulation systems are widely used in mobile robotics applications. Typically these systems involve some type of scanning mechanism operating in conjunction with fixed-location references strategically placed at predefined locations within the operating environment. A number of variations on this theme are seen in practice [Everett, 1995]: (a) scanning detectors with fixed active beacon emitters, (b) scanning emitter/detectors with passive retroreflective targets, (c) scanning emitter/detectors with active transponder targets, and (d) rotating emitters with fixed detector targets.

Example: MTI Research CONAC™

A system of this type, using a predefined network of fixed-location detectors, is made by MTI Research, Inc., Chelmsford, MA [MTI]. MTI's Computerized Opto-electronic Navigation and Control (CONAC™) is a navigational referencing system employing a vehicle-mounted laser unit called STRuctured Opto-electronic Acquisition Beacon (STROAB), as shown in Figure 7. The scanning laser beam is spread vertically to eliminate critical alignment, allowing the receivers, called Networked Opto-electronic Acquisition Datums (NOADs) (see Figure 8), to be mounted at arbitrary heights, as illustrated in Figure 9. Detection of incident illumination by a NOAD triggers a response over the network to a host PC, which in turn calculates the implied angles α1 and α2. An index sensor built into the STROAB generates a rotation reference pulse to facilitate heading measurement. Indoor accuracy is on the order of centimeters or millimeters, and better than 0.1° for heading.

[Figure 7: A single STROAB beams a vertically spread laser signal while rotating at 3,000 rpm. (Courtesy of MTI Research, Inc.)]

The reference NOADs are installed at known locations throughout the area of interest. STROAB acquisition range is sufficient to allow three NOADs to cover an area of 33,000 m² if no interfering structures block the view. Additional NOADs may be employed to increase fault tolerance and minimize ambiguities when two or more robots are operating in close proximity. The optimal set of three NOADs is dynamically selected by the host PC, based on the current location of the robot and any predefined visual barriers. A short video clip showing the CONAC system in operation is included in [Borenstein et al., 1996b].

[Figure 8: Stationary NOADs are located at known positions; at least two NOADs are networked and connected to a PC. (Courtesy of MTI Research, Inc.)]

[Figure 9: The CONAC™ system employs an onboard, rapidly rotating and vertically spread laser beam, which sequentially contacts the networked detectors. (Courtesy of MTI Research, Inc.)]
2.5 Global Positioning Systems

The Global Positioning System (GPS) is a revolutionary technology for outdoor navigation. GPS was developed as a Joint Services Program by the Department of Defense. The system comprises 24 satellites (including three spares) which transmit encoded RF signals. Using advanced trilateration methods, ground-based receivers can compute their position by measuring the travel time of the satellites' RF signals, which include information about the satellites' momentary location. Knowing the exact distance from the ground receiver to three satellites theoretically allows for calculation of receiver latitude, longitude, and altitude.

The US government deliberately applies small errors in timing and satellite position to prevent a hostile nation from using GPS in support of precision weapons delivery. This intentional degradation in positional accuracy to around 100 meters (328 ft) worst case is termed selective availability (SA) [Gothard et al., 1993]. Selective availability has been on continuously (with a few exceptions) since the end of Operation Desert Storm. It was turned off during the war, from August 1990 until July 1991, to improve the accuracy of commercial hand-held GPS receivers used by coalition ground forces. On another occasion (October 1992) SA was also turned off for a brief period while the Air Force was conducting tests. Byrne [1993] conducted tests at that time to compare the accuracy of GPS with SA turned on and off. The static measurements of the GPS error as a function of time (shown in Figure 10) were taken before the October 1992 test, i.e., with SA "on" (note the slowly varying error in Figure 10, which is caused by SA). By contrast, Figure 11 shows measurements from the October 1992 period when SA was briefly "off."

[Figure 10: Typical GPS static position error with SA "On." (Courtesy of [Byrne, 1993].)]

The effect of SA can be essentially eliminated through use of a practice known as differential GPS (DGPS). The concept is based on the premise that a second GPS receiver in fairly close proximity (i.e., within 10 km or 6.2 mi) to the first will experience basically the same error effects when viewing the same reference satellites. If this second receiver is fixed at a precisely surveyed location, its calculated solution can be compared to the known position to generate a composite error vector representative of prevailing conditions in that immediate locale. This differential correction can then be passed to the first receiver to null out the unwanted effects, effectively reducing position error for commercial systems.
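Conceptually, the differential correction is a vector subtraction, as the sketch below shows. Real DGPS applies corrections per satellite in the pseudorange domain; this position-domain version is a deliberate simplification for illustration only.

```python
def dgps_correct(rover_fix, base_fix, base_surveyed):
    """Position-domain differential correction (conceptual sketch).

    rover_fix:     (x, y) position computed by the mobile receiver.
    base_fix:      (x, y) position computed by the fixed reference receiver.
    base_surveyed: (x, y) precisely surveyed location of the reference.
    Both receivers are assumed close together (within ~10 km) and
    tracking the same satellites, so they see nearly the same errors.
    """
    error_x = base_fix[0] - base_surveyed[0]
    error_y = base_fix[1] - base_surveyed[1]
    return rover_fix[0] - error_x, rover_fix[1] - error_y
```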
Many commercial GPS receivers are available with differential capability. This, together with the service of some local radio stations that make differential corrections available to subscribers of the service [GPS Report, 1992], makes the use of DGPS possible for many applications. Typical DGPS accuracies are around 4 to 6 meters (13 to 20 ft), with better performance seen as the distance between the mobile receiver and the fixed reference station is decreased. For example, the Coast Guard is in the process of implementing differential GPS in all major U.S. harbors, with an expected accuracy of around 1 meter (3.3 ft) [Getting, 1993]. A differential GPS system already in operation at O'Hare International Airport in Chicago has demonstrated that aircraft and service vehicles can be located to 1 meter (3.3 ft) in real time, while moving. Surveyors use differential GPS to achieve centimeter accuracy, but this practice requires significant postprocessing of the collected data [Byrne, 1993].

[Figure 11: Typical GPS static position error with SA "Off." (Courtesy of Byrne [1993].)]

In 1992 and 1993, Raymond H. Byrne [1993] at the Advanced Vehicle Development Department, Sandia National Laboratories, Albuquerque, New Mexico, conducted a series of in-depth comparison tests with five different GPS receivers. Testing focused on receiver sensitivity, static accuracy, dynamic accuracy, number of satellites tracked, and time to first fix. The more important parameters evaluated in this test, the static and dynamic accuracy, are summarized below for the Magnavox GPS Engine, a representative of the five receivers tested.

Position Accuracy

Static position accuracy was measured by placing the GPS receivers at a surveyed location and taking data for approximately 24 hours. The plot of the static position error for the Magnavox GPS Engine is shown in Figure 10, above. The mean and standard deviation (σ) of the position error in this test were 22 meters (72 ft) and 16 meters (53 ft), respectively.

Fractional Availability of Signals

The dynamic test data were obtained by driving an instrumented van over different types of terrain. The various routes were chosen so that the GPS receivers would be subjected to a wide variety of obstructions. These included buildings, underpasses, signs, and foliage for the city driving. Rock cliffs and foliage were typical for the mountain and canyon driving. Large trucks, underpasses, highway signs, buildings, foliage, as well as small canyons were found on the interstate and rural highway driving routes.
The results of the dynamic testing are shown in Figure 12; the percentages have the following meaning:

No Navigation: Not enough satellites were in sight to permit positioning.

2-D Navigation: Enough satellites were in sight to determine the x- and y-coordinates of the vehicle.

3-D Navigation: Optimal data available. The system could determine x-, y-, and z-coordinates of the vehicle.

[Figure 12: Summary of dynamic environment performance for the Magnavox GPS Engine: percentage of time spent in no navigation, 2-D navigation, and 3-D navigation for city, mountain, canyon, interstate highway, and rural highway driving. (Courtesy of Byrne [1993].)]

In summary, one can conclude that GPS is a tremendously powerful tool for many outdoor navigation tasks. The problems associated with using GPS for mobile robot navigation are: (a) periodic signal blockage due to foliage and hilly terrain, (b) multi-path interference, and (c) insufficient position accuracy for primary (stand-alone) navigation systems.

2.6 Landmark Navigation

Landmarks are distinct features that a robot can recognize from its sensory input. Landmarks can be geometric shapes (e.g., rectangles, lines, circles), and they may include additional information (e.g., in the form of bar-codes). In general, landmarks have a fixed and known position, relative to which a robot can localize itself. Landmarks are carefully chosen to be easy to identify; for example, there must be sufficient contrast relative to the background. Before a robot can use landmarks for navigation, the characteristics of the landmarks must be known and stored in the robot's memory. The main task in localization is then to recognize the landmarks reliably and to calculate the robot's position.

In order to simplify the problem of landmark acquisition it is often assumed that the current robot position and orientation are known approximately, so that the robot only needs to look for landmarks in a limited area. For this reason good odometry accuracy is a prerequisite for successful landmark detection. Some approaches fall between landmark and map-based positioning (see Section 2.7). They use sensors to sense the environment and then extract distinct structures that serve as landmarks for navigation in the future.

Our discussion in this section addresses two types of landmarks: "artificial" and "natural" landmarks. It is important to bear in mind that "natural" landmarks work best in highly structured environments such as corridors, manufacturing floors, or hospitals.
Indeed, one may argue that "natural" landmarks work best when they are actually man-made (as is the case in highly structured environments). For this reason, we shall define the terms "natural landmarks" and "artificial landmarks" as follows: natural landmarks are those objects or features that are already in the environment and have a function other than robot navigation; artificial landmarks are specially designed objects or markers that need to be placed in the environment with the sole purpose of enabling robot navigation.

2.6.1 Natural Landmarks

The main problem in natural landmark navigation is to detect and match characteristic features from sensory inputs. The sensor of choice for this task is computer vision. Most computer vision-based natural landmarks are long vertical edges, such as doors, wall junctions, and ceiling lights (see TRC video clip in [Borenstein et al., 1996b]).

When range sensors are used for natural landmark navigation, distinct signatures, such as those of a corner or an edge, or of long straight walls, are good feature candidates. The selection of features is important since it will determine the complexity of feature description, detection, and matching. Proper selection of features will also reduce the chances for ambiguity and increase positioning accuracy.

Example: AECL's ARK Project

One system that uses natural landmarks was developed jointly by Atomic Energy of Canada Ltd (AECL) and Ontario Hydro Technologies, with support from the University of Toronto and York University [Jenkin et al., 1993]. This project aimed at developing a sophisticated robot system called the "Autonomous Robot for a Known Environment" (ARK).

The navigation module of the ARK robot is shown in Figure 13. The module consists of a custom-made pan-and-tilt table, a CCD camera, and an eye-safe IR spot laser rangefinder. Two VME-based cards, a single-board computer, and a microcontroller provide processing power. The module is used to periodically correct the robot's accumulating odometry errors. The system uses natural landmarks such as alphanumeric signs, semi-permanent structures, or doorways. The only criterion used is that the landmark be distinguishable from the background scene by color or contrast.

[Figure 13: The ARK's natural landmark navigation system uses a CCD camera and a time-of-flight laser rangefinder to identify landmarks and to measure the distance between landmark and robot. (Courtesy of Atomic Energy of Canada Ltd.)]

The ARK navigation module uses an interesting hybrid approach: the system stores (learns) landmarks by generating a three-dimensional "gray-level surface" from a single training image obtained from the CCD camera.
A coarse, registered range scan of the same field of view is performed by the laser rangefinder, giving depths for each pixel in the gray-level surface. Both procedures are performed from a known robot position. Later, during operation, when the robot is at an approximately known (from odometry) position within a couple of meters of the training position, the vision system searches for those landmarks that are expected to be visible from the robot's momentary position. Once a suitable landmark is found, the projected appearance of the landmark is computed. This expected appearance is then used in a coarse-to-fine normalized correlation-based matching algorithm that yields the robot's relative distance and bearing with regard to that landmark. With this procedure the ARK can identify different natural landmarks and measure its position relative to the landmarks. A video clip showing the ARK system in operation is included in [Borenstein et al., 1996b].

2.6.2 Artificial Landmarks

Detection is much easier with artificial landmarks [Atiya and Hager, 1993], which are designed for optimal contrast. In addition, the exact size and shape of artificial landmarks are known in advance. Size and shape can yield a wealth of geometric information when transformed under the perspective projection.

Researchers have used different kinds of patterns or marks, and the geometry of the method and the associated techniques for position estimation vary accordingly [Talluri and Aggarwal, 1993]. Many artificial landmark positioning systems are based on computer vision. We will not discuss these systems in detail, but will mention some of the typical landmarks used with computer vision. Fukui [1981] used a diamond-shaped landmark and applied a least-squares method to find line segments in the image plane. Other systems use reflective material patterns and strobed light to ease the segmentation and parameter extraction [Lapin, 1992; Mesaki and Masuda, 1992]. There are also systems that use active (i.e., LED) patterns to achieve the same effect [Fleury and Baron, 1992].

The accuracy achieved by the above methods depends on the accuracy with which the geometric parameters of the landmark images are extracted from the image plane, which in turn depends on the relative position and angle between the robot and the landmark. In general, the accuracy decreases with increasing relative distance. Normally there is a range of relative angles in which good accuracy can be achieved, while accuracy drops significantly once the relative angle moves out of the "good" region.

There is also a variety of landmarks used in conjunction with non-vision sensors. Most often used are bar-coded reflectors for laser scanners. For example, work on the Mobile Detection Assessment and Response System (MDARS) [Everett et al., 1994; DeCorte, 1994; Everett, 1995] uses retro-reflectors, and so does the commercially available system from Caterpillar for their Self-Guided Vehicle [Gould, 1990; Byrne et al., 1992]. The shape of these landmarks is usually unimportant. By contrast, a unique approach taken by Feng et al. [1992] used a circular landmark and applied an optical Hough transform to extract the parameters of the ellipse on the image plane in real time.
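Vision-based landmark systems such as ARK (Section 2.6.1) ultimately compare an expected landmark appearance against the live image, and normalized cross-correlation is the standard primitive for that comparison. The sketch below is a minimal, brute-force illustration of the primitive only, not the ARK implementation; the coarse-to-fine search and the range-based appearance prediction are omitted.

```python
import numpy as np

def normalized_correlation(image, template):
    """Exhaustive normalized cross-correlation of a template over an image.

    Returns the correlation map and the (row, col) of its peak, i.e. the
    best-matching location of the expected landmark appearance.
    Brute force for clarity; real systems search coarse-to-fine.
    """
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t**2).sum())
    scores = np.full((ih - th + 1, iw - tw + 1), -1.0)
    for r in range(scores.shape[0]):
        for c in range(scores.shape[1]):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.sqrt((w**2).sum()) * t_norm
            if denom > 0:
                # Score in [-1, 1]; 1 means a perfect (contrast-invariant) match.
                scores[r, c] = (w * t).sum() / denom
    peak = np.unravel_index(np.argmax(scores), scores.shape)
    return scores, peak
```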
We summarize the characteristics of landmark-based navigation as follows:

- Natural landmarks offer flexibility and require no modifications to the environment.
- Artificial landmarks are inexpensive and can have additional information encoded as patterns or shapes.
- The maximal effective distance between robot and landmark is substantially shorter than in active beacon systems.
- The positioning accuracy depends on the distance and angle between the robot and the landmark. Landmark navigation is rather inaccurate when the robot is far away from the landmark; a higher degree of accuracy is obtained only when the robot is near a landmark.
- Substantially more processing is necessary than with active beacon systems. In many cases onboard computers cannot process natural landmark algorithms quickly enough for real-time motion.
- Ambient conditions, such as lighting, can be problematic: in marginal visibility, landmarks may not be recognized at all, or other objects in the environment with similar features can be mistaken for a legitimate landmark. This is a serious problem because it may result in a completely erroneous determination of the robot's position.
- Landmarks must be available in the work environment around the robot.
- Landmark-based navigation requires an approximate starting location so that the robot knows where to look for landmarks. If the starting position is not known, the robot has to conduct a time-consuming search process. This search process may go wrong and may yield an erroneous interpretation of the objects in the scene.
- A database of landmarks and their locations in the environment must be maintained.
- There is only limited commercial support for natural landmark-based techniques.

2.7 Map-based Positioning

Map-based positioning, also known as "map matching," is a technique in which the robot uses its sensors to create a map of its local environment. This local map is then compared to a global map previously stored in memory. If a match is found, then the robot can compute its actual position and orientation in the environment. The pre-stored map can be a CAD model of the environment, or it can be constructed from prior sensor data. Map-based positioning is advantageous because it uses the naturally occurring structure of typical indoor environments to derive position information without modifying the environment. Also, with some of the algorithms being developed, map-based positioning allows a robot to learn a new environment and to improve positioning accuracy through exploration. Disadvantages of map-based positioning are the stringent requirements for accuracy of the sensor map, and the requirement that there be enough stationary, easily distinguishable features that can be used for matching. Because of these challenging requirements, most current work in map-based positioning is limited to laboratory settings and relatively simple environments.

2.7.1 Map Building

There are two fundamentally different starting points for the map-based positioning process: either there is a pre-existing map, or the robot has to build its own environment map.
Rencken [1993] defined the map-building problem as follows: "Given the robot's position and a set of measurements, what are the sensors seeing?" Obviously, the map-building ability of a robot is closely related to its sensing capacity. A problem related to map building is "autonomous exploration" [Rencken, 1994]. In order to build a map, the robot must explore its environment to map uncharted areas. Typically it is assumed that the robot begins its exploration without having any knowledge of the environment. Then, a certain motion strategy is followed which aims at maximizing the amount of charted area in the least amount of time. Such a motion strategy is called an "exploration strategy," and it depends strongly on the kind of sensors used. One example of a simple exploration strategy based on a lidar sensor is given by [Edlinger and Puttkamer, 1994].

[Figure 14: A typical scan of a room, produced by the University of Kaiserslautern's in-house developed lidar system. (Courtesy of the University of Kaiserslautern.)]

Many researchers believe that no single sensor modality alone can adequately capture all relevant features of a real environment. To overcome this problem, it is necessary to combine data from different sensor modalities, a process known as sensor fusion. For example, Buchberger et al. [1993] and Jörg [1994; 1995] developed a mechanism that utilizes heterogeneous information obtained from a laser radar and a sonar system in order to construct reliable and complete world models. Sensor fusion is an active research area, and the literature is replete with techniques that combine various types of sensor data.

2.7.2 Map Matching

One of the most important and challenging aspects of map-based navigation is map matching, i.e., establishing the correspondence between a current local map and a stored global map [Kak et al., 1990]. Work on map matching in the computer vision community is often focused on the general problem of matching an image of arbitrary position and orientation relative to a model (e.g., [Talluri and Aggarwal, 1993]). In general, matching is achieved by first extracting features, followed by determination of the correct correspondence between image and model features, usually by some form of constrained search [Cox, 1991]. A discussion of two different classes of matching algorithms, "icon-based" and "feature-based," is given in [Schaffer et al., 1992].

Example: University of Kaiserslautern's Angle Histogram

A simple but apparently very effective method for map building was developed by Hinkel and Knieriemen [1988] from the University of Kaiserslautern, Germany. This method, called the "Angle Histogram," used an in-house developed lidar. A typical scan from this lidar is shown in Figure 14.

The angle histogram method works as follows. First, a 360-degree scan of the room is taken with the lidar, and the resulting "hits" are recorded in a map. Then the algorithm measures the relative angle between any two adjacent hits (see Figure 15).
After compensating for noise in the readings (caused by the inaccuracies in position between adjacent hits), the angle histogram shown in Figure 16(a) can be built. The uniform directions of the main walls are clearly visible as peaks in the angle histogram. Computing the histogram modulo π results in only two main peaks: one for each pair of parallel walls. This algorithm is very robust with regard to openings in the walls, such as doors and windows, or even cabinets lining the walls.

[Figure 15: Calculating angles for the angle histogram. (Courtesy of [Weiß et al., 1994].)]

After computing the angle histogram, all angles of the hits can be normalized, resulting in the representation shown in Figure 16(b). After this transformation, two additional histograms, one for the x- and one for the y-direction, can be constructed. This time, peaks show the distance to the walls in the x and y directions. Hinkel and Knieriemen's original algorithms have been further refined over the past years (e.g., Weiß et al. [1994]), and the Angle Histogram method is now said to yield a reliable accuracy of 0.5°.

[Figure 16: Readings from a rotating laser scanner generate the contours of a room. (a) The angle histogram allows the robot to determine its orientation relative to the walls. (b) After normalizing the orientation of the room relative to the robot, an x-y histogram can be built from the same data points. (Adapted from [Hinkel and Knieriemen, 1988].)]
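The core of the angle histogram computation is compact enough to sketch: take the direction of the segment between each pair of adjacent hits, histogram these directions modulo π (so each pair of parallel walls contributes a single peak), and read the dominant wall orientation from the histogram peak. This is a minimal illustration of the idea, not the Kaiserslautern implementation; the noise compensation step is omitted, and the bin width is our assumption.

```python
import numpy as np

def angle_histogram(scan_xy, bin_deg=5.0):
    """Angle histogram of a 360-degree range scan (cf. Hinkel/Knieriemen).

    scan_xy: (N, 2) array of lidar 'hits' in the robot frame, ordered
             as scanned, so consecutive points are adjacent on walls.
    Returns bin edges [rad] and counts; the fullest bin gives the
    orientation of the dominant walls relative to the robot.
    """
    d = np.diff(scan_xy, axis=0)                   # adjacent-hit segments
    angles = np.arctan2(d[:, 1], d[:, 0]) % np.pi  # modulo pi: parallel walls
                                                   # share a single peak
    nbins = int(round(180.0 / bin_deg))
    counts, edges = np.histogram(angles, bins=nbins, range=(0.0, np.pi))
    return edges, counts

# Orientation estimate: center of the fullest bin.
# edges, counts = angle_histogram(scan)
# theta_wall = edges[np.argmax(counts)] + (edges[1] - edges[0]) / 2
```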
Example 2: Siemens' Roamer

Rencken [1993; 1994] at the Siemens Corporate Research and Development Center in Munich, Germany, has made substantial contributions toward solving the bootstrap problem resulting from the uncertainty in position and environment. This problem exists when a robot must move around in an unknown environment, with uncertainty in its odometry-derived position. For example, when building a map of the environment, all measurements are necessarily relative to the carrier of the sensors (i.e., the mobile robot). Yet the position of the robot itself is not known exactly, because of the errors accumulating in odometry.

Rencken addresses the problem as follows: in order to represent features "seen" by its 24 ultrasonic sensors, the robot constructs hypotheses about these features. To account for the typically unreliable information from ultrasonic sensors, features can be classified as hypothetical, tentative, or confirmed. Once a feature is confirmed, it is used for constructing the map. Before the map can be updated, though, every new data point must be associated with either a plane, a corner, or an edge (and some variations of these features). Rencken devised a "hypothesis tree," a data structure that allows tracking of different hypotheses until a sufficient amount of data has been accumulated to make a final decision.

3. CONCLUSIONS

This paper presented an overview of existing sensors and techniques for mobile robot positioning. We defined seven categories for these sensors and techniques, but obviously other ways of organizing the subject are possible. The foremost conclusion we could draw from reviewing the vast body of literature was that for indoor mobile robot navigation no single, elegant solution exists. For outdoor navigation, GPS is promising to become the universal navigation solution for almost all automated vehicle systems.

Unfortunately, an indoor equivalent to GPS is difficult to realize because none of the currently existing RF-based trilateration systems work reliably indoors. If line-of-sight between stationary and onboard components can be maintained, then RF-based solutions can work indoors as well. However, in that case optical components using triangulation are usually less expensive. The market seems to have adopted this thought some time ago, as can be seen in the relatively large number of commercially available navigation systems that are based on optical triangulation (as discussed in Section 2.4.3).

Despite the variety of powerful existing systems and techniques, we believe that mobile robotics is still in need of a particularly elegant and universal indoor navigation method. Such a method will likely bring scientific recognition and commercial success to its inventor.
Appendix A: Tabular Comparison of Positioning Systems

1. Odometry on TRC LabMate, after UMBmark calibration. Wheel-encoder resolution: 0.012 mm linear travel per pulse.
   Position accuracy (4×4 m square path): smooth floor 30 mm; with 10 bumps 500 mm.
   Orientation accuracy: smooth floor 1-2°; with 10 bumps 8°.
   Effective range: unlimited. Reference: [Borenstein and Feng, 1995].

2. CLAPPER and OmniMate: dual-drive robot with internal correction of odometry. Made from two TRC LabMates, connected by a compliant linkage. Uses two absolute rotary encoders and one linear encoder.
   Position accuracy (4×4 m square path): smooth floor ~20 mm; with 10 bumps ~40 mm.
   Orientation accuracy: smooth floor <1°; with 10 bumps <1°.
   Effective range: unlimited. Reference: [Borenstein, 1995; 1996].

3. Complete inertial navigation system, including ENV-O5S Gyrostar solid-state rate gyro, START solid-state gyro, triaxial linear accelerometer, and two inclinometers.
   Position accuracy: drift rate of 1-8 cm/s, depending on the frequency of acceleration change.
   Orientation accuracy: drift 5-0.25°/s; after compensation, drift 0.0125°/s.
   Effective range: unlimited. Reference: [Barshan and Durrant-Whyte, 1993; 1995].

4. Andrew Autogyro and Autogyro Navigator. Quoted minimum detectable rotation rate: ±0.02°/s. Actual minimum detectable rate limited by deadband after A/D conversion: 0.0625°/s. Cost: $1,000.
   Position accuracy: not applicable. Orientation accuracy: drift 0.005°/s.
   Effective range: unlimited. Reference: [ANDREW].

5. KVH Fluxgate Compass. Includes microprocessor-controlled fluxgate sensor subsystem. Cost: <$700.
   Position accuracy: not applicable. Orientation accuracy: resolution ±0.5°; accuracy ±0.5°; repeatability ±0.2°.
   Effective range: unlimited. Reference: [KVH].

6. CONAC™ (computerized opto-electronic navigation and control). Measures both angle and distance to target. Cost: $6,000.
   Position accuracy: indoor ±1.3 mm; outdoor ±5 mm. Orientation accuracy: indoor and outdoor ±0.05°.
   Effective range: >100 m. Reference: [McLeod, 1993]; [MTI].

7. Global Positioning Systems (GPS). Cost: $1,000-$5,000.
   Position accuracy: on the order of 20 m during motion; on the order of centimeters when standing for minutes. Orientation accuracy: not applicable.
   Effective range: unlimited. Reference: different vendors.

8. Landmark navigation.
   Position accuracy: <5 cm. Orientation accuracy: <1°.
   Effective range: ~10 m. Reference: different research projects.

9. Model matching (map-based positioning).
   Position accuracy: on the order of 1-10 cm. Orientation accuracy: on the order of 1-3°.
   Effective range: ~10 m. Reference: different research projects.
Acknowledgment: Parts of this research were funded by Department of Energy Grant DE-FG02-86NE37969. Parts of the text were adapted from [Borenstein et al., 1996; Everett, 1995; Byrne, 1993].

4. REFERENCES

1. Atiya, S. and Hager, G., 1993, "Real-time Vision-based Robot Localization." IEEE Transactions on Robotics and Automation, Vol. 9, No. 6, pp. 785-800.
2. Barshan, B. and Durrant-Whyte, H.F., 1993, "An Inertial Navigation System for a Mobile Robot." Proceedings of the 1993 IEEE/RSJ International Conference on Intelligent Robotics and Systems, Yokohama, Japan, July 26-30, pp. 2243-2248.
3. Barshan, B. and Durrant-Whyte, H.F., 1995, "Inertial Navigation Systems for Mobile Robots." IEEE Transactions on Robotics and Automation, Vol. 11, No. 3, June, pp. 328-342.
4. Battin, R.H., 1987, "An Introduction to the Mathematics and Methods of Astrodynamics." AIAA Education Series, New York, NY, ISBN 0-930403-25-8.
5. Borenstein, J. and Feng, L., 1994, "UMBmark - A Method for Measuring, Comparing, and Correcting Dead-reckoning Errors in Mobile Robots." Technical Report UM-MEAM-94-22, The University of Michigan, December.
6. Borenstein, J., 1995, "Internal Correction of Dead-reckoning Errors With the Compliant Linkage Vehicle." Journal of Robotic Systems, Vol. 12, No. 4, April 1995, pp. 257-273.
7. Borenstein, J., 1995v, "The CLAPPER: A Dual-drive Mobile Robot With Internal Correction of Dead-reckoning Errors." Video Proceedings of the 1995 IEEE International Conference on Robotics and Automation, Nagoya, Japan, May 21-27, 1995.
8. Borenstein, J. and Feng, L., 1995, "UMBmark: A Benchmark Test for Measuring Dead-reckoning Errors in Mobile Robots." 1995 SPIE Conference on Mobile Robots, Philadelphia, October 22-26.
9. Borenstein, J. and Feng, L., 1996, "Measurement and Correction of Systematic Odometry Errors in Mobile Robots." IEEE Journal of Robotics and Automation, Vol. 12, No. 5, October.
10. Borenstein, J., Everett, B., and Feng, L., 1996a, "Navigating Mobile Robots: Systems and Techniques." A. K. Peters, Ltd., Wellesley, MA, ISBN 1-56881-058-X.
11. Borenstein, J., Everett, B., and Feng, L., 1996b, "Navigating Mobile Robots: Systems and Techniques." CD-ROM Edition, A. K. Peters, Ltd., Wellesley, MA, ISBN 1-56881-058-X.
12. Borenstein, J., 1996, "Experimental Results from Internal Odometry Error Correction With the OmniMate Mobile Platform." Submitted to the IEEE Transactions on Robotics and Automation, July 1996.
13. Buchberger, M., Jörg, K., and Puttkamer, E., 1993, "Laserradar and Sonar Based World Modeling and Motion Control for Fast Obstacle Avoidance of the Autonomous Mobile Robot MOBOT-IV." Proceedings of the IEEE International Conference on Robotics and Automation, Atlanta, GA, May 10-15, pp. 534-540.
14. Byrne, R.H., Klarer, P.R., and Pletta, J.B., 1992, "Techniques for Autonomous Navigation." Sandia Report SAND92-0457, Sandia National Laboratories, Albuquerque, NM, March.
15. Byrne, R.H., 1993, "Global Positioning System Receiver Evaluation Results." Sandia Report SAND93-0827, Sandia National Laboratories, Albuquerque, NM, September.
16. Chenavier, F. and Crowley, J., 1992, "Position Estimation for a Mobile Robot Using Vision and Odometry." Proceedings of the IEEE International Conference on Robotics and Automation, Nice, France, May 12-14, pp. 2588-2593.
17. Cohen, C. and Koss, F., 1992, "A Comprehensive Study of Three Object Triangulation." Proceedings of the 1993 SPIE Conference on Mobile Robots, Boston, MA, Nov. 18-20, pp. 95-106.
18. Cox, I.J., 1991, "Blanche - An Experiment in Guidance and Navigation of an Autonomous Mobile Robot." IEEE Transactions on Robotics and Automation, 7(3), pp. 193-204.
19. Dahlin, T. and Krantz, D., 1988, "Low-Cost, Medium-Accuracy Land Navigation System." Sensors, Feb., pp. 26-34.
20. DeCorte, C., 1994, "Robots Train for Security Surveillance." Access Control, June, pp. 37-38.
21. Edlinger, T. and Puttkamer, E., 1994, "Exploration of an Indoor Environment by an Autonomous Mobile Robot." International Conference on Intelligent Robots and Systems (IROS '94), Munich, Germany, Sept. 12-16, pp. 1278-1284.
22. Everett, H.R., Gage, D.W., Gilbreth, G.A., Laird, R.T., and Smurlo, R.P., 1994, "Real-World Issues in Warehouse Navigation." Proceedings SPIE Mobile Robots IX, Volume 2352, Boston, MA, Nov. 2-4.
23. Everett, H.R., 1995, Sensors for Mobile Robots: Theory and Application, A. K. Peters, Ltd., Wellesley, MA, ISBN 1-56881-048-2.
24. Farrell, J.L., 1976, "Integrated Aircraft Navigation." Academic Press, New York, NY, ISBN 0-12-249750-3.
25. Feng, L., Fainman, Y., and Koren, Y., 1992, "Estimate of Absolute Position of Mobile Systems by Opto-electronic Processor." IEEE Transactions on Man, Machine and Cybernetics, Vol. 22, No. 5, pp. 954-963.
26. Fleury, S. and Baron, T., 1992, "Absolute External Mobile Robot Localization Using a Single Image." Proceedings of the 1992 SPIE Conference on Mobile Robots, Boston, MA, Nov. 18-20, pp. 131-143.
27. Fukui, I., 1981, "TV Image Processing to Determine the Position of a Robot Vehicle." Pattern Recognition, Vol. 14, pp. 101-109.
28. Getting, I.A., 1993, "The Global Positioning System." IEEE Spectrum, December, pp. 36-47.
29. Gothard, B.M., Etersky, R.D., and Ewing, R.E., 1993, "Lessons Learned on a Low-Cost Global Navigation System for the Surrogate Semi-Autonomous Vehicle." SPIE Proceedings, Vol. 2058, Mobile Robots VIII, pp. 258-269.
30. Gould, L., 1990, "Is Off-Wire Guidance Alive or Dead?" Managing Automation, May, pp. 38-40.
31. GPS Report, November 5, 1992. Potomac, MD: Phillips Business Information.
32. Hinkel, R. and Knieriemen, T., 1988, "Environment Perception with a Laser Radar in a Fast Moving Robot." Symposium on Robot Control 1988 (SYROCO '88), Karlsruhe, Germany, October 5-7, pp. 68.1-68.7.
33. Jenkin, M., Milios, E., Jasiobedzki, P., Bains, N., and Tran, K., 1993, "Global Navigation for ARK." Proceedings of the 1993 IEEE/RSJ International Conference on Intelligent Robotics and Systems, Yokohama, Japan, July 26-30, pp. 2165-2171.
34. Jörg, K.W., 1994, "Echtzeitfähige Multisensorintegration für autonome mobile Roboter." ISBN 3-411-16951-6, B.I. Wissenschaftsverlag, Mannheim, Leipzig, Wien, Zürich.
35. Jörg, K.W., 1995, "World Modeling for an Autonomous Mobile Robot Using Heterogenous Sensor Information." Robotics and Autonomous Systems, Vol. 14, pp. 159-170.
36. Kak, A., Andress, K., Lopez-Abadia, and Carroll, M., 1990, "Hierarchical Evidence Accumulation in the PSEIKI System and Experiments in Model-driven Mobile Robot Navigation." Uncertainty in Artificial Intelligence, Vol. 5, Elsevier Science Publishers B.V., North-Holland, pp. 353-369.
37. Lapin, B., 1992, "Adaptive Position Estimation for an Automated Guided Vehicle." Proceedings of the 1992 SPIE Conference on Mobile Robots, Boston, MA, Nov. 18-20, pp. 82-94.
38. Mesaki, Y. and Masuda, I., 1992, "A New Mobile Robot Guidance System Using Optical Reflectors." Proceedings of the 1992 IEEE/RSJ International Conference on Intelligent Robots and Systems, Raleigh, NC, July 7-10, pp. 628-635.
39. Rencken, W.D., 1993, "Concurrent Localization and Map Building for Mobile Robots Using Ultrasonic Sensors." Proceedings of the 1993 IEEE/RSJ International Conference on Intelligent Robotics and Systems, Yokohama, Japan, July 26-30, pp. 2192-2197.
40. Rencken, W.D., 1994, "Autonomous Sonar Navigation in Indoor, Unknown, and Unstructured Environments." 1994 International Conference on Intelligent Robots and Systems (IROS '94), Munich, Germany, Sept. 12-16, pp. 127-134.
41. Talluri, R. and Aggarwal, J., 1993, "Position Estimation Techniques for an Autonomous Mobile Robot - a Review." In Handbook of Pattern Recognition and Computer Vision, World Scientific: Singapore, Chapter 4.4, pp. 769-801.
42. Weiß, G., Wetzler, C., and Puttkamer, E., 1994, "Keeping Track of Position and Orientation of Moving Indoor Systems by Correlation of Range-Finder Scans." 1994 International Conference on Intelligent Robots and Systems (IROS '94), Munich, Germany, Sept. 12-16, pp. 595-601.

Commercial Companies

43. ANDREW - Andrew Corporation, 10500 W. 153rd Street, Orland Park, IL 60462, 708-349-5294 or 708-349-3300.
44. DBIR - Denning Branch International Robotics, 1401 Ridge Avenue, Pittsburgh, PA 15233, 412-322-4412.
45. KVH - KVH Industries, C100 Compass Engine Product Literature, 110 Enterprise Center, Middletown, RI 02840, 401-847-3327.
46. MTI - MTI Research, Inc., 313 Littleton Road, Chelmsford, MA 01824, 508-250-4949.
47. TRC - Transitions Research Corp. (now under new name: "HelpMate Robotics Inc. - HRI"), Shelter Rock Lane, Danbury, CT 06810, 203-798-8988.