
University College of Science, OU

2014

Remote Sensing and GIS

Haroon Hairan


UNIT-I

What is Remote Sensing?

We perceive the surrounding world through our five senses. Some senses (touch and taste) require contact between our sensing organs and the objects. However, we acquire much information about our surroundings through sight and hearing, which do not require close contact between the sensing organs and the external objects. In other words, we are performing remote sensing all the time.

Generally, remote sensing refers to the activities of recording, observing, or perceiving (sensing) objects or events at faraway (remote) places. In remote sensing, the sensors are not in direct contact with the objects or events being observed. The information needs a physical carrier to travel from the objects or events to the sensors through an intervening medium. Electromagnetic radiation is normally used as the information carrier in remote sensing. The output of a remote sensing system is usually an image representing the scene being observed. A further step of image analysis and interpretation is required in order to extract useful information from the image. The human visual system is an example of a remote sensing system in this general sense.

In a more restricted sense, remote sensing usually refers to the technology of acquiring information about the earth's surface (land and ocean) and atmosphere using sensors on board airborne (aircraft, balloons) or spaceborne (satellites, space shuttles) platforms.

Satellite Remote Sensing

In this CD, you will see many remote sensing images around Asia acquired by earth observation satellites. These remote sensing satellites are equipped with sensors looking down at the earth. They are the "eyes in the sky" constantly observing the earth as they go round in predictable orbits.

Effects of Atmosphere

In satellite remote sensing of the earth, the sensors look through a layer of atmosphere separating the sensors from the Earth's surface being observed. Hence, it is essential to understand the effects of the atmosphere on the electromagnetic radiation travelling from the Earth to the sensor. The atmospheric constituents cause wavelength-dependent absorption and scattering of radiation, and these effects degrade the quality of images. Some of the atmospheric effects can be corrected before the images are subjected to further analysis and interpretation.

A consequence of atmospheric absorption is that certain wavelength bands in the electromagnetic spectrum are strongly absorbed and effectively blocked by the atmosphere. The wavelength regions usable for remote sensing are determined by their ability to penetrate the atmosphere. These regions are known as the atmospheric transmission windows, and remote sensing systems are often designed to operate within one or more of them. These windows exist in the microwave region, some wavelength bands in the infrared, the entire visible region and part of the near-ultraviolet region. Although the atmosphere is practically transparent to x-rays and gamma rays, these radiations are not normally used in remote sensing of the earth.

Optical and Infrared Remote Sensing

In optical remote sensing, optical sensors detect solar radiation reflected or scattered from the earth, forming images resembling photographs taken by a camera high up in space. The wavelength region usually extends from the visible and near infrared (commonly abbreviated as VNIR) to the short-wave infrared (SWIR).

Different materials such as water, soil, vegetation, buildings and roads reflect visible and infrared light in different ways; they have different colours and brightness when seen under the sun. The interpretation of optical images requires knowledge of the spectral reflectance signatures of the various materials (natural or man-made) covering the surface of the earth. There are also infrared sensors that measure the thermal infrared radiation emitted from the earth, from which the land or sea surface temperature can be derived.

Microwave Remote Sensing

Some remote sensing satellites carry passive or active microwave sensors. The active sensors emit pulses of microwave radiation to illuminate the areas to be imaged, and images of the earth's surface are formed by measuring the microwave energy scattered by the ground or sea back to the sensors. These satellites carry their own "flashlight" emitting microwaves to illuminate their targets, so images can be acquired day and night. Microwaves have the additional advantage that they can penetrate clouds, so images can be acquired even when clouds cover the earth's surface.

A microwave imaging system which can produce high-resolution images of the Earth is the synthetic aperture radar (SAR). The intensity in a SAR image depends on the amount of microwave energy backscattered by the target and received by the SAR antenna. Since the physical mechanisms responsible for this backscatter are different for microwaves than for visible/infrared radiation, the interpretation of SAR images requires knowledge of how microwaves interact with the targets.

Remote Sensing Images

Remote sensing images are normally in the form of digital images. In order to extract useful information from them, image processing techniques may be employed to enhance the image to help visual interpretation, and to correct or restore the image if it has been subjected to geometric distortion, blurring or degradation by other factors. Many image analysis techniques are available, and the methods used depend on the requirements of the specific problem. In many cases, image segmentation and classification algorithms are used to delineate different areas in an image into thematic classes. The resulting product is a thematic map of the study area, which can be combined with other databases of the test area for further analysis and utilization.
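As a simple illustration of how classification turns image bands into a thematic map, the sketch below thresholds a vegetation index (NDVI) computed from two bands. It is a minimal sketch, not a production classifier: the red and nir arrays stand in for co-registered reflectance bands, and the class boundaries are arbitrary.

```python
import numpy as np

# Hypothetical co-registered reflectance bands (tiny 2 x 2 scene).
red = np.array([[0.10, 0.30], [0.45, 0.20]])
nir = np.array([[0.50, 0.35], [0.40, 0.60]])

# NDVI = (NIR - Red) / (NIR + Red); epsilon avoids division by zero.
ndvi = (nir - red) / (nir + red + 1e-10)

# Delineate three illustrative thematic classes from the NDVI values.
thematic = np.select(
    [ndvi < 0.1, ndvi < 0.4],  # water/bare surfaces, sparse cover
    [0, 1],
    default=2,                 # dense vegetation
)
print(thematic)  # 0 = water/bare, 1 = sparse, 2 = vegetation
```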

Aerial photography


Aerial photography is the taking of photographs of the ground from an elevated position. The term usually refers to images in which the camera is not supported by a ground-based structure. Platforms for aerial photography include fixed-wing aircraft, helicopters, multirotor Unmanned Aircraft Systems (UAS), balloons, blimps and dirigibles, rockets, kites, parachutes, and stand-alone telescoping and vehicle-mounted poles. Mounted cameras may be triggered remotely or automatically; hand-held photographs may be taken by a photographer.

Aerial photography should not be confused with air-to-air photography, where one or more aircraft are used as chase planes that "chase" and photograph other aircraft in flight.

History

Early History

Aerial photography was first practiced by the French photographer and balloonist Gaspard-Félix Tournachon, known as "Nadar", in 1858 over Paris, France. However, the photographs he produced no longer exist, and the earliest surviving aerial photograph is therefore titled 'Boston, as the Eagle and the Wild Goose See It.' Taken by James Wallace Black and Samuel Archer King on October 13, 1860, it depicts Boston from a height of 630 m.

Kite aerial photography was pioneered by the British meteorologist E.D. Archibald in 1882. He used an explosive charge on a timer to take photographs from the air. The Frenchman Arthur Batut began using kites for photography in 1888, and wrote a book on his methods in 1890. Samuel Franklin Cody developed his advanced 'Man-lifter War Kite' and succeeded in interesting the British War Office in its capabilities.

The first use of a motion picture camera mounted to a heavier-than-air aircraft took place on April 24, 1909 over Rome in the 3:28 silent film short, Wilbur Wright und seine Flugmaschine.

World War I

The use of aerial photography rapidly matured during the war, as reconnaissance aircraft were equipped with cameras to record enemy movements and defences. At the start of the conflict, the usefulness of aerial photography was not fully appreciated, with reconnaissance being accomplished with map sketching from the air.

Germany adopted the first aerial camera, a Görz, in 1913. The French began the war with several squadrons of Blériot observation aircraft equipped with cameras for reconnaissance. The French Army developed procedures for getting prints into the hands of field commanders in record time.

Frederick Charles Victor Laws started aerial photography experiments in 1912 with No. 1 Squadron of the Royal Flying Corps, taking photographs from the British dirigible Beta. He discovered that vertical photos taken with 60% overlap could be used to create a stereoscopic effect when viewed in a stereoscope, thus creating a perception of depth that could aid in cartography and in intelligence derived from aerial images. Royal Flying Corps reconnaissance pilots began to use cameras for recording their observations in 1914, and by the Battle of Neuve Chapelle in 1915, the entire system of German trenches was being photographed. In 1916 the Austro-Hungarian Monarchy made vertical-camera-axis aerial photos above Italy for map-making.

The first purpose-built and practical aerial camera was invented by Captain John Moore-Brabazon in 1915 with the help of the Thornton-Pickard company, greatly enhancing the efficiency of aerial photography. The camera was inserted into the floor of the aircraft and could be triggered by the pilot at intervals. Moore-Brabazon also pioneered the incorporation of stereoscopic techniques into aerial photography, allowing the height of objects on the landscape to be discerned by comparing photographs taken at different angles.

By the end of the war aerial cameras had dramatically increased in size and focal power, and were used increasingly frequently as they proved their pivotal military worth; by 1918 both sides were photographing the entire front twice a day, and had taken over half a million photos since the beginning of the conflict. In January 1918, General Allenby used five Australian pilots from No. 1 Squadron AFC to photograph a 624 square mile (1,620 km2) area in Palestine as an aid to correcting and improving maps of the Turkish front. This was a pioneering use of aerial photography as an aid for cartography. Lieutenants Leonard Taplin, Allan Runciman Brown, H. L. Fraser, Edward Patrick Kenny, and L. W. Rogers photographed a block of land stretching from the Turkish front lines 32 miles (51 km) deep into their rear areas. Beginning 5 January, they flew with a fighter escort to ward off enemy fighters. Using Royal Aircraft Factory BE.12 and Martinsyde airplanes, they not only overcame enemy air attacks but also had to contend with 65 mph (105 km/h) winds, antiaircraft fire, and malfunctioning equipment to complete their task.

Commercial Aerial Photography

The first commercial aerial photography company in the UK was Aerofilms Ltd, founded by World War I veterans Francis Wills and Claude Grahame-White in 1919. The company soon expanded into a business with major contracts in Africa and Asia as well as in the UK. Operations began from the Stag Lane Aerodrome at Edgware, using the aircraft of the London Flying School. Subsequently, an Airco DH.9 and pilot-entrepreneur Alan Cobham were hired from the Aircraft Manufacturing Company (later the de Havilland Aircraft Company).

From 1921, Aerofilms carried out vertical photography for survey and mapping purposes. During the 1930s, the company pioneered the science of photogrammetry (mapping from aerial photographs), with the Ordnance Survey amongst its clients.

Another successful pioneer of the commercial use of aerial photography was the American Sherman Fairchild, who started his own aircraft firm, Fairchild Aircraft, to develop and build specialized aircraft for high-altitude aerial survey missions. In 1935 one Fairchild aerial survey aircraft carried a unit that combined two synchronized cameras, each with five six-inch lenses and one ten-inch lens, and took photos from 23,000 feet; each photo covered 225 square miles. One of its first government contracts was an aerial survey of New Mexico to study soil erosion. A year later, Fairchild introduced a better high-altitude camera with nine lenses in one unit that could photograph 600 square miles in each exposure from 30,000 feet.

World War II

In 1939 Sidney Cotton and Flying Officer Maurice Longbottom of the RAF were among the first to suggest that airborne reconnaissance might be a task better suited to fast, small aircraft which would use their speed and high service ceiling to avoid detection and interception. Although this seems obvious now, with modern reconnaissance tasks performed by fast, high-flying aircraft, at the time it was radical thinking.

They proposed the use of Spitfires with their armament and radios removed and replaced with extra fuel and cameras. This led to the development of the Spitfire PR variants. Spitfires proved to be extremely successful in their reconnaissance role and there were many variants built specifically for that purpose. They served initially with what later became No. 1 Photographic Reconnaissance Unit (PRU). In 1928, the RAF developed an electric heating system for the aerial camera. This allowed reconnaissance aircraft to take pictures from very high altitudes without the camera parts freezing. Based at RAF Medmenham, the collection and interpretation of such photographs became a considerable enterprise.

Cotton's aerial photographs were far ahead of their time. Together with other members of the 1 PRU, he pioneered the techniques of high-altitude, high-speed stereoscopic photography that were instrumental in revealing the locations of many crucial military and intelligence targets. According to R.V. Jones, photographs were used to establish the size and the characteristic launching mechanisms for both the V-1 flying bomb and the V-2 rocket. Cotton also worked on ideas such as a prototype specialist reconnaissance aircraft and further refinements of photographic equipment. At its peak, the British flew over 100 reconnaissance flights a day, yielding 50,000 images per day to interpret. Similar efforts were made by other countries.

Uses

Aerial photography is used in cartography (particularly in photogrammetric surveys, which are often the basis for topographic maps), land-use planning, archaeology, movie production, environmental studies, surveillance, commercial advertising, conveyancing, and artistic projects. An example of how aerial photography is used in archaeology is the mapping project carried out at the site of Angkor Borei in Cambodia in 1995–1996. Using aerial photography, archaeologists were able to identify archaeological features, including 112 water features (reservoirs, artificially constructed pools and natural ponds) within the walled site of Angkor Borei. In the United States, aerial photographs are used in many Phase I Environmental Site Assessments for property analysis.


Platforms

Radio-controlled model aircraft

Advances in radio-controlled models have made it possible for model aircraft to conduct low-altitude aerial photography. This has benefited real-estate advertising, where commercial and residential properties are the photographic subject. Full-size, manned aircraft are prohibited from low flights above populated locations, so small-scale model aircraft offer increased photographic access to these otherwise restricted areas. Miniature vehicles do not replace full-size aircraft, as full-size aircraft are capable of longer flight times, higher altitudes, and greater equipment payloads. They are, however, useful in any situation in which a full-scale aircraft would be dangerous to operate. Examples include the inspection of transformers atop power transmission lines and slow, low-level flight over agricultural fields, both of which can be accomplished by a large-scale radio-controlled helicopter. Professional-grade, gyroscopically stabilized camera platforms are available for use on such models; a large model helicopter with a 26 cc gasoline engine can hoist a payload of approximately seven kilograms (15 lb).

FAA regulations introduced in 2006 grounded all commercial RC model flights; they have since been updated to require formal FAA certification before permission is granted to fly at any altitude in the USA.

In Australia, Civil Aviation Safety Regulation 101 (CASR 101) allows for the commercial use of radio-controlled aircraft. Under these regulations, radio-controlled unmanned aircraft used commercially are referred to as Unmanned Aircraft Systems (UAS), whereas radio-controlled aircraft flown for recreational purposes are referred to as model aircraft. Under CASR 101, businesses or persons operating radio-controlled aircraft commercially are required to hold an Operator Certificate, just like manned aircraft operators. Pilots of radio-controlled aircraft operating commercially are also required to be licensed by the Civil Aviation Safety Authority (CASA). Whilst a small UAS and a model aircraft may actually be identical, unlike a model aircraft a UAS may enter controlled airspace with approval and operate within close proximity to an aerodrome.

Due to a number of illegal operators in Australia making false claims of being approved, CASA maintains and publishes a list of approved UAS operators.

In the United States, because anything capable of being viewed from a public space is considered outside the realm of privacy, aerial photography may legally document features and occurrences on private property.

Types


Oblique

Photographs taken at an angle are called oblique photographs. If they are taken from a low angle relative to the earth's surface, they are called low oblique; photographs taken from a high angle are called high or steep oblique.

Vertical

Vertical photographs are taken straight down. They are mainly used in photogrammetry and image interpretation. Pictures that will be used in photogrammetry are traditionally taken with special large format cameras with calibrated and documented geometric properties.

Combinations

Aerial photographs are often combined. Depending on their purpose it can be done in several ways, of which a few are listed below.

Panoramas can be made by stitching several photographs taken with a hand-held camera.

In pictometry five rigidly mounted cameras provide one vertical and four low oblique pictures that can be used together.

In some digital cameras for aerial photogrammetry, images from several imaging elements, sometimes with separate lenses, are geometrically corrected and combined into one image in the camera.

Orthophotos

Vertical photographs are often used to create orthophotos, alternatively known as orthophotomaps, photographs which have been geometrically "corrected" so as to be usable as a map. In other words, an orthophoto is a simulation of a photograph taken from an infinite distance, looking straight down to nadir. Perspective must obviously be removed, but variations in terrain should also be corrected for. Multiple geometric transformations are applied to the image, depending on the perspective and terrain corrections required on a particular part of the image.

Orthophotos are commonly used in geographic information systems, such as are used by mapping agencies (e.g. Ordnance Survey) to create maps. Once the images have been aligned, or "registered", with known real-world coordinates, they can be widely deployed.

Large sets of orthophotos, typically derived from multiple sources and divided into "tiles" (each typically 256 x 256 pixels in size), are widely used in online map systems such as Google Maps. OpenStreetMap offers the use of similar orthophotos for deriving new map data. Google Earth overlays orthophotos or satellite imagery onto a digital elevation model to simulate 3D landscapes.
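For illustration, the tile indices used by such systems follow directly from the Web Mercator projection. The sketch below is a common formulation of the OSM/Google "slippy map" tile arithmetic, not any particular provider's official API:

```python
import math

def latlon_to_tile(lat_deg: float, lon_deg: float, zoom: int) -> tuple[int, int]:
    """Return (x, y) of the 256 x 256 Web Mercator tile containing a point."""
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    lat_rad = math.radians(lat_deg)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y

# Example: the tile covering central London at zoom level 12.
print(latlon_to_tile(51.5074, -0.1278, 12))  # approximately (2046, 1362)
```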


Aerial Video

With advancements in video technology, aerial video is becoming more popular. Orthogonal video is shot from aircraft to map pipelines, crop fields, and other points of interest. Using GPS, the video may be embedded with metadata and later synced with a video-mapping program.

This "Spatial Multimedia" is the timely union of digital media including still photography, motion video, stereo, panoramic imagery sets, immersive media constructs, audio, and other data with location and date-time information from the GPS and other location designs.

Aerial videos are an emerging form of spatial multimedia which can be used for scene understanding and object tracking. The input video is captured by low-flying aerial platforms and typically contains strong parallax from non-ground-plane structures. The integration of digital video, global positioning systems (GPS) and automated image processing will improve the accuracy and cost-effectiveness of data collection and reduction. Several different aerial platforms are under investigation for the data collection.

Satellite

In the context of spaceflight, a satellite is an artificial object which has been intentionally placed into orbit. Such objects are sometimes called artificial satellites to distinguish them from natural satellites such as the Moon.

The world's first artificial satellite, Sputnik 1, was launched by the Soviet Union in 1957. Since then, thousands of satellites have been launched into orbit around the Earth. Some satellites, notably space stations, have been launched in parts and assembled in orbit. Artificial satellites originate from more than 50 countries and have used the satellite-launching capabilities of ten nations. A few hundred satellites are currently operational, whereas thousands of unused satellites and satellite fragments orbit the Earth as space debris. A few space probes have been placed into orbit around other bodies and become artificial satellites of the Moon, Mercury, Venus, Mars, Jupiter, Saturn, Vesta, Eros, and the Sun.

Satellites are used for a large number of purposes. Common types include military and civilian Earth observation satellites, communications satellites, navigation satellites, weather satellites, and research satellites. Space stations and human spacecraft in orbit are also satellites. Satellite orbits vary greatly, depending on the purpose of the satellite, and are classified in a number of ways. Well-known (overlapping) classes include low Earth orbit, polar orbit, and geostationary orbit.

About 6,600 satellites have been launched. The latest estimates are that 3,600 remain in orbit. Of those, about 1,000 are operational;[2][3] the rest have lived out their useful lives and are now part of the space debris. Approximately 500 operational satellites are in low Earth orbit, 50 are in medium Earth orbit (at 20,000 km), and the rest are in geostationary orbit (at 36,000 km).


Satellites are propelled by rockets to their orbits. Usually the launch vehicle itself is a rocket lifting off from a launch pad on land. In a minority of cases satellites are launched at sea (from a submarine or a mobile maritime platform) or aboard a plane.

Satellites are usually semi-independent computer-controlled systems. Satellite subsystems attend many tasks, such as power generation, thermal control, telemetry, attitude control and orbit control.

Space Surveillance Network

The United States Space Surveillance Network (SSN), a division of the United States Strategic Command, has been tracking objects in Earth's orbit since 1957, when the Soviets opened the space age with the launch of Sputnik 1. Since then, the SSN has tracked more than 26,000 objects, and it currently tracks more than 8,000 man-made orbiting objects. The rest have re-entered Earth's atmosphere and disintegrated, or survived re-entry and impacted the Earth. The SSN tracks objects that are 10 centimeters in diameter or larger; those now orbiting Earth range from satellites weighing several tons to pieces of spent rocket bodies weighing only 10 pounds. About seven percent are operational satellites (i.e. ~560 satellites); the rest are space debris. The United States Strategic Command is primarily interested in the active satellites, but it also tracks space debris which upon re-entry might otherwise be mistaken for incoming missiles.

A search of the NSSDC Master Catalog at the end of October 2010 listed 6,578 satellites launched into orbit since 1957, the latest being Chang'e 2, on 1 October 2010.

Non-Military Satellite Services

There are three basic categories of non-military satellite services:

Fixed satellite services

Fixed satellite services handle hundreds of billions of voice, data, and video transmission tasks across all countries and continents between certain points on the Earth's surface.

Mobile satellite systems

Mobile satellite systems help connect remote regions, vehicles, ships, people and aircraft to other parts of the world and/or other mobile or stationary communications units, in addition to serving as navigation systems.

Scientific research satellites (commercial and noncommercial)

Scientific research satellites provide meteorological information, land survey data (e.g. remote sensing), Amateur (HAM) Radio, and other different scientific research applications such as earth science, marine science, and atmospheric research.

Types

Anti-Satellite weapons/"Killer Satellites"  are satellites that are designed to destroy enemy warheads, satellites, and other space assets.


Astronomical satellites  are satellites used for observation of distant planets, galaxies, and other outer space objects.

Biosatellites  are satellites designed to carry living organisms, generally for scientific experimentation.

Communications satellites  are satellites stationed in space for the purpose of telecommunications. Modern communications satellites typically use geosynchronous orbits, Molniya orbits or Low Earth orbits.

Miniaturized satellites  are satellites of unusually low masses and small sizes. New classifications are used to categorize these satellites: minisatellite (100–500 kg), microsatellite (below 100 kg), nanosatellite (below 10 kg).

Navigational satellites  are satellites which use radio time signals transmitted to enable mobile receivers on the ground to determine their exact location. The relatively clear line of sight between the satellites and receivers on the ground, combined with ever-improving electronics, allows satellite navigation systems to measure location to accuracies on the order of a few meters in real time.

Reconnaissance satellites  are Earth observation or communications satellites deployed for military or intelligence applications. Very little is known about the full capabilities of these satellites, as the governments that operate them usually keep information pertaining to their reconnaissance satellites classified.

Earth observation satellites  are satellites intended for non-military uses such as environmental monitoring, meteorology, map making etc. (See especially Earth Observing System.)

Tether satellites  are satellites which are connected to another satellite by a thin cable called a tether.

Weather satellites  are primarily used to monitor Earth's weather and climate.

Recovery satellites  are satellites that provide a recovery of reconnaissance, biological, space-production and other payloads from orbit to Earth.

Manned spacecraft (spaceships) are large satellites able to carry humans into (and beyond) orbit and return them to Earth. Spacecraft, including the spaceplanes of reusable systems, have major propulsion or landing facilities and can be used as transport to and from orbital stations.

Space stations are man-made orbital structures that are designed for human beings to live on in outer space. A space station is distinguished from other manned spacecraft by its lack of major propulsion or landing facilities. Space stations are designed for medium-term living in orbit, for periods of weeks, months, or even years.


A Skyhook is a proposed type of tethered satellite/ion-powered space station that serves as a terminal for suborbital launch vehicles flying between the Earth and the lower end of the Skyhook, as well as a terminal for spacecraft going to, or arriving from, higher orbit, the Moon, or Mars, at the upper end of the Skyhook.

Orbit Types

The first satellite, Sputnik 1, was put into orbit around Earth and was therefore in geocentric orbit. By far this is the most common type of orbit with approximately 2,456 artificial satellites orbiting the Earth. Geocentric orbits may be further classified by their altitude, inclination and eccentricity.

The commonly used altitude classifications of geocentric orbit are Low Earth orbit (LEO), Medium Earth orbit (MEO) and High Earth orbit (HEO). Low Earth orbit is any orbit below 2,000 km, Medium Earth orbit any orbit between 2,000 km and 35,786 km, and High Earth orbit any orbit higher than 35,786 km.

Centric classifications

Geocentric orbit: An orbit around the planet Earth, such as the Moon or artificial satellites. Currently there are approximately 2,465 artificial satellites orbiting the Earth.

Heliocentric orbit: An orbit around the Sun. In our Solar System, all planets, comets, and asteroids are in such orbits, as are many artificial satellites and pieces of space debris. Moons by contrast are not in a heliocentric orbit but rather orbit their parent planet.

Areocentric orbit: An orbit around the planet Mars, such as by moons or artificial satellites.

In general, satellites operate in conjunction with earth stations on the ground, which are themselves interconnected through terrestrial links.

Altitude classifications

Low Earth orbit (LEO): Geocentric orbits ranging in altitude from 0–2000 km (0–1240 miles)

Medium Earth orbit (MEO): Geocentric orbits ranging in altitude from 2,000 km (1,200 mi)-35,786 km (22,236 mi). Also known as an intermediate circular orbit.

Geosynchronous orbit (GEO): Geocentric circular orbit with an altitude of 35,786 kilometres (22,236 mi). The period of the orbit equals one sidereal day, coinciding with the rotation period of the Earth, and the speed is approximately 3,000 metres per second (9,800 ft/s); the arithmetic behind these figures is sketched after this list.

High Earth orbit (HEO): Geocentric orbits above the altitude of geosynchronous orbit 35,786 km (22,236 mi).
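The geosynchronous figures quoted in this list follow from Kepler's third law. The sketch below reproduces that arithmetic using standard published constants:

```python
import math

# Kepler's third law: r = (GM * T^2 / (4 * pi^2))^(1/3)
GM = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_378_137.0      # Earth's equatorial radius, m
T_SIDEREAL = 86_164.0905   # one sidereal day, s

r = (GM * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - R_EARTH) / 1000
speed_ms = 2 * math.pi * r / T_SIDEREAL

print(f"altitude ~ {altitude_km:,.0f} km")     # ~35,786 km
print(f"orbital speed ~ {speed_ms:,.0f} m/s")  # ~3,075 m/s
```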


Inclination classifications

Inclined orbit: An orbit whose inclination in reference to the equatorial plane is not zero degrees.

Polar orbit: An orbit that passes above or nearly above both poles of the planet on each revolution. It therefore has an inclination of (or very close to) 90 degrees.

Polar sun-synchronous orbit: A nearly polar orbit that passes the equator at the same local time on every pass. Useful for imaging satellites because shadows will be nearly the same on every pass.

Eccentricity classifications

Circular orbit: An orbit that has an eccentricity of 0 and whose path traces a circle.

Hohmann transfer orbit: An orbit that moves a spacecraft from one approximately circular orbit, usually the orbit of a planet, to another, using two engine impulses. The perihelion of the transfer orbit is at the same distance from the Sun as the radius of one planet's orbit, and the aphelion is at the other. The two rocket burns change the spacecraft's path from one circular orbit to the transfer orbit, and later to the other circular orbit. This maneuver was named after Walter Hohmann.

Elliptic orbit: An orbit with an eccentricity greater than 0 and less than 1 whose orbit traces the path of an ellipse.

Geosynchronous transfer orbit: An elliptic orbit where the perigee is at the altitude of a Low Earth orbit (LEO) and the apogee at the altitude of a geosynchronous orbit.

Geostationary transfer orbit: An elliptic orbit where the perigee is at the altitude of a Low Earth orbit (LEO) and the apogee at the altitude of a geostationary orbit.

Molniya orbit: A highly elliptic orbit with an inclination of 63.4° and an orbital period of half a sidereal day (roughly 12 hours). Such a satellite spends most of its time over two designated areas of the planet (specifically Russia and the United States).

Tundra orbit: A highly elliptic orbit with inclination of 63.4° and orbital period of one sidereal day (roughly 24 hours). Such a satellite spends most of its time over a single designated area of the planet.


Synchronous classifications

Synchronous orbit: An orbit where the satellite has an orbital period equal to the average rotational period of the body being orbited (Earth's is 23 hours, 56 minutes, 4.091 seconds) and in the same direction of rotation as that body. To a ground observer such a satellite would trace an analemma (figure 8) in the sky.

Semi-synchronous orbit (SSO): An orbit with an altitude of approximately 20,200 km (12,600 mi) and an orbital period equal to one-half of the average rotational period of the body being orbited (for Earth, approximately 12 hours).

Geosynchronous orbit (GSO): Orbits with an altitude of approximately 35,786 km (22,236 mi). Such a satellite would trace an analemma (figure 8) in the sky.

Geostationary orbit (GEO): A geosynchronous orbit with an inclination of zero. To an observer on the ground this satellite would appear as a fixed point in the sky.

Clarke orbit: Another name for a geostationary orbit. Named after scientist and writer Arthur C. Clarke.

Supersynchronous orbit: A disposal/storage orbit above GSO/GEO; satellites in it will drift west. Also a synonym for disposal orbit.

Subsynchronous orbit: A drift orbit close to but below GSO/GEO; satellites in it will drift east.

Graveyard orbit: An orbit a few hundred kilometers above geosynchronous that satellites are moved into at the end of their operation.

Disposal orbit: A synonym for graveyard orbit.

Junk orbit: A synonym for graveyard orbit.

Areosynchronous orbit: A synchronous orbit around the planet Mars with an orbital period equal in length to Mars' sidereal day, 24.6229 hours.

Areostationary orbit (ASO): A circular areosynchronous orbit on the equatorial plane, about 17,000 km (10,557 mi) above the surface. To an observer on the ground this satellite would appear as a fixed point in the sky.

Heliosynchronous orbit: A heliocentric orbit about the Sun where the satellite's orbital period matches the Sun's period of rotation. These orbits occur at a radius of approximately 24.36 Gm (0.1628 AU) around the Sun, a little less than half of the orbital radius of Mercury.


Special classifications

Sun-synchronous orbit: An orbit which combines altitude and inclination in such a way that the satellite passes over any given point of the planet's surface at the same local solar time. Such an orbit can place a satellite in constant sunlight and is useful for imaging, spy, and weather satellites.

Moon orbit: The orbital characteristics of Earth's Moon. Average altitude of 384,403 kilometers (238,857 mi), elliptical–inclined orbit.

Pseudo-orbit classifications

Horseshoe orbit: An orbit that appears to a ground observer to be orbiting a certain planet but is actually in co-orbit with the planet. See asteroids 3753 (Cruithne) and 2002 AA29.

Exo-orbit: A maneuver where a spacecraft approaches the height of orbit but lacks the velocity to sustain it.

Suborbital spaceflight: A synonym for exo-orbit.

Lunar transfer orbit (LTO)

Prograde orbit: An orbit with an inclination of less than 90°. Or rather, an orbit that is in the same direction as the rotation of the primary.

Retrograde orbit: An orbit with an inclination of more than 90°. Or rather, an orbit counter to the direction of rotation of the planet. Apart from those in sun-synchronous orbit, few satellites are launched into retrograde orbit because the quantity of fuel required to launch them is much greater than for a prograde orbit. This is because when the rocket starts out on the ground, it already has an eastward component of velocity equal to the rotational velocity of the planet at its launch latitude.

Halo orbit and Lissajous orbit: Orbits "around" Lagrangian points.

Satellite Subsystems

The satellite's functional versatility is embedded within its technical components and its operating characteristics. Looking at the "anatomy" of a typical satellite, one discovers two modules. Note that some novel architectural concepts, such as fractionated spacecraft, somewhat upset this taxonomy.

Spacecraft bus or service module

This bus module consists of the following subsystems:

The Structural Subsystem


The structural subsystem provides the mechanical base structure with adequate stiffness to withstand the stresses and vibrations experienced during launch, maintains structural integrity and stability while on station in orbit, and shields the satellite from extreme temperature changes and micro-meteorite damage.

The Telemetry Subsystem (aka Command and Data Handling, C&DH)

The telemetry subsystem monitors the on-board equipment operations, transmits equipment operation data to the earth control station, and receives the earth control station's commands to perform equipment operation adjustments.

The Power Subsystem

The power subsystem consists of solar panels to convert solar energy into electrical power, regulation and distribution functions, and batteries that store power and supply the satellite when it passes into the Earth's shadow. Nuclear power sources (radioisotope thermoelectric generators) have also been used in several successful satellite programs, including the Nimbus program (1964–1978).

The Thermal Control Subsystem

The thermal control subsystem helps protect electronic equipment from extreme temperatures due to intense sunlight or the lack of sun exposure on different sides of the satellite's body (e.g. by means of optical solar reflectors).

The Attitude and Orbit Control Subsystem

The attitude and orbit control subsystem consists of sensors to measure vehicle orientation, control laws embedded in the flight software, and actuators (reaction wheels, thrusters) that apply the torques and forces needed to re-orient the vehicle to a desired attitude, keep the satellite in the correct orbital position, and keep antennas pointed in the right directions.

Communication payload

The second major module is the communication payload, which is made up of transponders. A transponder is capable of:

Receiving uplinked radio signals from earth satellite transmission stations (antennas).

Amplifying received radio signals


Sorting the input signals and directing the output signals through input/output signal multiplexers to the proper downlink antennas for retransmission to earth satellite receiving stations (antennas).

End of Life

When satellites reach the end of their mission, satellite operators have the option of de-orbiting the satellite, leaving the satellite in its current orbit, or moving the satellite to a graveyard orbit. Historically, due to budgetary constraints at the beginning of satellite missions, satellites were rarely designed to be de-orbited. One example of this practice is the satellite Vanguard 1. Launched in 1958, Vanguard 1, the fourth man-made satellite put into geocentric orbit, was still in orbit as of August 2009.

Instead of being de-orbited, most satellites are either left in their current orbit or moved to a graveyard orbit. As of 2002, the FCC requires all geostationary satellites to commit, prior to launch, to moving to a graveyard orbit at the end of their operational life. In cases of uncontrolled de-orbiting, the major variable is the solar flux; the minor variables are the components and form factors of the satellite itself and the gravitational perturbations generated by the Sun and the Moon (as well as those exerted by large mountain ranges, whether above or below sea level). The nominal breakup altitude due to aerodynamic forces and temperatures is 78 km, with a range between 72 and 84 km. Solar panels, however, are destroyed before any other component, at altitudes between 90 and 95 km.

UNIT-II

Image Interpretation

Deriving useful spatial information from images is the task of image interpretation. It includes:

• detection: searching for targets, such as hot spots in mechanical and electrical facilities or white spots in x-ray images. This procedure is often used as the first step of image interpretation.

• identification: recognition of a certain target. A simple example is to identify vegetation types, soil types, rock types and water bodies. The higher the spatial/spectral resolution of an image, the more detail we can derive from it.

• delineation: outlining the recognized target for mapping purposes. Identification and delineation combined are used to map certain subjects. If the whole image is processed by these two procedures, we call it image classification.

• enumeration: counting certain phenomena in the image, based on detection and identification. For example, in order to estimate the household income of the population, we can count the number of various residential units.


• mensuration: measuring the area, volume, amount or length of a certain target from an image, as in the sketch below. This often involves all the procedures mentioned above. Simple examples include measuring the length of a river and the acreage of a specific land-cover class. More complicated examples include the estimation of timber volume, river discharge, crop productivity, river basin radiation and evapotranspiration.
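As a minimal illustration of mensuration on a digital image, the sketch below estimates the area of one land-cover class by counting pixels in a hypothetical classified raster with a known (Landsat-like) pixel size:

```python
import numpy as np

# Hypothetical thematic map; each pixel covers 30 m x 30 m.
classified = np.array([[1, 1, 2],
                       [1, 2, 2],
                       [3, 3, 2]])
PIXEL_SIZE_M = 30.0
WATER_CLASS = 2

pixel_area_m2 = PIXEL_SIZE_M ** 2
class_pixels = int(np.count_nonzero(classified == WATER_CLASS))
area_ha = class_pixels * pixel_area_m2 / 10_000  # 1 ha = 10,000 m^2
print(f"{class_pixels} pixels -> {area_ha:.2f} ha")  # 4 pixels -> 0.36 ha
```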

In order to do a good job in image interpretation, and in later digital image analysis, one has to be familiar with the subject under investigation, the study area, and the remote sensing systems available. Usually, a combined team of subject specialists and remote sensing image analysis specialists is required for a relatively large image interpretation task.

Depending on the facilities available, an interpreter might work with images in raw, corrected or enhanced form. Correction and enhancement are usually done digitally.

Elements on which image interpretation is based

• Image tone, grey level, or multispectral grey-level vector

Human eyes can differentiate over 1,000 colours but only about 16 grey levels; therefore, colour images are preferred in image interpretation. One difficulty is the use of multispectral images with a dimensionality of over 3. In order to make use of all the information available in each band, one has to somehow reduce the image dimensionality (a sketch of one common approach follows this list).

• Image texture

The spatial variation of image tones. Texture is an important clue in image interpretation, and it is very easy for human interpreters to include it in their mental process. Most texture patterns appear irregular on an image.

• Pattern

The regular arrangement of ground objects, for example a residential area on an aerial photograph or mountains in regular arrangement on a satellite image.

• Association

A specific object co-occurring with another object. Some examples of association are an outdoor swimming pool associated with a recreation center and a playground associated with a school.

• Shadow


Object shadow is very useful when the phenomena under study have vertical variation. Examples include trees, high buildings, mountains, etc.

• Shape

Agricultural fields and human-built structures have regular shapes, which can be used to identify various targets.

• Size

The relative size of buildings can tell us about the type of land use, while the relative sizes of tree crowns can tell us about the approximate age of trees.

• Site

Broadleaf trees are distributed in lower, warmer valleys, while coniferous trees tend to be distributed at higher elevations. Location is therefore used in image interpretation.
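The dimensionality reduction mentioned under image tone is commonly done with a principal components transform. The following minimal sketch reduces a hypothetical six-band image to three components that could be displayed as an RGB composite; a real workflow would use calibrated bands rather than random stand-in data:

```python
import numpy as np

rng = np.random.default_rng(0)
bands, rows, cols = 6, 100, 100
image = rng.random((bands, rows, cols))   # stand-in for real bands

pixels = image.reshape(bands, -1).T       # (n_pixels, n_bands)
pixels = pixels - pixels.mean(axis=0)     # centre each band
cov = np.cov(pixels, rowvar=False)        # band covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
top3 = eigvecs[:, ::-1][:, :3]            # three strongest components
pca_image = (pixels @ top3).T.reshape(3, rows, cols)
print(pca_image.shape)                    # (3, 100, 100)
```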

Image interpretation strategies

- Direct recognition

Identification of targets, such as land-cover classification. (Land cover is the physical evidence of the earth's surface.)

- Indirect interpretation

To map something that is not directly observable in the image. This is used to classify land-use types (Gong and Howarth, 1992b). Land use is the human activity on a piece of land. It is closely related to land-cover types; for example, a residential land-use type is composed of roof cover, lawn, trees and paved surfaces.

- From known to unknown

Interpret the areas with which the interpreter is familiar first, then the areas with which the interpreter is less familiar (Chen et al., 1989). This can be assisted by field observation.

- From direct to indirect

In order to obtain forest volume, one might have to determine what is observable in the image, such as tree canopies and shadows; the volume can then be derived. We can also estimate the depth of permafrost using surface-cover information (Peddle, 1991).


- Use of collateral information

Census data, topographical maps and other thematic maps may all be useful during image interpretation.

Principles of Image Interpretation

Strategy for Image Interpretation and Differential Diagnosis

This section is included to aid the beginning surgeon or oncologist in developing a basic strategy for image interpretation. Normally, the radiologist chooses and supervises the appropriate imaging study, evaluates and interprets the images, and communicates its significance to the referring physician. However, frequent dialogue between the referring physician and the radiologist will significantly improve interpretation of the imaging study. Accurately interpreting an imaging study of the head and neck requires a systematic method of observation, knowledge of the complex anatomy and pathophysiology, and an understanding of imaging principles. The differential diagnosis of lesions of the head and neck requires a systematic approach as well. One such diagnostic imaging process is summarized here:

1. Obtain clinical data: age, sex, history, physical findings.

2. Survey the films for all …

4. Visual Image Interpretation

Virtually all people perceive their environment visually. This experience is also used to interpret images (in 2D) as well as 3-dimensional structures and specimens.

The visual interpretation of satellite images is a complex process. It includes grasping the meaning of the image content, but it also goes beyond what can be seen on the image in order to recognise spatial and landscape patterns. This process can be roughly divided into 2 levels:

1. The recognition of objects such as streets, fields, rivers, etc. The quality of recognition depends on the expertise in image interpretation and visual perception.

2. A true interpretation can be reached through conclusions (drawn from previously recognized objects) about situations, conditions, etc. Subject-specific knowledge and expertise are crucial.

Interpretation Factors

The first step, the recognition of objects and structures, relates to the following saying: "I can recognize in an image only what I already know." Hence, previous knowledge and experience play a very large role in the interpretation process, as only through subject-specific knowledge can connections be made between the key underlying processes.

Both steps, recognition and interpretation, do not "mechanically" follow one another, but rather run through a repetitive process, where both steps heavily rely on one another (Albertz 2007).

The Practice of Image Interpretation

Acquisition of documents: satellite images, maps, etc.

Pre-interpretation: gross distribution, apportionment of the area, etc.

Partial land pre-investigation: Recognition of regional particularities

Detail interpretation: Core of the work: areas will be individually considered, objects will be recognised and compared to maps. Objects that are easily identifiable are addressed first.

Land Examination / Field Comparison: a method to double check uncertain interpretation results

Depiction of the results: through maps, map-like sketches, thematic mapping, etc.

5. Image Processing

Corrections

Image processing is what makes an image interpretable for a specific use. There are many methods, but only the most common are presented here.

Geometric Correction

The geometric correction of image data is an important prerequisite which must be performed prior to using images in geographic information systems (GIS) and other image processing programs. To process the data together with other data or maps in a GIS, all of the data must share the same reference system. Geometric correction, also called geo-referencing, is a procedure in which the content of an image is assigned a spatial coordinate system (for example, geographical latitude and longitude).

In geo-referencing, pass points (ground control points) must be found that can be recognized both in the image and in a reference coordinate system. Pass points are usually determined with a GPS receiver in the terrain or taken from maps. Distinct features such as street crossings, bridges over water, etc. can be identified and their coordinates noted. These points are then matched with the corresponding points of the not-yet geo-referenced satellite image, and from these correspondences a projection can be computed with the help of various additional procedures.
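As an illustration of how pass-point correspondences yield a projection, the sketch below fits a six-parameter affine transform from pixel to map coordinates by least squares. The pass-point coordinates are invented for the example; operational geo-referencing may use higher-order polynomials or rigorous sensor models:

```python
import numpy as np

# Pass points: (column, row) in the image and (X, Y) in map coordinates.
pixel = np.array([[100, 200], [850, 180], [400, 900], [820, 860]], float)
map_xy = np.array([[500_100.0, 4_200_800.0],
                   [500_850.0, 4_200_820.0],
                   [500_400.0, 4_200_100.0],
                   [500_820.0, 4_200_140.0]])

# Model: X = a*col + b*row + c and Y = d*col + e*row + f.
A = np.column_stack([pixel, np.ones(len(pixel))])
coeffs, *_ = np.linalg.lstsq(A, map_xy, rcond=None)  # (3, 2) matrix

def to_map(col, row):
    return np.array([col, row, 1.0]) @ coeffs

print(to_map(100, 200))  # recovers the first pass point (500100, 4200800)
```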


Radiometric Correction

System corrections are important when technical defects and deficiencies of the sensor and data-transfer systems lead to errors in the image data. Causes can include detector failure and/or power failure in detectors operating simultaneously.

In scanners such as Landsat TM and MSS, which use 6 and 15 scan rows respectively for the same spectral area, failures of individual scan rows occur. These errors always appear at the same intervals and create a characteristic striping (banding) in the image.
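A minimal sketch of one way such banding can be repaired, assuming the failed rows are known and simply averaging their valid neighbours (operational systems use more sophisticated radiometric models):

```python
import numpy as np

def repair_rows(band: np.ndarray, bad_rows) -> np.ndarray:
    """Replace failed scan rows by the mean of the neighbouring rows."""
    fixed = band.astype(float).copy()
    for r in bad_rows:
        above = fixed[r - 1] if r > 0 else fixed[r + 1]
        below = fixed[r + 1] if r < len(fixed) - 1 else fixed[r - 1]
        fixed[r] = (above + below) / 2.0
    return fixed

band = np.array([[10, 12, 11],
                 [ 0,  0,  0],   # failed detector row
                 [14, 16, 15]], float)
print(repair_rows(band, bad_rows=[1]))  # row 1 becomes [12, 14, 13]
```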

Image enhancement

Why do we enhance satellite images? Different methods of image enhancement are used to prepare the "raw data" so that the actual analysis of the images will be easier, faster and more reliable. The choice of method depends on the objective of the analysis. Two processes are presented below:

Histogram Stretches

In digital image processing, the statistics of an image are portrayed in a greyscale histogram (the frequency distribution of grey values).

The form of a histogram describes the contrast range of a satellite image and permits statements about its homogeneity. For example, a grey-scale distribution with one extreme, narrow maximum indicates small contrast, while a broadly stretched maximum indicates a larger contrast range.

A histogram stretch is a method of processing the individual values in an image in order to present the data with more contrast. Contrast stretching can be used in many different processes; in the simplest case, the input data are stretched over the entire range of 0–255, as in the sketch below.
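A minimal sketch of that linear stretch, assuming an 8-bit band held in a NumPy array; real implementations usually clip a small percentile at each end before stretching:

```python
import numpy as np

def linear_stretch(band: np.ndarray) -> np.ndarray:
    """Map the band's grey values linearly onto the full 0-255 range."""
    lo, hi = float(band.min()), float(band.max())
    if hi == lo:                 # flat image: nothing to stretch
        return np.zeros_like(band, dtype=np.uint8)
    return ((band - lo) / (hi - lo) * 255.0).astype(np.uint8)

band = np.array([[60, 70], [80, 90]], dtype=np.uint8)
print(linear_stretch(band))
# [[  0  85]
#  [170 255]]
```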

Filter

So-called filter operations change image structures by calculating grey-value relations among neighbouring pixels. The filters use coefficient matrices which cut a small area (matrix) out of the original image, centred on an individual image point; the filter matrix then has to "run" over the entire image, as in the sketch below.
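A minimal sketch of such a filter operation, using a 3 x 3 mean (low-pass) coefficient matrix. It assumes NumPy and SciPy are available; mode="nearest" repeats edge pixels where the matrix overhangs the image:

```python
import numpy as np
from scipy.ndimage import convolve

kernel = np.ones((3, 3)) / 9.0   # 3 x 3 mean-filter coefficient matrix
image = np.array([[10, 10, 10, 10],
                  [10, 90, 90, 10],
                  [10, 90, 90, 10],
                  [10, 10, 10, 10]], float)

# The kernel "runs" over the image; each output pixel becomes the
# average of its 3 x 3 neighbourhood, smoothing the bright block.
smoothed = convolve(image, kernel, mode="nearest")
print(smoothed.round(1))
```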

UNIT-IV

Geographic information system

A geographic information system (GIS) is a computer system designed to capture, store, manipulate, analyze, manage, and present all types of geographical data. The acronym GIS is sometimes used for geographical information science or geospatial information studies, referring to the academic discipline or career of working with geographic information systems; this is a large domain within the broader academic discipline of geoinformatics.

GIS can be thought of as a system that provides spatial data entry, management, retrieval, analysis, and visualization functions. The implementation of a GIS is often driven by jurisdictional (such as a city), purpose, or application requirements. Generally, a GIS implementation may be custom-designed for an organization. Hence, a GIS deployment developed for an application, jurisdiction, enterprise, or purpose may not be necessarily interoperable or compatible with a GIS that has been developed for some other application, jurisdiction, enterprise, or purpose. What goes beyond a GIS is a spatial data infrastructure, a concept that has no such restrictive boundaries.

In a general sense, the term describes any information system that integrates, stores, edits, analyzes, shares, and displays geographic information for informing decision making. GIS applications are tools that allow users to create interactive queries (user-created searches), analyze spatial information, edit data in maps, and present the results of all these operations; a minimal example of such a query follows below. Geographic information science is the science underlying geographic concepts, applications, and systems.
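As a minimal illustration of such a spatial query, the sketch below uses the Shapely library (one choice among many; any GIS package offers the equivalent) to ask which of several hypothetical facilities fall inside a district boundary:

```python
from shapely.geometry import Point, Polygon

# Hypothetical district boundary and facility locations (map units).
district = Polygon([(0, 0), (10, 0), (10, 8), (0, 8)])
facilities = {
    "school":   Point(3, 4),
    "hospital": Point(12, 5),  # outside the district
    "library":  Point(9, 7),
}

inside = [name for name, pt in facilities.items() if district.contains(pt)]
print(inside)         # ['school', 'library']
print(district.area)  # 80.0, in squared map units
```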

The first known use of the term "geographic information system" was by Roger Tomlinson in 1968, in his paper "A Geographic Information System for Regional Planning". Tomlinson is also acknowledged as the "father of GIS".

Application

GIS is a relatively broad term that can refer to a number of different technologies, processes, and methods. It is attached to many operations and has many applications related to engineering, planning, management, transport/logistics, insurance, telecommunications, and business. For that reason, GIS and location intelligence applications can be the foundation for many location-enabled services that rely on analysis, visualization and dissemination of results for collaborative decision making.

History and Development

One of the first applications of spatial analysis in epidemiology is the 1832 "Rapport sur la marche et les effets du choléra dans Paris et le département de la Seine". The French geographer Charles Picquet represented the 48 districts of the city of Paris by halftone color gradient according to the percentage of deaths by cholera per 1,000 inhabitants.

In 1854 John Snow depicted a cholera outbreak in London using points to represent the locations of some individual cases, possibly the earliest use of a geographic methodology in epidemiology. His study of the distribution of cholera led to the source of the disease, a contaminated water pump (the Broad Street Pump, whose handle he disconnected, thus terminating the outbreak).


While the basic elements of topography and theme existed previously in cartography, the John Snow map was unique, using cartographic methods not only to depict but also to analyze clusters of geographically dependent phenomena.

The early 20th century saw the development of photozincography, which allowed maps to be split into layers, for example one layer for vegetation and another for water. This was particularly used for printing contours – drawing these was a labour-intensive task but having them on a separate layer meant they could be worked on without the other layers to confuse the draughtsman. This work was originally drawn on glass plates but later plastic film was introduced, with the advantages of being lighter, using less storage space and being less brittle, among others. When all the layers were finished, they were combined into one image using a large process camera. Once color printing came in, the layers idea was also used for creating separate printing plates for each colour. While the use of layers much later became one of the main typical features of a contemporary GIS, the photographic process just described is not considered to be a GIS in itself – as the maps were just images with no database to link them to.

Computer hardware development spurred by nuclear weapon research led to general-purpose computer "mapping" applications by the early 1960s.

The year 1960 saw the development of the world's first true operational GIS in Ottawa, Ontario, Canada by the federal Department of Forestry and Rural Development. Developed by Dr. Roger Tomlinson, it was called the Canada Geographic Information System (CGIS) and was used to store, analyze, and manipulate data collected for the Canada Land Inventory – an effort to determine the land capability for rural Canada by mapping information about soils, agriculture, recreation, wildlife, waterfowl, forestry and land use at a scale of 1:50,000. A rating classification factor was also added to permit analysis.

CGIS was an improvement over "computer mapping" applications as it provided capabilities for overlay, measurement, and digitizing/scanning. It supported a national coordinate system that spanned the continent, coded lines as arcs having a true embedded topology and it stored the attribute and locational information in separate files. As a result of this, Tomlinson has become known as the "father of GIS", particularly for his use of overlays in promoting the spatial analysis of convergent geographic data.

CGIS lasted into the 1990s and built a large digital land resource database in Canada. It was developed as a mainframe-based system in support of federal and provincial resource planning and management. Its strength was continent-wide analysis of complex datasets. The CGIS was never available commercially.

In 1964 Howard T. Fisher formed the Laboratory for Computer Graphics and Spatial Analysis at the Harvard Graduate School of Design (LCGSA 1965–1991), where a number of important theoretical concepts in spatial data handling were developed, and which by the 1970s had distributed seminal software code and systems (such as SYMAP, GRID, and ODYSSEY, which served as sources for subsequent commercial development) to universities, research centers, and corporations worldwide.

By the early 1980s, M&S Computing (later Intergraph), along with Bentley Systems Incorporated for the CAD platform, Environmental Systems Research Institute (ESRI), CARIS (Computer Aided Resource Information System), MapInfo Corporation, and ERDAS (Earth Resource Data Analysis System) emerged as commercial vendors of GIS software, successfully incorporating many of the CGIS features, combining the first-generation approach to separation of spatial and attribute information with a second-generation approach to organizing attribute data into database structures. In parallel, the development of two public domain systems (MOSS and GRASS GIS) began in the late 1970s and early 1980s.

In 1986, Mapping Display and Analysis System (MIDAS), the first desktop GIS product emerged for the DOS operating system. This was renamed in 1990 to MapInfo for Windows when it was ported to the Microsoft Windows platform. This began the process of moving GIS from the research department into the business environment.

By the end of the 20th century, the rapid growth in various systems had been consolidated and standardized on relatively few platforms, and users were beginning to explore viewing GIS data over the Internet, requiring data format and transfer standards. More recently, a growing number of free, open-source GIS packages run on a range of operating systems and can be customized to perform specific tasks. Increasingly, geospatial data and mapping applications are being made available via the World Wide Web.

GIS Techniques and Technology

Modern GIS technologies use digital information, for which various digitized data creation methods are used. The most common method of data creation is digitization, where a hard copy map or survey plan is transferred into a digital medium through the use of a CAD program and geo-referencing capabilities. With the wide availability of ortho-rectified imagery (both from satellite and aerial sources), heads-up digitizing is becoming the main avenue through which geographic data is extracted. Heads-up digitizing involves the tracing of geographic data directly on top of the aerial imagery instead of by the traditional method of tracing the geographic form on a separate digitizing tablet (heads-down digitizing).

Relating information from different sources

GIS uses spatio-temporal (space-time) location as the key index variable for all other information. Just as a relational database containing text or numbers can relate many different tables using common key index variables, GIS can relate otherwise unrelated information by using location as the key index variable. The key is the location and/or extent in space-time.

Any variable that can be located spatially, and increasingly also temporally, can be referenced using a GIS. Locations or extents in Earth space–time may be recorded as dates/times of occurrence and as x, y, and z coordinates representing longitude, latitude, and elevation, respectively. These GIS coordinates may represent other quantified systems of temporo-spatial reference (for example, film frame number, stream gage station, highway mile-marker, surveyor benchmark, building address, street intersection, entrance gate, water depth sounding, POS or CAD drawing origin/units). Units applied to recorded temporal-spatial data can vary widely (even when using exactly the same data; see map projections), but all Earth-based spatial–temporal location and extent references should, ideally, be relatable to one another and ultimately to a "real" physical location or extent in space–time.

Related by accurate spatial information, an incredible variety of real-world and projected past or future data can be analyzed, interpreted, and represented to facilitate education and decision making. This key characteristic of GIS has begun to open new avenues of scientific inquiry into behaviors and patterns of real-world information that was previously considered unrelated.

GIS uncertainties

GIS accuracy depends upon the source data and how they are encoded and referenced. Land surveyors have been able to provide a high level of positional accuracy using GPS-derived positions. High-resolution digital terrain models, aerial imagery, powerful computers, and Web technology are changing the quality, utility, and expectations of GIS to serve society on a grand scale. Nevertheless, other source data, such as paper maps, also affect overall GIS accuracy; these may be of limited use in achieving high accuracy, since the aging of maps affects their dimensional stability.

In developing a digital topographic database for a GIS, topographic maps are the main source, while aerial photography and satellite images are additional sources for collecting data and identifying attributes that can be mapped in layers over a location at a given scale. The scale of a map and the type of geographical rendering are very important, since the information content depends mainly on the scale and the resulting locatability of the map's representations. To digitize a map, it must be checked against its theoretical dimensions, scanned into a raster format, and the resulting raster data must then be given its theoretical dimensions through a rubber-sheeting/warping process.

A quantitative analysis of maps brings accuracy issues into focus. The electronic and other equipment used to make measurements for GIS is far more precise than the machines of conventional map analysis. All geographical data are inherently inaccurate, and these inaccuracies will propagate through GIS operations in ways that are difficult to predict.

Data representation

GIS data represents real objects (such as roads, land use, elevation, trees, and waterways) with digital data. Real objects can be divided into two abstractions: discrete objects (e.g., a house) and continuous fields (such as rainfall amount or elevation). Traditionally, two broad methods are used to store data in a GIS for both kinds of abstractions: raster images and vector data, in which points, lines, and polygons represent mapped location attributes. A newer hybrid method of storing data is the point cloud, which combines three-dimensional points with RGB information at each point, returning a "3D color image". GIS thematic maps are thus becoming more and more realistic in visually describing what they set out to show or determine.

Data capture

Data capture—entering information into the system—consumes much of the time of GIS practitioners. There are a variety of methods used to enter data into a GIS where it is stored in a digital format.

Existing data printed on paper or PET film maps can be digitized or scanned to produce digital data. A digitizer produces vector data as an operator traces points, lines, and polygon boundaries from a map. Scanning a map results in raster data that could be further processed to produce vector data.

Survey data can be directly entered into a GIS from digital data collection systems on survey instruments using a technique called coordinate geometry (COGO). Positions from a global navigation satellite system (GNSS) such as the Global Positioning System can also be collected and then imported into a GIS. A current trend in data collection gives users the ability to use field computers to edit live data via wireless connections or disconnected editing sessions. This has been enhanced by the availability of low-cost mapping-grade GPS units with decimeter accuracy in real time, eliminating the need to post-process, import, and update the data in the office after fieldwork. Positions collected using a laser rangefinder can also be incorporated. New technologies also allow users to create maps and perform analysis directly in the field, making projects more efficient and mapping more accurate.

Remotely sensed data also plays an important role in data collection, and consists of data gathered by sensors attached to a platform. Sensors include cameras, digital scanners, and LIDAR, while platforms usually consist of aircraft and satellites. Recently, with the development of miniature UAVs, aerial data collection has become possible at much lower cost and on a more frequent basis. For example, the Aeryon Scout was used to map a 50-acre area with a ground sample distance of 1 inch (2.54 cm) in only 12 minutes.

The majority of digital data currently comes from photo interpretation of aerial photographs. Soft-copy workstations are used to digitize features directly from stereo pairs of digital photographs. These systems allow data to be captured in two and three dimensions, with elevations measured directly from a stereo pair using principles of photogrammetry. Analog aerial photos must be scanned before being entered into a soft-copy system; for imagery from high-quality digital cameras this step is skipped.

Satellite remote sensing provides another important source of spatial data. Here satellites use different sensor packages to passively measure the reflectance from parts of the electromagnetic spectrum or radio waves that were sent out from an active sensor such as radar. Remote sensing collects raster data that can be further processed using different bands to identify objects and classes of interest, such as land cover.

When data is captured, the user should consider whether it should be captured with relative accuracy or absolute accuracy, since this can influence not only how the information will be interpreted but also the cost of data capture.

After entering data into a GIS, the data usually requires editing to remove errors, or further processing. Vector data must be made "topologically correct" before it can be used for some advanced analysis. For example, in a road network, lines must connect with nodes at an intersection. Errors such as undershoots and overshoots must also be removed. For scanned maps, blemishes on the source map may need to be removed from the resulting raster; for example, a fleck of dirt might connect two lines that should not be connected.

Raster-to-vector translation

Data restructuring can be performed by a GIS to convert data into different formats. For example, a GIS may be used to convert a satellite image map to a vector structure by generating lines around all cells with the same classification, while determining the cell spatial relationships, such as adjacency or inclusion.
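
As a rough illustration of this vectorization step, the sketch below uses the open-source rasterio library (an assumed tool choice; the text does not name any particular software) to trace polygon outlines around contiguous cells that share a classification value. The raster values, cell size, and origin are hypothetical.

```python
import numpy as np
from rasterio import features
from affine import Affine

# A tiny classified raster: 0 = background, 1 = water.
classified = np.array([[0, 0, 1, 1],
                       [0, 1, 1, 1],
                       [0, 0, 1, 0],
                       [0, 0, 0, 0]], dtype=np.uint8)

# Map pixel coordinates to (hypothetical) world coordinates:
# 10 m cells, north-up, origin at (500000, 4000000).
transform = Affine(10.0, 0.0, 500000.0, 0.0, -10.0, 4000000.0)

# shapes() traces a polygon around each contiguous group of equal-valued
# cells, which is exactly the raster-to-vector step described above.
for geom, value in features.shapes(classified, transform=transform):
    print(value, geom["type"], len(geom["coordinates"][0]), "vertices")
```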

More advanced data processing can occur with image processing, a technique developed in the late 1960s by NASA and the private sector to provide contrast enhancement, false-colour rendering, and a variety of other techniques, including the use of two-dimensional Fourier transforms. Since digital data is collected and stored in various ways, different data sources may not be entirely compatible, so a GIS must be able to convert geographic data from one structure to another.

Projections, coordinate systems, and registration

The earth can be represented by various models, each of which may provide a different set of coordinates (e.g., latitude, longitude, elevation) for any given point on the Earth's surface. The simplest model is to assume the earth is a perfect sphere. As more measurements of the earth have accumulated, the models of the earth have become more sophisticated and more accurate. In fact, there are models called datums that apply to different areas of the earth to provide increased accuracy, like NAD83 for U.S. measurements, and the World Geodetic System for worldwide measurements.

Spatial analysis with GIS

GIS spatial analysis is a rapidly changing field, and GIS packages increasingly include analytical tools as standard built-in facilities, as optional toolsets, or as add-ins or 'analysts'. In many instances these are provided by the original software suppliers (commercial vendors or collaborative non-commercial development teams), whilst in other cases facilities have been developed and are provided by third parties. Furthermore, many products offer software development kits (SDKs), programming languages and language support, scripting facilities, and/or special interfaces for developing one's own analytical tools or variants. The website "Geospatial Analysis" and its associated book/ebook attempt to provide a reasonably comprehensive guide to the subject. This increased availability has created a new dimension to business intelligence termed "spatial intelligence" which, when openly delivered via intranets, democratizes access to geographic and social network data. Geospatial intelligence, based on GIS spatial analysis, has also become a key element for security. In a broad sense, much of this rests on the conversion of geographic information into vector representations or other digitised forms.

Slope and aspect

Slope can be defined as the steepness or gradient of a unit of terrain, usually measured as an angle in degrees or as a percentage. Aspect can be defined as the direction in which a unit of terrain faces, usually expressed in degrees from north. Slope, aspect, and surface curvature in terrain analysis are all derived from neighborhood operations using the elevation values of a cell's adjacent neighbours. Slope is a function of resolution, and the spatial resolution used to calculate slope and aspect should always be specified. Authors such as Skidmore, Jones, and Zhou and Liu have compared techniques for calculating slope and aspect.

The following method can be used to derive slope and aspect:

The elevation at a point or unit of terrain will have perpendicular tangents (slope) passing through the point, in an east-west and a north-south direction. These two tangents give two components, ∂z/∂x and ∂z/∂y, which can then be used to determine the overall direction of slope and the aspect of the slope. The gradient is defined as a vector quantity with components equal to the partial derivatives of the surface in the x and y directions.[27]

For methods that determine the east-west and north-south components over a 3x3 grid, the overall slope S and aspect A follow from these gradient components; in a common formulation, S = arctan(√((∂z/∂x)² + (∂z/∂y)²)) and A = atan2(∂z/∂y, −∂z/∂x), with the aspect then converted to a compass bearing.

Zhou and Liu describe an alternative algorithm for calculating aspect.
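
As a minimal sketch of these neighborhood operations, the following NumPy fragment estimates slope and aspect from a small DEM using central differences. The grid orientation, cell size, and aspect convention are assumptions; production systems often use Horn's method or the refinements compared by the authors above.

```python
import numpy as np

def slope_aspect(dem, cell_size):
    # Assumes a north-up grid: row index increases southward,
    # column index increases eastward.
    dz_dsouth, dz_deast = np.gradient(dem, cell_size)
    dz_dnorth = -dz_dsouth
    # Slope: arctan of the gradient magnitude, in degrees.
    slope = np.degrees(np.arctan(np.hypot(dz_deast, dz_dnorth)))
    # Aspect: compass bearing of steepest descent (0 = north, clockwise).
    aspect = np.degrees(np.arctan2(-dz_deast, -dz_dnorth)) % 360.0
    return slope, aspect

# A hypothetical 3x3 DEM sloping down toward the southeast, 30 m cells.
dem = np.array([[10.0, 10.0, 10.0],
                [10.0,  9.0,  8.0],
                [10.0,  8.0,  6.0]])
s, a = slope_aspect(dem, cell_size=30.0)
print(s[1, 1], a[1, 1])   # gentle slope, aspect near 135 degrees (SE)
```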

Data analysis

It is difficult to relate wetlands maps to rainfall amounts recorded at scattered points such as airports, television stations, and schools. A GIS, however, can be used to depict two- and three-dimensional characteristics of the Earth's surface, subsurface, and atmosphere from such information points. For example, a GIS can quickly generate a map with isopleth or contour lines that indicate differing amounts of rainfall. Such a map can be thought of as a rainfall contour map. Many sophisticated methods can estimate the characteristics of surfaces from a limited number of point measurements. A two-dimensional contour map created from the surface modeling of rainfall point measurements may be overlaid and analyzed with any other map in a GIS covering the same area. This GIS-derived map can then provide additional information, such as the viability of water power potential as a renewable energy source. Similarly, GIS can be used to compare other renewable energy resources to find the best geographic potential for a region.

Additionally, from a series of three-dimensional points, or a digital elevation model, isopleth lines representing elevation contours can be generated, along with slope analysis, shaded relief, and other elevation products. Watersheds can be easily defined for any given reach by computing all of the areas contiguous and uphill from any given point of interest. Similarly, an expected thalweg, the path along which surface water would travel in intermittent and permanent streams, can be computed from elevation data in the GIS.

Topological modeling

A GIS can recognize and analyze the spatial relationships that exist within digitally stored spatial data. These topological relationships allow complex spatial modeling and analysis to be performed. Topological relationships between geometric entities traditionally include adjacency (what adjoins what), containment (what encloses what), and proximity (how close something is to something else).

Geometric Networks

Geometric networks are linear networks of objects that can be used to represent interconnected features and to perform special spatial analysis on them. A geometric network is composed of edges, which are connected at junction points, similar to graphs in mathematics and computer science. Just like graphs, networks can have weights and flows assigned to their edges, which can be used to represent various interconnected features more accurately. Geometric networks are often used to model road networks and public utility networks, such as electric, gas, and water networks. Network modeling is also commonly employed in transportation planning, hydrology modeling, and infrastructure modeling.
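
A small sketch of the idea using the networkx graph library (an assumed, generic stand-in for a GIS geometric network): junctions become nodes, pipes become weighted edges, and a trace becomes a shortest-path query. All names and lengths below are hypothetical.

```python
import networkx as nx

# A miniature water network: junctions as nodes, pipes as weighted edges.
G = nx.Graph()
G.add_edge("source", "valve_1", weight=120.0)     # pipe length in metres
G.add_edge("valve_1", "junction_a", weight=80.0)
G.add_edge("valve_1", "junction_b", weight=60.0)
G.add_edge("junction_b", "hydrant", weight=40.0)

# Trace the shortest pipe route from the source to the hydrant.
route = nx.shortest_path(G, "source", "hydrant", weight="weight")
length = nx.shortest_path_length(G, "source", "hydrant", weight="weight")
print(route, length)
```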

Hydrological modeling

GIS hydrological models can provide a spatial element that other hydrological models lack, with the analysis of variables such as slope, aspect, and watershed or catchment area. Terrain analysis is fundamental to hydrology, since water always flows down a slope. As basic terrain analysis of a digital elevation model (DEM) involves calculation of slope and aspect, DEMs are very useful for hydrological analysis. Slope and aspect can then be used to determine direction of surface runoff, and hence flow accumulation for the formation of streams, rivers, and lakes. Areas of divergent flow can also give a clear indication of the boundaries of a catchment. Once a flow direction and accumulation matrix has been created, queries can be performed that show contributing or dispersal areas at a certain point. More detail can be added to the model, such as terrain roughness, vegetation types, and soil types, which can influence infiltration and evapotranspiration rates and hence surface flow. One of the main uses of hydrological modeling is in environmental contamination research.
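
A simplified sketch of deriving flow direction from a DEM, using the common D8 scheme (an assumption; the text does not commit to a particular algorithm): each interior cell drains to its steepest-descent neighbour.

```python
import numpy as np

def d8_flow_direction(dem):
    """Index of the steepest-descent neighbour for every interior cell (D8)."""
    rows, cols = dem.shape
    # Offsets for the 8 neighbours: E, SE, S, SW, W, NW, N, NE.
    offsets = [(0, 1), (1, 1), (1, 0), (1, -1),
               (0, -1), (-1, -1), (-1, 0), (-1, 1)]
    direction = np.full((rows, cols), -1)          # -1 = pit or edge cell
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            # Drop per unit distance to each neighbour (diagonals are farther).
            drops = [(dem[r, c] - dem[r + dr, c + dc]) / np.hypot(dr, dc)
                     for dr, dc in offsets]
            if max(drops) > 0:                     # water leaves this cell
                direction[r, c] = int(np.argmax(drops))
    return direction

# A hypothetical DEM draining toward the bottom of the grid.
dem = np.array([[5, 5, 5, 5],
                [5, 4, 3, 5],
                [5, 3, 2, 5],
                [5, 5, 1, 5]], dtype=float)
print(d8_flow_direction(dem))
```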

Cartographic modeling

The term "cartographic modeling" was probably coined by Dana Tomlin in his PhD dissertation and later in his book which has the term in the title. Cartographic modeling 30

Page 31: Gis and remote sensings

refers to a process where several thematic layers of the same area are produced, processed, and analyzed. Tomlin used raster layers, but the overlay method (see below) can be used more generally. Operations on map layers can be combined into algorithms, and eventually into simulation or optimization models.

Map overlay

The combination of several spatial datasets (points, lines, or polygons) creates a new output vector dataset, visually similar to stacking several maps of the same region. These overlays are similar to mathematical Venn diagram overlays. A union overlay combines the geographic features and attribute tables of both inputs into a single new output. An intersect overlay defines the area where both inputs overlap and retains a set of attribute fields for each. A symmetric difference overlay defines an output area that includes the total area of both inputs except for the overlapping area.
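
The sketch below illustrates these three overlay types with the geopandas library (an assumed tool choice); the two example layers and their attributes are hypothetical.

```python
import geopandas as gpd
from shapely.geometry import Polygon

# Two single-polygon layers that partially overlap.
zoning = gpd.GeoDataFrame(
    {"zone": ["residential"]},
    geometry=[Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])])
floodplain = gpd.GeoDataFrame(
    {"flood": ["100-year"]},
    geometry=[Polygon([(2, 2), (6, 2), (6, 6), (2, 6)])])

# union keeps everything; intersection keeps only the shared area;
# symmetric_difference keeps everything except the shared area.
for how in ("union", "intersection", "symmetric_difference"):
    result = gpd.overlay(zoning, floodplain, how=how)
    print(how, len(result), "feature(s), total area", result.geometry.area.sum())
```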

Data extraction is a GIS process similar to vector overlay, though it can be used in either vector or raster data analysis. Rather than combining the properties and features of both datasets, data extraction involves using a "clip" or "mask" to extract the features of one data set that fall within the spatial extent of another dataset.

In raster data analysis, the overlay of datasets is accomplished through a process known as "local operation on multiple rasters" or "map algebra," through a function that combines the values of each raster's matrix. This function may weigh some inputs more than others through use of an "index model" that reflects the influence of various factors upon a geographic phenomenon.
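
A minimal NumPy sketch of such a local operation: two hypothetical input rasters on the same grid are rescaled to scores and combined with assumed index-model weights.

```python
import numpy as np

# Two input rasters on the same grid: slope (degrees) and distance to
# roads (metres), both rescaled to 0-1 suitability scores.
slope = np.array([[5.0, 20.0], [35.0, 10.0]])
road_dist = np.array([[100.0, 400.0], [50.0, 250.0]])

slope_score = 1 - np.clip(slope / 45.0, 0, 1)         # flatter is better
access_score = 1 - np.clip(road_dist / 500.0, 0, 1)   # closer is better

# Index model: a weighted local operation over both rasters.
suitability = 0.6 * slope_score + 0.4 * access_score
print(suitability)
```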

Geostatistics

Geostatistics is a branch of statistics that deals with field data, spatial data with a continuous index. It provides methods to model spatial correlation, and predict values at arbitrary locations (interpolation).

When phenomena are measured, the observation methods dictate the accuracy of any subsequent analysis. Due to the nature of the data (e.g. traffic patterns in an urban environment; weather patterns over the Pacific Ocean), a constant or dynamic degree of precision is always lost in the measurement. This loss of precision is determined from the scale and distribution of the data collection.

To determine the statistical relevance of the analysis, an average is determined so that points (gradients) outside of any immediate measurement can be included to determine their predicted behavior. This is necessary because of the limitations of the applied statistics and data collection methods; interpolation is required to predict the behavior of particles, points, and locations that are not directly measurable.

Interpolation is the process by which a surface is created, usually a raster dataset, through the input of data collected at a number of sample points. There are several forms of interpolation, each of which treats the data differently depending on the properties of the data set. In comparing interpolation methods, the first consideration should be whether or not the source data will change (exact or approximate). Next is whether the method is subjective, a human interpretation, or objective. Then there is the nature of transitions between points: are they abrupt or gradual? Finally, there is whether a method is global (it uses the entire data set to form the model) or local (an algorithm is repeated for each small section of terrain).

Interpolation is a justified measurement because of the spatial autocorrelation principle, which recognizes that data collected at any position will have great similarity to, or influence on, locations within its immediate vicinity.

Digital elevation models, triangulated irregular networks, edge-finding algorithms, Thiessen polygons, Fourier analysis, (weighted) moving averages, inverse distance weighting, kriging, spline, and trend surface analysis are all mathematical methods to produce interpolative data.
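
As one concrete example from this list, here is a minimal inverse distance weighting (IDW) sketch in Python; the gauge locations, rainfall values, and power parameter are hypothetical.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse distance weighting: a local, gradual, approximate method."""
    xy_known = np.asarray(xy_known, dtype=float)
    values = np.asarray(values, dtype=float)
    out = []
    for q in np.asarray(xy_query, dtype=float):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d == 0):                 # query sits on a sample point
            out.append(values[np.argmin(d)])
            continue
        w = 1.0 / d ** power               # nearer samples weigh more
        out.append(np.sum(w * values) / np.sum(w))
    return np.array(out)

# Rainfall (mm) at four gauges; estimate at two ungauged locations.
gauges = [(0, 0), (10, 0), (0, 10), (10, 10)]
rain = [20.0, 30.0, 25.0, 45.0]
print(idw(gauges, rain, [(5, 5), (9, 9)]))
```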

Address geocoding

Geocoding is interpolating spatial locations (X,Y coordinates) from street addresses or any other spatially referenced data such as ZIP Codes, parcel lots and address locations. A reference theme is required to geocode individual addresses, such as a road centerline file with address ranges. The individual address locations have historically been interpolated, or estimated, by examining address ranges along a road segment. These are usually provided in the form of a table or database. The software will then place a dot approximately where that address belongs along the segment of centerline. For example, an address point of 500 will be at the midpoint of a line segment that starts with address 1 and ends with address 1,000. Geocoding can also be applied against actual parcel data, typically from municipal tax maps. In this case, the result of the geocoding will be an actually positioned space as opposed to an interpolated point. This approach is being increasingly used to provide more precise location information.
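
The address-range interpolation just described reduces to linear interpolation along the segment; a minimal sketch with hypothetical coordinates:

```python
def geocode_address(house_number, seg_start, seg_end, addr_from, addr_to):
    """Place an address along a centerline segment by linear interpolation."""
    frac = (house_number - addr_from) / (addr_to - addr_from)
    x = seg_start[0] + frac * (seg_end[0] - seg_start[0])
    y = seg_start[1] + frac * (seg_end[1] - seg_start[1])
    return (x, y)

# Address 500 on a segment ranging from 1 to 1,000 lands near the midpoint.
print(geocode_address(500, (0.0, 0.0), (100.0, 0.0), 1, 1000))
```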

Reverse geocoding

Reverse geocoding is the process of returning an estimated street address number as it relates to a given coordinate. For example, a user can click on a road centerline theme (thus providing a coordinate) and have information returned that reflects the estimated house number. This house number is interpolated from a range assigned to that road segment. If the user clicks at the midpoint of a segment that starts with address 1 and ends with 100, the returned value will be somewhere near 50. Note that reverse geocoding does not return actual addresses, only estimates of what should be there based on the predetermined range.
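
Reverse geocoding is the inverse calculation; the sketch below assumes the clicked point already lies on the segment (a real system would first project it onto the centerline).

```python
def reverse_geocode(point, seg_start, seg_end, addr_from, addr_to):
    """Estimate a house number from a position along a centerline segment."""
    seg_len = ((seg_end[0] - seg_start[0]) ** 2 +
               (seg_end[1] - seg_start[1]) ** 2) ** 0.5
    dist = ((point[0] - seg_start[0]) ** 2 +
            (point[1] - seg_start[1]) ** 2) ** 0.5
    return round(addr_from + (dist / seg_len) * (addr_to - addr_from))

# Clicking at the midpoint of a 1-100 segment returns roughly 50.
print(reverse_geocode((50.0, 0.0), (0.0, 0.0), (100.0, 0.0), 1, 100))
```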

Multi-criteria decision analysis

Coupled with GIS, multi-criteria decision analysis methods support decision-makers in analysing a set of alternative spatial solutions, such as the most likely ecological habitat for restoration, against multiple criteria, such as vegetation cover or roads. MCDA uses decision rules to aggregate the criteria, which allows the alternative solutions to be ranked or prioritized. GIS MCDA may reduce costs and time involved in identifying potential restoration sites.
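
A toy sketch of one common MCDA decision rule, the weighted linear combination; the sites, criteria, and weights below are hypothetical.

```python
import numpy as np

# Three candidate restoration sites scored 0-1 on two criteria.
#                vegetation cover  distance from roads
scores = np.array([[0.9, 0.3],     # site A
                   [0.6, 0.8],     # site B
                   [0.4, 0.9]])    # site C
weights = np.array([0.7, 0.3])     # decision rule: weighted linear combination

overall = scores @ weights          # aggregate each alternative's criteria
ranking = np.argsort(overall)[::-1] # best-scoring site first
print(overall, ranking)
```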

Data output and cartography

Cartography is the design and production of maps, or visual representations of spatial data. The vast majority of modern cartography is done with the help of computers, usually using GIS, though quality cartography may also be achieved by importing layers into a design program for refinement. Most GIS software gives the user substantial control over the appearance of the data.

Cartographic work serves two major functions:

First, it produces graphics on the screen or on paper that convey the results of analysis to the people who make decisions about resources. Wall maps and other graphics can be generated, allowing the viewer to visualize and thereby understand the results of analyses or simulations of potential events. Web Map Servers facilitate distribution of generated maps through web browsers using various implementations of web-based application programming interfaces (AJAX, Java, Flash, etc.).

Second, other database information can be generated for further analysis or use. An example would be a list of all addresses within one mile (1.6 km) of a toxic spill.

Graphic display techniques

Traditional maps are abstractions of the real world, a sampling of important elements portrayed on a sheet of paper with symbols to represent physical objects. People who use maps must interpret these symbols. Topographic maps show the shape of land surface with contour lines or with shaded relief.

Today, graphic display techniques such as shading based on altitude in a GIS can make relationships among map elements visible, heightening one's ability to extract and analyze information. For example, two types of data were combined in a GIS to produce a perspective view of a portion of San Mateo County, California.

The digital elevation model, consisting of surface elevations recorded on a 30-meter horizontal grid, shows high elevations as white and low elevations as black.

The accompanying Landsat Thematic Mapper image shows a false-color infrared image looking down at the same area in 30-meter pixels, or picture elements, for the same coordinate points, pixel by pixel, as the elevation information.

A GIS was used to register and combine the two images to render the three-dimensional perspective view looking down the San Andreas Fault, using the Thematic Mapper image pixels, but shaded using the elevation of the landforms. The GIS display depends on the viewing point of the observer and time of day of the display, to properly render the shadows created by the sun's rays at that latitude, longitude, and time of day.

An archeochrome is a new way of displaying spatial data. It is a thematic display on a 3D map applied to a specific building or a part of a building. It is suited to the visual display of heat-loss data.

Spatial ETL

Spatial ETL tools provide the data processing functionality of traditional Extract, Transform, Load (ETL) software, but with a primary focus on the ability to manage spatial data. They provide GIS users with the ability to translate data between different standards and proprietary formats, whilst geometrically transforming the data en route. These tools can come in the form of add-ins to existing wider-purpose software such as Microsoft Excel.

GIS data mining

GIS or spatial data mining is the application of data mining methods to spatial data. Data mining, the partially automated search for hidden patterns in large databases, offers great potential benefits for applied GIS-based decision making. Typical applications include environmental monitoring. A characteristic of such applications is that the spatial correlation between data measurements requires the use of specialized algorithms for more efficient data analysis.

GIS Developments

Many disciplines can benefit from GIS technology. An active GIS market has resulted in lower costs and continual improvements in the hardware and software components of GIS. These developments will, in turn, result in a much wider use of the technology throughout science, government, business, and industry, with applications including real estate, public health, crime mapping, national defense, sustainable development, natural resources, landscape architecture, archaeology, regional and community planning, transportation and logistics. GIS is also diverging into location-based services, which allows GPS-enabled mobile devices to display their location in relation to fixed assets (nearest restaurant, gas station, fire hydrant), mobile assets (friends, children, police car) or to relay their position back to a central server for display or other processing. These services continue to develop with the increased integration of GPS functionality with increasingly powerful mobile electronics (cell phones, PDAs, laptops).

Open Geospatial Consortium standards

The Open Geospatial Consortium (OGC) is an international industry consortium of 384 companies, government agencies, universities, and individuals participating in a consensus process to develop publicly available geoprocessing specifications. Open interfaces and protocols defined by Open GIS Specifications support interoperable solutions that "geo-enable" the Web, wireless and location-based services, and mainstream IT, and empower technology developers to make complex spatial information and services accessible and useful with all kinds of applications. Open Geospatial Consortium protocols include Web Map Service, and Web Feature Service.

GIS products are broken down by the OGC into two categories, based on how completely and accurately the software follows the OGC specifications.

Compliant Products are software products that comply with OGC's OpenGIS Specifications. When a product has been tested and certified as compliant through the OGC testing program, the product is automatically registered as "compliant" on the OGC website.

Implementing Products are software products that implement OpenGIS Specifications but have not yet passed a compliance test. Compliance tests are not available for all specifications. Developers can register their products as implementing draft or approved specifications, though OGC reserves the right to review and verify each entry.

Web mapping

In recent years there has been an explosion of mapping applications on the web such as Google Maps and Bing Maps. These websites give the public access to huge amounts of geographic data.

Some of them, like Google Maps and OpenLayers, expose an API that enables users to create custom applications. These toolkits commonly offer street maps, aerial/satellite imagery, geocoding, searches, and routing functionality. Web mapping has also uncovered the potential of crowdsourcing geodata in projects like OpenStreetMap, which is a collaborative project to create a free editable map of the world.

Global climate change, climate history program and prediction of its impact

Maps have traditionally been used to explore the Earth and to exploit its resources. GIS technology, as an expansion of cartographic science, has enhanced the efficiency and analytic power of traditional mapping. Now, as the scientific community recognizes the environmental consequences of anthropogenic activities influencing climate change, GIS technology is becoming an essential tool to understand the impacts of this change over time. GIS enables the combination of various sources of data with existing maps and up-to-date information from earth observation satellites along with the outputs of climate change models. This can help in understanding the effects of climate change on complex natural systems. One of the classic examples of this is the study of Arctic ice melting.

Adding the dimension of time

The condition of the Earth's surface, atmosphere, and subsurface can be examined by feeding satellite data into a GIS. GIS technology gives researchers the ability to examine the variations in Earth processes over days, months, and years. As an example, the changes in vegetation vigor through a growing season can be animated to determine when drought was most extensive in a particular region. The resulting graphic, a normalized difference vegetation index (NDVI), represents a rough measure of plant health. Working with two variables over time would then allow researchers to detect regional differences in the lag between a decline in rainfall and its effect on vegetation.
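
Assuming the index referred to is the familiar NDVI, it is computed per pixel from red and near-infrared reflectance; a small NumPy sketch with hypothetical values:

```python
import numpy as np

# Reflectance in the red and near-infrared bands for a 2x2 patch
# (e.g. from two channels of a sensor such as AVHRR).
red = np.array([[0.08, 0.10], [0.30, 0.25]])
nir = np.array([[0.50, 0.45], [0.32, 0.28]])

# NDVI: healthy vegetation reflects strongly in NIR and absorbs red.
ndvi = (nir - red) / (nir + red)
print(ndvi)   # values near +1 indicate vigorous vegetation
```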

GIS technology and the availability of digital data on regional and global scales enable such analyses. The satellite sensor output used to generate a vegetation graphic is produced, for example, by the Advanced Very High Resolution Radiometer (AVHRR). This sensor system detects the amounts of energy reflected from the Earth's surface across various bands of the spectrum for surface areas of about 1 square kilometer. The satellite sensor produces images of a particular location on the Earth twice a day. AVHRR and, more recently, the Moderate-Resolution Imaging Spectroradiometer (MODIS) are only two of many sensor systems used for Earth surface analysis. More sensors will follow, generating ever greater amounts of data.

In addition to the integration of time in environmental studies, GIS is also being explored for its ability to track and model the progress of humans throughout their daily routines. A concrete example of progress in this area is the recent release of time-specific population data by the U.S. Census. In this data set, the populations of cities are shown for daytime and evening hours, highlighting the pattern of concentration and dispersion generated by North American commuting patterns. The manipulation and generation of the data required to produce these data sets would not have been possible without GIS.

Using models to project the data held by a GIS forward in time has enabled planners to test policy decisions using spatial decision support systems.

CONCEPTS

MAPS AS A MODEL OF REALITY

The real world is too complex and unmanageable for direct analysis and understanding because of its countless variability and diversity.  It would be an impossible task to describe and locate each city, building, tree, blade of grass, and grain of sand.  How do we reduce the complexity of the Earth and its inhabitants, so we can portray them in a GIS database and on a map?  We do it by selecting the most relevant features (ignoring those we do not think are necessary for our specific research or project) and then generalizing the features we have selected.  Chapter 6, as well as later portions of this chapter, covers the selection and generalization process in more detail.  For now, let’s focus on features.

FEATURES

As described in Definition #2 (and Figure 1.2), conceptually, there are two parts of a GIS: a spatial or map component and an attribute or database component.  Features have these two components as well.  They are represented spatially on the map and their attributes, describing the features, are found in a data file.  These two parts are linked.  In other words, each map feature is linked to a record in a data file that describes the feature.  If you delete the feature’s attributes in the data file, the feature disappears on the map.  Conversely, if you delete the feature from the map, its attributes will disappear too.

Features are individual objects and events that are located (present, past or future) in space.  In Figure 1.2, a single parcel is an example of a feature.  Within the GIS industry, features have many synonyms including objects, events, activities, forms, observations, entities, and facilities.  Combined with other features of the same type (like all of the parcels in Figure 1.2), they are arranged in data files often called layers, coverages, or themes.  In this text, we use the terms feature and layer.

In Figure 1.4 below, three features (parcels, buildings, and street centerlines) of a typical city block are visible.  Every feature has a spatial location and a set of attributes.  Its spatial location describes not only its location but also its extent.  While “location” may be simple to grasp, it is difficult to locate features accurately and precisely.  Accuracy and precision are examined in Chapter 2, but, in brief, precision deals with the exactness of the measurement.  For example, some input devices, like GPS, have a certain error; they may be precise within a certain accuracy range if used correctly.  Accuracy is the degree of correspondence between the data and the real world.

Besides location, each feature usually has a set of descriptive attributes, which characterize the individual feature.  Each attribute takes the form of numbers or text (characters), and these values can be qualitative (i.e. low, medium, or high income) or quantitative (actual measurements).  Sometimes, features may also have a temporal dimension; a period in which the feature’s spatial or attribute data may change.

As an example of a feature, think of a streetlight.  Now imagine a map with the locations of all the streetlights in your neighborhood.  In Figure 1.5, streetlights are depicted as small circles.  Now think of all of the different characteristics that you could collect relating to each streetlight.  It could be a long list.  Streetlight attributes could include height, material, base material, presence of a light globe, globe material, color of pole, style, wattage and lumens of bulb, bulb type, bulb color, date of installation, maintenance report, and many others.  The necessary streetlight attributes depend on how you intend to use them.  For example, if you are solely interested in knowing the location of streetlights for personal safety reasons, you need to know location, pole heights, and bulb strength.  On the other hand, if you are interested in historic preservation, you are concerned with the streetlight’s location, style, and color.

Now continue thinking about feature attributes, by imagining the trees planted around your campus or office.  What attributes would a gardener want versus a botanist?  There would be differences because they have different needs.  You determine your study’s features and the attributes that define the features.

POINTS, LINES AND POLYGONS

Now think of the feature’s shape on a map.  Single or multiple paired coordinates (x, y) locate individual features in space and define their unique shape.  The x and y values of each coordinate pair are associated with real-world coordinate systems, which are discussed in Chapter 3.  For now, let’s focus on the shape of features, which take the generalized form of points, lines, and polygons.

Points

Points are zero-dimensional features (meaning that they possess only one x, y coordinate set) whose location is depicted by a small symbol.  What you represent as a point depends on your study.  Examples include streetlights, individual trees, wells, car accidents, crimes, telephone poles, earthquake epicenters, and even, depending on scale, buildings and cities.

Lines

Lines are formed from a sequence of at least two paired coordinates.  The first pair starts the line and the last ends it.  Two coordinate pairs form a straight line.  Additional paired coordinates can form vertices between the starting and ending points that allow the line to bend and curve.  Having length (which can be measured) but no width, a line feature is one-dimensional.  Again, what is represented as a line depends on your study, but street centerlines, utility lines, canals, railroad tracks, rivers, flight paths, and elevation contour lines usually form lines.

Polygons

Polygons are features that have boundaries.  Formed by a sequence of paired coordinates, polygons differ from lines in that the starting point is also the ending point.  This gives polygons both length and width, so the area contained within these two-dimensional features can be calculated.  What is represented as a polygon differs from study to study, but examples include lakes, forest stands, buildings, counties, countries, states, and census districts.
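
The three generalized shapes can be illustrated with the shapely geometry library (an assumed tool choice; all coordinates are hypothetical):

```python
from shapely.geometry import Point, LineString, Polygon

streetlight = Point(3.0, 4.0)                       # zero-dimensional
street = LineString([(0, 0), (5, 0), (8, 3)])       # one-dimensional
parcel = Polygon([(0, 0), (6, 0), (6, 4), (0, 4)])  # two-dimensional

print(street.length)               # lines have measurable length
print(parcel.area)                 # polygons have measurable area
print(streetlight.within(parcel))  # features relate to one another spatially
```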

TOPOLOGY

One of the most important concepts associated with GIS and other geotechnologies is topology.  As features are added to a GIS, they form spatial relationships—called topology—with each other (both with features within the same layer and with features in different layers).  You might find topology a confusing term partly because it has both spatial and mathematical properties.  For our purposes, you can define it as the spatial relationships among features.  It deals with where features are in relation to one another and how they are related to one another.  These relationships take the form of simple distance calculations from one feature to another, but also include the more complicated issues of adjacency and connectivity.

1. Distances between features.  The geographer Waldo Tobler created what some call the “first law of geography”, which states, “Everything is related to everything else, but near things are more related than distant things.” (1970, 236).  This type of topology looks at the spatial relationships of where features are located.  Consider the spatial locations of streets, bike lanes, sidewalks, and streetlights.  They are positioned to work together.  This is a type of topology; a relationship exists.  Notice the relationship between the fire hydrant, building, and street in Figure 1.7.

2. Adjacency.  Adjacency focuses on a single type of feature (like streets or buildings) and whether parts of two or more individual features are shared (or contained).  Think of an individual street segment, and how it is most likely physically connected to at least one additional street segment at one or both of its ends.  These adjacent street segments are in turn connected to additional segments, which in turn are connected to streets, forming a network.  When a single point or line (like a boundary between two parcels) is shared by at least two features, the spatial data file stores only a single point or a single line to prevent duplication that could lead to errors.  This topological relationship describes how features are related.

3. Connectivity.  Also focusing on how features are related, connectivity specifies the way features are linked in a network.  Even though a couple street segments may be physically connected in space, that does not mean that traffic can go in both directions.  These are topological relationships that you can specify.  Differing from adjacency, connectivity can include multiple feature types.  For instance, you can determine the flow of water through connected pipe and valve features.
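
These three kinds of relationships map directly onto standard spatial predicates; a small sketch using shapely (coordinates hypothetical):

```python
from shapely.geometry import Polygon, Point

parcel_a = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
parcel_b = Polygon([(10, 0), (20, 0), (20, 10), (10, 10)])
hydrant = Point(12, -3)

print(parcel_a.touches(parcel_b))       # adjacency: a shared boundary
print(parcel_a.distance(hydrant))       # distance between features
print(parcel_a.intersection(parcel_b))  # the shared edge is stored once
```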

DATA MODEL

Current GIS programs represent points, lines, and polygons differently.  There are two fundamental models: raster and vector.  Each model has its advantages and disadvantages, and neither is superior to the other in every situation.  One data model may fit certain types of data and applications better than the other.

Raster

A matrix of rows and columns, the raster data model covers sections of the Earth’s surface and represents features with cells or pixels.  Pixels are the building blocks of the raster data model, and they are usually uniformly square and of consistent size within each layer.  Each pixel represents a precise chunk of the Earth’s surface; the geographic position of any cell can be determined.  A specific attribute value, representing the condition of that specific portion of the Earth’s surface (see Figure 1.8), is associated with the pixel.  If you need more than one attribute to describe the area contained within the pixel (and most likely you will), you need a second layer.  The second raster layer gives you a second attribute.  A third gives you a third attribute, and so on.

Individual cells and groups of cells represent the features of the real world (Figure 1.8).  A point feature usually fills one cell while lines and polygons are constructed as a string or contiguous group of cells.  Raster layers fill space; they describe what occurs everywhere in the study area.  There are no blank spaces across the layer.  “Empty” areas simply get a “0” value, but every pixel gets a value.

Conceptually, the raster model is simple.  You take a portion of the Earth’s surface, divide it into cells, and give each cell an attribute that represents that area.  In the figure above, you might give each cell either a D (developed), P (park), or W (water).  For those cells with both park and water, you can give these cells either another code PW (for park and water) or make a judgment as to what covers the majority of the cell.  Another way to code these cells is with the percentage of the cell that is water.  If 40 percent of the cell is covered by water, the cell gets a value of 40.
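
A tiny NumPy sketch of the last coding scheme described above, with hypothetical percent-water values reclassified into categories:

```python
import numpy as np

# Percentage of each cell covered by water, coded directly as the cell value.
water_pct = np.array([[ 0,  0,  40],
                      [ 0, 10,  80],
                      [ 5, 60, 100]])

# Simplified majority-rule reclassification: W where water dominates, else P.
landcover = np.where(water_pct >= 50, "W", "P")
print(landcover)
print(water_pct[water_pct > 0].mean())   # mean coverage of the wet cells
```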

Vector

The vector data model uses discrete points and line segments to identify the locations of the Earth’s features.  Vector objects usually do not fill space like raster layers do; they depict where features occur, and the space around those features is empty.  Notice that there are white spaces in the vector model of Figure 1.8.  No white spaces exist in the raster model; it covers the entire area.

Vector features are located with x, y coordinates.  As described above, points are easy; they have one node (sometimes called a vertex).  A node is a location in space that helps define the shape of point, line, and polygon features.  As mentioned above, points have one coordinate pair that locates the feature in space.  Lines have at least two nodes (their end points).  Polygons have a minimum of three nodes to form an area.  Lines and polygons usually have many more nodes that help define the course of the line or the polygon’s area.

Contrasting with raster systems that record one attribute per layer, the vector data model can handle many attributes for each feature type.  Different software programs have varying ways of organizing vector digital files, but usually they have at least two files: one that stores spatial data and another that stores attributes.

The link between the spatial and attribute data files is made with a unique identifier.  Each feature on the map and its corresponding attributes has a unique identifier that links the map feature to its database attributes.  A type of unique identifier, a “key”, which links attribute files, is discussed in Chapter 3.

Raster versus Vector

Which is better?  Although GIS users have their own personal favorite data model, the question of which is “better” is an incomplete question.  There are advantages and disadvantages to both data models, so a better question is which is better for particular applications or datasets.  Some in the GIS industry use the slogan “Raster is faster, but vector is corrector.”  While this is a good starting point, it conceals the details.  Yes, your computer can process raster data quicker, but today computer processors are so fast the difference may be negligible.  Yes, vector output looks more accurate, but you can increase pixel resolution to something resembling vector resolution (this, however, greatly increases the database size).  The following are some of the advantages and disadvantages of the data models:

Raster advantages:

1. Easy to understand.  Conceptually, the raster data model is easy to understand.  It arranges data into columns and rows. Each pixel represents a piece of territory.

2. Processing speed.  Raster’s simple data structure and its uncomplicated math produce quick results.  For example, to calculate a polygon’s area, the computer takes the area contained within a single cell (which remains consistent throughout the layer) and multiplies it by the number of cells making up the polygon.  Likewise, many analysis processes, like overlay and buffering, are faster than in vector systems, which must use geometric equations.

3. Data form.  Remote sensing imagery is easily handled by raster-based systems because the imagery is provided in a raster format.

4. Some analysis functions (surface analysis and neighborhood functions) are only feasible in raster systems.  In addition, many new analysis functions appear in raster systems before migrating to vector systems because the math is simpler.

Raster disadvantages:

1. Appearance.  Cells “seem” to sacrifice too much detail (Figure 1.9).  This disadvantage is largely aesthetic and can be remedied by increasing the layer’s resolution.

2. Accuracy.  Sometimes accuracy is a problem due to the pixel resolution.  Imagine if you had a raster layer with a 30 by 30 meter resolution, and you wanted to locate traffic stop signs in that layer.  The entire 30 by 30 meter pixel would represent the single stop sign.  If you converted this raster layer to vector, it might place the stop sign at what was the pixel’s center.  Sometimes problems of accuracy (and appearance) can be resolved by selecting a smaller pixel resolution, but this has database consequences.

3. Large database.  As just described, accuracy and appearance can be enhanced by reducing pixel size (the area of the Earth’s surface covered by each cell), but this increases your layer’s file size.  By making the resolution 50 percent better (say from 30 to 15 meters), your layer grows four times.  Improve the resolution again by halving the pixel size (to 7.5 meters) and your layer will again increase by four times (16 times larger than the original 30-meter layer).  The layer quadruples because the resolution increases in both the x and y direction.

Vector advantages:

1. Intuitive.  In our minds, we picture features discretely rather than as made up of contiguous square cells.

2. Resolution.  If the locations of features are precise and accurate, you can maintain that spatial accuracy.  The features will not float somewhere within a cell.

3. Topology.  Although the raster data model preserves where features are located in relation to one another, it does not represent how they are related to one another.  This complex form of topology can be constructed in most vector systems, so you can track the connections in a municipal water network between pipe and valve features and thus track the direction and flow of water.

4. Storage.  Vector points, lines, and simple polygons use little disk space in comparison to raster systems.  This was once a major consideration when hard-disk storage was limited and expensive.

Vector disadvantages:

1. Geometry is complex.  The geometrical algorithms needed for polygon overlay and the calculation of distances, depending on the projection/coordinate system used, require experienced programmers.  This is not usually a problem for most GIS users since most functions are directly coded in the software.

2. Slow response times.  The vector data model can be slow to process complex datasets especially on low-end computers.

3. Less innovation.  Since the math is more complex, new analysis functions may not surface on vector systems until a couple of years after they have debuted on raster systems.

GIS DATA TYPES

The basic data type in a GIS reflects traditional data found on a map. Accordingly, GIS technology utilizes two basic types of data. These are:

Spatial data describes the absolute and relative location of geographic features.

Attribute data describes characteristics of the spatial features. These characteristics can be quantitative and/or qualitative in nature. Attribute data is often referred to as tabular data.

The coordinate location of a forestry stand would be spatial data, while the characteristics of that forestry stand, e.g. cover group, dominant species, crown closure, height, etc., would be attribute data. Other data types, in particular image and multimedia data, are becoming more prevalent with changing technology. Depending on its specific content, image data may be considered either spatial, e.g. photographs, animation, movies, etc., or attribute, e.g. sound, descriptions, narrations, etc.

SPATIAL DATA MODELS

Traditionally spatial data has been stored and presented in the form of a map. Three basic types of spatial data models have evolved for storing geographic data digitally. These are referred to as:

Vector;

Raster;

Image.

The following diagram reflects the two primary spatial data encoding techniques: vector and raster. Image data utilizes techniques very similar to raster data; however, it typically lacks the internal formats required for analysis and modeling of the data. Images reflect pictures or photographs of the landscape.

(Figure: a representation of the real world, showing the differences in how a vector GIS and a raster GIS represent it.)

ATTRIBUTE DATA MODELS

A separate data model is used to store and maintain attribute data for GIS software. These data models may exist internally within the GIS software, or may be reflected in external commercial Database Management Software (DBMS). A variety of different data models exist for the storage and management of attribute data. The most common are:

Tabular

Hierarchical

Network

Relational

Object Oriented

43

Page 44: Gis and remote sensings

The tabular model is the manner in which most early GIS software packages stored their attribute data. The next three models are those most commonly implemented in database management systems (DBMS). The object-oriented model is newer but rapidly gaining in popularity for some applications. A brief review of each model is provided.

Tabular Model

The simple tabular model stores attribute data as sequential data files with fixed formats (or comma-delimited for ASCII data) that define the location of attribute values in a predefined record structure. This type of data model is outdated in the GIS arena. It lacks any method of checking data integrity and is inefficient with respect to data storage, e.g. limited indexing capability for attributes or records.

Hierarchical Model

The hierarchical database organizes data in a tree structure. Data is structured downward in a hierarchy of tables. Any level in the hierarchy can have unlimited children, but any child can have only one parent. Hierarchical DBMSs have not gained any noticeable acceptance for use within GIS. They are oriented toward data sets that are very stable, where primary relationships among the data change infrequently or never at all. Also, the limitation on the number of parents that an element may have is not always conducive to modeling actual geographic phenomena.

Network Model

The network database organizes data in a network or plex structure. Any column in a plex structure can be linked to any other. Like a tree structure, a plex structure can be described in terms of parents and children. This model allows for children to have more than one parent.

Network DBMSs have not found much more acceptance in GIS than hierarchical DBMSs. They have the same flexibility limitations as hierarchical databases; however, the more powerful structure for representing data relationships allows more realistic modelling of geographic phenomena. However, network databases tend to become overly complex too easily, and in this regard it is easy to lose control and understanding of the relationships between elements.

Relational Model

The relational database organizes data in tables. Each table is identified by a unique table name and is organized by rows and columns. Each column within a table also has a unique name. Columns store the values for a specific attribute, e.g. cover group, tree height. Rows represent one record in the table. In a GIS, each row is usually linked to a separate spatial feature, e.g. a forestry stand. Accordingly, each row would be comprised of several columns, each column containing a specific value for that geographic feature. The following figure presents a sample table for forest inventory features. This table has 4 rows and 5 columns. The forest stand number would be the label for the spatial feature as well as the primary key for the database table. This serves as the linkage between the spatial definition of the feature and the attribute data for the feature.

UNIQUE STAND NUMBER   DOMINANT COVER GROUP   AVG. TREE HEIGHT   STAND SITE INDEX   STAND AGE
001                   DEC                    3                  G                  100
002                   DEC-CON                4                  M                  80
003                   DEC-CON                4                  M                  60
004                   CON                    4                  G                  120

Data is often stored in several tables. Tables can be joined or referenced to each other by common columns (relational fields). Usually the common column is an identification number for a selected geographic feature, e.g. a forestry stand polygon number. This identification number acts as the primary key for the table. The ability to join tables through use of a common column is the essence of the relational model. Such relational joins are usually ad hoc in nature and form the basis for querying in a relational GIS product. Unlike the previously discussed database types, relationships are implicit in the character of the data rather than explicit characteristics of the database setup.
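
Such a join is easy to demonstrate. The following minimal sketch uses Python's built-in sqlite3 module; the stands table mirrors the forest inventory example above, while the soils table and its column names are hypothetical, invented only to show a relational join on the common stand-number key.

import sqlite3

# In-memory database holding the forest inventory attributes shown above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stands (stand_no TEXT PRIMARY KEY, cover TEXT, "
            "avg_height INTEGER, site_index TEXT, age INTEGER)")
con.executemany("INSERT INTO stands VALUES (?, ?, ?, ?, ?)", [
    ("001", "DEC", 3, "G", 100),
    ("002", "DEC-CON", 4, "M", 80),
    ("003", "DEC-CON", 4, "M", 60),
    ("004", "CON", 4, "G", 120),
])

# A second, hypothetical table keyed on the same stand number.
con.execute("CREATE TABLE soils (stand_no TEXT PRIMARY KEY, drainage TEXT)")
con.executemany("INSERT INTO soils VALUES (?, ?)",
                [("001", "well"), ("002", "poor"),
                 ("003", "well"), ("004", "poor")])

# The ad hoc relational join: tables linked through the common stand_no column.
for row in con.execute("SELECT s.stand_no, s.cover, o.drainage "
                       "FROM stands s JOIN soils o ON s.stand_no = o.stand_no "
                       "WHERE s.age > 70"):
    print(row)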

The relational database model is the most widely accepted for managing the attributes of geographic data.

There are many different designs of DBMSs, but in GIS the relational design has been the most useful. In the relational design, data are stored conceptually as a collection of tables. Common fields in different tables are used to link them together. This surprisingly simple design has been so widely used primarily because of its flexibility and very wide deployment in applications both within and outside GIS.



In fact, most GIS software provides an internal relational data model, as well as support for commercial off-the-shelf (COTS) relational DBMSs. COTS DBMSs are referred to as external DBMSs. This approach supports users with small data sets, where an internal data model is sufficient, and customers with larger data sets who utilize a DBMS for other corporate data storage requirements. With an external DBMS the GIS software can simply connect to the database, and the user can make use of the inherent capabilities of the DBMS. External DBMSs tend to have much more extensive querying and data integrity capabilities than the GIS's internal relational model. The emergence and use of the external DBMS is a trend that has resulted in the proliferation of GIS technology into more traditional data processing environments.

The relational DBMS is attractive because of its:

simplicity in organization and data modelling;

flexibility - data can be manipulated in an ad hoc manner by joining tables;

efficiency of storage - by the proper design of data tables redundant data can be minimized; and

non-procedural nature - queries on a relational database do not need to take into account the internal organization of the data.

The relational DBMS has emerged as the dominant commercial data management tool in GIS implementation and application.

The following diagram illustrates the basic linkage between vector spatial data (topologic model) and attributes maintained in a relational database file.

Basic linkage between vector spatial data (topologic model) and attributes maintained in a relational database file (from Berry)

Object-Oriented Model

The object-oriented database model manages data through objects. An object is a collection of data elements and operations that together are considered a single entity. The object-oriented database is a relatively new model. This approach has the attraction that querying is very natural, as features can be bundled together with attributes at the database administrator's discretion. To date, only a few GIS packages are promoting the use of this attribute data model. However, initial impressions indicate that this approach may hold many operational benefits with respect to geographic data processing. Fulfilment of this promise with a commercial GIS product remains to be seen.


Data Structures

VECTOR DATA FORMATS

All spatial data models are approaches for storing the spatial location of geographic features in a database. Vector storage implies the use of vectors (directional lines) to represent a geographic feature. Vector data is characterized by the use of sequential points or vertices to define a linear segment. Each vertex consists of an X coordinate and a Y coordinate.

Vector lines are often referred to as arcs and consist of a string of vertices terminated by a node. A node is defined as a vertex that starts or ends an arc segment. Point features are defined by one coordinate pair, a vertex. Polygonal features are defined by a closed set of coordinate pairs, in which the first and last vertices coincide. In vector representation, the storage of the vertices for each feature is important, as well as the connectivity between features, e.g. the sharing of common vertices where features connect.
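
As an illustration, here is a minimal Python sketch (not from the original text) of how these primitives can be held as coordinate lists; the coordinate values are made up.

# A vertex is an (x, y) pair; an arc is a string of vertices; a polygon is
# a closed ring (first vertex repeated as the last).
point = (472_500.0, 5_637_200.0)                        # one coordinate pair

arc = [(0.0, 0.0), (1.5, 2.0), (3.0, 2.5), (5.0, 4.0)]  # the first and last
# vertices are the nodes that start and end the arc.

polygon = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0), (0.0, 0.0)]

def arc_length(vertices):
    """Sum straight-line segment lengths between successive vertices."""
    return sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (x1, y1), (x2, y2) in zip(vertices, vertices[1:]))

print(arc_length(arc))        # length of the line feature
print(arc_length(polygon))    # perimeter of the polygon feature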

Several different vector data models exist; however, only two are commonly used in GIS data storage.

The most popular method of retaining spatial relationships among features is to explicitly record adjacency information in what is known as the topologic data model. Topology is a mathematical concept that has its basis in the principles of feature adjacency and connectivity.

The topologic data structure is often referred to as an intelligent data structure because spatial relationships between geographic features are easily derived when using it. Primarily for this reason the topologic model is the dominant vector data structure currently used in GIS technology. Many of the complex data analysis functions cannot effectively be undertaken without a topologic vector data structure. Topology is reviewed in greater detail later on in the book.
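
A toy arc/node structure makes the point. In the Python sketch below (the arc, node, and polygon identifiers are hypothetical), each arc records its end nodes and the polygons on its left and right, so an adjacency query needs no geometric computation at all.

# Each arc in a topologic (arc/node) structure records its end nodes and
# the polygons on its left and right side; adjacency falls out directly.
arcs = {
    "a1": {"from": "n1", "to": "n2", "left": "P1", "right": "P2"},
    "a2": {"from": "n2", "to": "n3", "left": "P1", "right": "P3"},
    "a3": {"from": "n3", "to": "n1", "left": "P1", "right": None},  # None = outside
}

def neighbours(poly):
    """Polygons sharing an arc with `poly` -- no geometry examined at all."""
    out = set()
    for arc in arcs.values():
        if arc["left"] == poly and arc["right"] is not None:
            out.add(arc["right"])
        if arc["right"] == poly and arc["left"] is not None:
            out.add(arc["left"])
    return out

print(neighbours("P1"))   # {'P2', 'P3'}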

The secondary vector data structure that is common among GIS software is the computer-aided drafting (CAD) data structure. This structure consists of listing elements, not features: geographic features, e.g. points, lines, or areas, are defined simply by strings of vertices. There is considerable redundancy with this data model since the boundary segment between two polygons can be stored twice, once for each feature. The CAD structure emerged from the development of computer graphics systems without specific consideration of processing geographic features. Accordingly, since features, e.g. polygons, are self-contained and independent, questions about the adjacency of features can be difficult to answer. The CAD vector model lacks the definition of spatial relationships between features that is provided by the topologic data model.


RASTER DATA FORMATS

Raster data models incorporate the use of a grid-cell data structure where the geographic area is divided into cells identified by row and column. This data structure is commonly called raster. While the term raster implies a regularly spaced grid, other tessellated data structures do exist in grid-based GIS systems. In particular, the quadtree data structure has found some acceptance as an alternative raster data model.

The size of cells in a tessellated data structure is selected on the basis of the data accuracy and the resolution needed by the user. No explicit coding of geographic coordinates is required, since the coordinates are implicit in the layout of the cells. A raster data structure is in fact a matrix where any coordinate can be quickly calculated if the origin point and the size of the grid cells are known. Since grid-cells can be handled as two-dimensional arrays in computer encoding, many analytical operations are easy to program. This makes tessellated data structures a popular choice for many GIS software packages. Topology is not a relevant concept with tessellated structures since adjacency and connectivity are implicit in the location of a particular cell in the data matrix.
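
A short Python sketch of this implicit georeferencing follows, assuming an origin at the lower-left corner and square cells; the grid values and coordinates are made up.

import numpy as np

# A raster layer: values in a 2-D array plus an origin and a cell size.
grid = np.array([[1, 1, 2],
                 [1, 2, 2],
                 [3, 3, 2]])
origin_x, origin_y = 500_000.0, 4_200_000.0  # map coords of lower-left corner
cell = 30.0                                  # cell size in map units

def cell_centre(row, col, nrows=grid.shape[0]):
    """Map coordinates of a cell centre; rows count down from the top."""
    x = origin_x + (col + 0.5) * cell
    y = origin_y + (nrows - row - 0.5) * cell
    return x, y

def cell_at(x, y, nrows=grid.shape[0]):
    """Inverse: which cell contains the map coordinate (x, y)?"""
    col = int((x - origin_x) // cell)
    row = nrows - 1 - int((y - origin_y) // cell)
    return row, col

print(cell_centre(0, 2))                       # centre of the top-right cell
print(grid[cell_at(500_045.0, 4_200_015.0)])   # attribute value at a map point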

Several tessellated data structures exist; however, only two are commonly used in GISs. The most popular cell structure is the regularly spaced matrix or raster structure. This data structure involves a division of spatial data into regularly spaced cells. Each cell is of the same shape and size. Squares are most commonly utilized.

Since geographic features rarely coincide with regularly spaced cells, each cell must be classified according to the most common attribute within it. The problem of determining the proper resolution for a particular data layer can be a concern. If one selects too coarse a cell size then data may be overly generalized. If one selects too fine a cell size then too many cells may be created, resulting in a large data volume, slower processing times, and a more cumbersome data set. As well, too fine a cell size can imply accuracy greater than that of the original data capture process, and this may produce erroneous results during analysis.

As well, since most data is captured in a vector format, e.g. digitizing, data must be converted to the raster data structure. This is called vector-raster conversion. Most GIS software allows the user to define the raster grid (cell) size for vector-raster conversion. It is imperative that the original scale, e.g. accuracy, of the data be known prior to conversion. The accuracy of the data, often referred to as the resolution, should determine the cell size of the output raster map during conversion.
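
One simple way to rasterize a polygon, sketched below in Python, assigns each cell the polygon's attribute value wherever the cell centre falls inside the polygon. The polygon coordinates and attribute code are invented, and matplotlib's point-in-polygon test is used purely for brevity; production converters use more careful rules.

import numpy as np
from matplotlib.path import Path

# Hypothetical polygon and attribute code (e.g. a cover-type class).
polygon = Path([(2.0, 1.0), (8.0, 2.0), (7.0, 8.0), (3.0, 6.0)])
attribute_value = 5
nrows, ncols, cell = 10, 10, 1.0

# Cell-centre coordinates for the whole grid (origin (0, 0), rows from top).
cols, rows = np.meshgrid(np.arange(ncols), np.arange(nrows))
xs = (cols + 0.5) * cell
ys = (nrows - rows - 0.5) * cell
centres = np.column_stack([xs.ravel(), ys.ravel()])

# Cells whose centres fall inside the polygon get the attribute; 0 = NoData.
inside = polygon.contains_points(centres).reshape(nrows, ncols)
raster = np.where(inside, attribute_value, 0)
print(raster)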

Most raster based GIS software requires that the raster cell contain only a single discrete value. Accordingly, a data layer, e.g. forest inventory stands, may be broken down into a series of raster maps, each representing an attribute type, e.g. a species map, a height map, a density map, etc. These are often referred to as one-attribute maps. This is in contrast to most conventional vector data models that maintain data as multiple attribute maps, e.g. forest inventory polygons linked to a database table containing all attributes as columns.


This basic distinction of raster data storage provides the foundation for quantitative analysis techniques. This is often referred to as raster or map algebra. The use of raster data structures allows for sophisticated mathematical modelling processes, while vector based systems are often constrained by the capabilities and language of a relational DBMS.
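
A minimal map algebra sketch in Python with NumPy; the two one-attribute layers, the codes, and the threshold are invented for illustration.

import numpy as np

# Two co-registered one-attribute rasters over the same area (made-up data).
slope   = np.array([[ 5, 12, 30], [ 8, 25, 40], [ 2,  6, 15]])   # percent
species = np.array([[ 1,  1,  2], [ 1,  2,  2], [ 3,  3,  2]])   # cover codes

# Map algebra: cell-by-cell logic combining layers. Here, flag cells that
# carry species code 2 on slopes steeper than 20 percent.
risk = (species == 2) & (slope > 20)
print(risk.astype(int))

# Arithmetic works the same way, e.g. reclassifying a layer:
slope_class = np.digitize(slope, bins=[10, 20, 30])   # 0..3 slope classes
print(slope_class)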

This difference in analytical style is the major distinguishing factor between vector and raster based GIS software. It is also important to understand that the selection of a particular data structure can provide advantages during the analysis stage. For example, the vector data model does not handle continuous data, e.g. elevation, very well, while the raster data model is more ideally suited for this type of analysis. Conversely, the raster structure does not handle linear data analysis, e.g. shortest path, very well, while vector systems do. It is important for the user to understand that there are certain advantages and disadvantages to each data model.

The selection of a particular data model, vector or raster, is dependent on the source and type of data, as well as the intended use of the data. Certain analytical procedures require raster data while others are better suited to vector data.

IMAGE DATA

Image data is most often used to represent graphic or pictorial data. The term image inherently reflects a graphic representation, and in the GIS world, differs significantly from raster data. Most often, image data is used to store remotely sensed imagery, e.g. satellite scenes or orthophotos, or ancillary graphics such as photographs, scanned plan documents, etc. Image data is typically used in GIS systems as background display data (if the image has been rectified and georeferenced); or as a graphic attribute. Remote sensing software makes use of image data for image classification and processing. Typically, this data must be converted into a raster format (and perhaps vector) to be used analytically with the GIS.

Image data is typically stored in a variety of de facto industry standard proprietary formats. These often reflect the most popular image processing systems. Other graphic image formats, such as TIFF, GIF, PCX, etc., are used to store ancillary image data. Most GIS software will read such formats and allow you to display this data.

VECTOR AND RASTER - ADVANTAGES AND DISADVANTAGES

There are several advantages and disadvantages for using either the vector or raster data model to store spatial data. These are summarized below.

Vector Data Advantages:

Data can be represented at its original resolution and form without generalization.

Graphic output is usually more aesthetically pleasing (traditional cartographic representation);

Since most data, e.g. hard copy maps, is in vector form, no data conversion is required.

Accurate geographic location of data is maintained.

Allows for efficient encoding of topology, and as a result more efficient operations that require topological information, e.g. proximity, network analysis.

Disadvantages:

The location of each vertex needs to be stored explicitly.

For effective analysis, vector data must be converted into a topological structure. This is often processing intensive and usually requires extensive data cleaning. As well, topology is static, and any updating or editing of the vector data requires re-building of the topology.

Algorithms for manipulative and analysis functions are complex and may be processing intensive. Often, this inherently limits the functionality for large data sets, e.g. a large number of features.

Continuous data, such as elevation data, is not effectively represented in vector form. Usually substantial data generalization or interpolation is required for these data layers.

Spatial analysis and filtering within polygons is impossible.

Raster Data Advantages:

The geographic location of each cell is implied by its position in the cell matrix. Accordingly, other than an origin point, e.g. bottom left corner, no geographic coordinates are stored.

Due to the nature of the data storage technique, data analysis is usually easy to program and quick to perform.

The inherent nature of raster maps, e.g. one-attribute maps, is ideally suited for mathematical modeling and quantitative analysis.

Discrete data, e.g. forestry stands, is accommodated equally well as continuous data, e.g. elevation data, and this facilitates the integration of the two data types.

Grid-cell systems are very compatible with raster-based output devices, e.g. electrostatic plotters, graphic terminals.

Disadvantages:

The cell size determines the resolution at which the data is represented.

It is especially difficult to adequately represent linear features depending on the cell resolution. Accordingly, network linkages are difficult to establish.

Processing of associated attribute data may be cumbersome if large amounts of data exist. Raster maps inherently reflect only one attribute or characteristic for an area.

Since most input data is in vector form, data must undergo vector-to-raster conversion. Besides increased processing requirements this may introduce data integrity concerns due to generalization and choice of inappropriate cell size.

Most output maps from grid-cell systems do not conform to high-quality cartographic needs.


It is often difficult to compare or rate GIS software that use different data models. Some personal computer (PC) packages utilize vector structures for data input, editing, and display but convert to raster structures for any analysis. Other more comprehensive GIS offerings provide both integrated raster and vector analysis techniques. They allow users to select the data structure appropriate for the analysis requirements. Integrated raster and vector processing capabilities are most desirable and provide the greatest flexibility for data manipulation and analysis.

DATA INPUT TECHNIQUES

Since the input of attribute data is usually quite simple, the discussion of data input techniques will be limited to spatial data only. There is no single method of entering the spatial data into a GIS. Rather, there are several, mutually compatible methods that can be used singly or in combination.

The choice of data input method is governed largely by the application, the available budget, and the type and the complexity of data being input.

There are at least four basic procedures for inputting spatial data into a GIS. These are:

Manual digitizing;

Automatic scanning;

Entry of coordinates using coordinate geometry; and the

Conversion of existing digital data.

Digitizing

While considerable work has been done with newer technologies, the overwhelming majority of GIS spatial data entry is done by manual digitizing. A digitizer is an electronic device consisting of a table upon which the map or drawing is placed. The user traces the spatial features with a hand-held magnetic pen, often called a mouse or cursor. While tracing the features, the coordinates of selected points, e.g. vertices, are sent to the computer and stored. All points that are recorded are registered against positional control points, usually the map corners, that are keyed in by the user at the beginning of the digitizing session. The coordinates are recorded in a user defined coordinate system or map projection; latitude/longitude and UTM are most often used. The ability to adjust or transform data during digitizing from one projection to another is a desirable function of the GIS software. Numerous functional techniques exist to aid the operator in the digitizing process.


Digitizing can be done in a point mode, where single points are recorded one at a time, or in a stream mode, where a point is collected at regular intervals of time or distance, measured by an X and Y movement, e.g. every 3 metres. Digitizing can also be done blindly or with a graphics terminal. Blind digitizing means that the graphic result is not immediately viewable to the person digitizing. Most systems display the digitized linework on an accompanying graphics terminal as it is being digitized.
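
Stream-mode distance filtering is easy to sketch. The following Python function (hypothetical, not taken from any particular digitizer driver) keeps a point only once the cursor has moved a minimum distance from the last recorded point.

from math import hypot

def stream_filter(points, min_dist=3.0):
    """Keep a digitized point only once the cursor has moved at least
    `min_dist` map units from the last recorded point (stream mode)."""
    kept = [points[0]]
    for x, y in points[1:]:
        lx, ly = kept[-1]
        if hypot(x - lx, y - ly) >= min_dist:
            kept.append((x, y))
    return kept

# Made-up raw cursor positions; the jittery in-between points are dropped.
raw = [(0, 0), (1, 0.4), (2, 1), (3.5, 1.2), (6, 2), (6.4, 2.1), (9, 3)]
print(stream_filter(raw))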

Most GISs use a spaghetti mode of digitizing. This allows the user to simply digitize lines by indicating a start point and an end point. Data can be captured in point or stream mode. However, some systems do allow the user to capture the data in an arc/node topological data structure. The arc/node data structure requires that the digitizer identify nodes.

Data capture in an arc/node approach helps to build a topologic data structure immediately. This lessens the amount of post processing required to clean and build the topological definitions. However, most often digitizing with an arc/node approach does not negate the requirement for editing and cleaning of the digitized linework before a complete topological structure can be obtained.

The building of topology is primarily a post-digitizing process that is commonly executed in batch mode after data has been cleaned. To date, only a few commercial vector GIS software offerings have successfully exhibited the capability to build topology interactively while the user digitizes.

Manual digitizing has many advantages. These include:

Low capital cost, e.g. digitizing tables are cheap;

Low cost of labour;

Flexibility and adaptability to different data types and sources;

Easily taught in a short amount of time - an easily mastered skill;

Generally the quality of data is high;

Digitizing devices are very reliable and most often offer a greater precision than the data warrants; and

Ability to easily register and update existing data.

For raster based GIS software, data is still commonly digitized in a vector format and converted to a raster structure after the building of a clean topological structure. The procedure usually differs minimally from vector based software digitizing, other than that some raster systems allow the user to define the resolution size of the grid-cell. Conversion to the raster structure may occur on-the-fly or afterwards as a separate conversion process.

Automatic Scanning

A variety of scanning devices exist for the automatic capture of spatial data. While several different technical approaches exist in scanning technology, all have the advantage of being able to capture spatial features from a map at a rapid rate. However, as yet, scanning has not proven to be a viable alternative for most GIS implementations. Scanners are generally expensive to acquire and operate. As well, most scanning devices have limitations with respect to the capture of selected features, e.g. text and symbol recognition. Experience has shown that most scanned data requires a substantial amount of manual editing to create a clean data layer. Given these basic constraints, some other practical limitations of scanners should be identified. These include:

hard copy maps often cannot be taken to where a scanning device is available, e.g. most companies or agencies cannot afford their own scanning device and therefore must send their maps to a private firm for scanning;

hard copy data may not be in a form that is viable for effective scanning, e.g. maps are of poor quality, or are in poor condition;

geographic features may be too few on a single map to make scanning practical or cost-justifiable;

often on busy maps a scanner may be unable to distinguish the features to be captured from the surrounding graphic information, e.g. dense contours with labels;

with raster scanning it is difficult to read unique labels (text) for a geographic feature effectively; and

scanning is much more expensive than manual digitizing, considering all the cost/performance issues.

Consensus within the GIS community indicates that scanners work best when the information on a map is kept very clean, very simple, and uncluttered with graphic symbology.

The sheer cost of scanning usually eliminates the possibility of using scanning methods for data capture in most GIS implementations. Large data capture shops and government agencies are those most likely to be using scanning technology.

Currently, general consensus is that the quality of data captured from scanning devices is not substantial enough to justify the cost of using scanning technology. However, major breakthroughs are being made in the field, with scanning techniques and with capabilities to automatically clean and prepare scanned data for topological encoding. These include a variety of line following and text recognition techniques. Users should be aware that this technology has great potential in the years to come, particularly for larger GIS installations.

Coordinate Geometry

A third technique for the input of spatial data involves the calculation and entry of coordinates using coordinate geometry (COGO) procedures. This involves entering, from survey data, the explicit measurement of features from some known monument. This input technique is obviously very costly and labour intensive. In fact, it is rarely used for natural resource applications in GIS. This method is useful for creating very precise cartographic definitions of property, and accordingly is more appropriate for land records management at the cadastral or municipal scale.

Conversion of Existing Digital Data

A fourth technique that is becoming increasingly popular for data input is the conversion of existing digital data. A variety of spatial data, including digital maps, are openly available from a wide range of government and private sources. The most common digital data to be used in a GIS is data from CAD systems. A number of data conversion programs exist, mostly from GIS software vendors, to transform data from CAD formats to a raster or topological GIS data format. Several ad hoc standards for data exchange have been established in the market place. These are supplemented by a number of government distribution formats that have been developed. Given the wide variety of data formats that exist, most GIS vendors have developed and provide data exchange/conversion software to go from their format to those considered common in the market place.

Most GIS software vendors also provide an ASCII data exchange format specific to their product, and a programming subroutine library that will allow users to write their own data conversion routines to fulfil their own specific needs. As digital data becomes more readily available this capability becomes a necessity for any GIS. Data conversion from existing digital data is not a problem for most technical persons in the GIS field. However, for smaller GIS installations who have limited access to a GIS analyst this can be a major stumbling block in getting a GIS operational. Government agencies are usually a good source for technical information on data conversion requirements.

Some of the data formats common to the GIS marketplace are listed below. Please note that most formats are only utilized for graphic data. Attribute data is usually handled as ASCII text files. Vendor names are supplied where appropriate.


IGDS - Interactive Graphics Design Software (Intergraph / Microstation)

This binary format is a standard in the turnkey CAD market and has become a de facto standard in Canada's mapping industry. It is a proprietary format, however most GIS software vendors provide DGN translators.

DLG - Digital Line Graph (US Geological Survey)

This ASCII format is used by the USGS as a distribution standard and consequently is well utilized in the United States. It is not used very much in Canada even though most software vendors provide two way conversion to DLG.

DXF - Drawing Exchange Format (Autocad)

This ASCII format is used primarily to convert to/from the Autocad drawing format and is a standard in the engineering discipline. Most GIS software vendors provide a DXF translator.

GENERATE - ARC/INFO Graphic Exchange Format

A generic ASCII format used by the ARC/INFO software to accommodate spatial data.

EXPORT - ARC/INFO Export Format

An exchange format that includes both graphic and attribute data. This format is intended for transferring ARC/INFO data from one hardware platform, or site, to another. It is also often used for archiving ARC/INFO data. This is not a published data format; however, some GIS and desktop mapping vendors provide translators. EXPORT format can come in either uncompressed, partially compressed, or fully compressed format.

A wide variety of other vendor-specific data formats exist within the mapping and GIS industry. In particular, most GIS software vendors have their own proprietary formats. However, almost all provide data conversion to/from the above formats. As well, most GIS software vendors will develop data conversion programs dependent on specific requests by customers. Potential purchasers of commercial GIS packages should determine and clearly identify their data conversion needs to the software vendor prior to purchase.


2. GIS Operations and Functions

a. Data Input
b. Data Storage
c. Data Manipulation and Processing
d. Data Output

a. Data Input

Data input covers the range of operations by which spatial data from maps, remote sensors, and other sources are transformed into a digital format. Among the different devices commonly used for this operation are keyboards, digitizers, scanners, CCTs (computer compatible tapes), and interactive terminals or visual display units (VDUs). Given its relatively low cost, efficiency, and ease of operation, digitizing constitutes the best data input option for development planning purposes.

Two different types of data must be entered into the GIS: geographic references and attributes. Geographic reference data are the coordinates (either in terms of latitude and longitude or columns and rows) which give the location of the information being entered. Attribute data associate a numerical code with each cell or set of coordinates and with each variable, either to represent actual values (e.g., 200 mm of precipitation, 1,250 meters elevation) or to connote categorical data types (land uses, vegetation types, etc.). Data input routines, whether through manual keyboard entry, digitizing, or scanning, require a considerable amount of time.

b. Data Storage

Data storage refers to the way in which spatial data are structured and organized within the GIS according to their location, interrelationship, and attribute design. Computers permit large amounts of data to be stored, either on the computer's hard disk or on portable diskettes.

c. Data Manipulation and Processing

Data manipulation and processing are performed to obtain useful information from data previously entered into the system. Data manipulation embraces two types of operations: (1) operations needed to remove errors and update current data sets (editing); and (2) operations using analytical techniques to answer specific questions formulated by the user. The manipulation process can range from the simple overlay of two or more maps to a complex extraction of disparate pieces of information from a wide variety of sources.


d. Data Output

Data output refers to the display or presentation of data employing commonly used output formats that include maps, graphs, reports, tables, and charts, either as a hard-copy, as an image on the screen, or as a text file that can be carried into other software programs for further analysis.

GIS applications

Geographic information systems (GIS) (also known as geospatial information systems) are computer software and hardware systems that enable users to capture, store, analyse and manage spatially referenced data. GISs have transformed the way spatial (geographic) data, relationships and patterns in the world can be interactively queried, processed, analysed, mapped, modelled, visualised, and displayed for an increasingly large range of users, for a multitude of purposes.

GIS in environmental contamination

GIS in environmental contamination is the use of GIS software in mapping out the contaminants in soil and water using the spatial interpolation tools from GIS. Soil and water contamination by metals and other contaminants has become a major environmental problem following industrialization across many parts of the world. As a result, environmental agencies are placed in charge of remediating, monitoring, and mitigating soil contamination sites. GIS is used to monitor sites for metal contaminants in the soil; based on the GIS analysis, the highest risk sites are identified, and the majority of the remediation and monitoring effort is focused there. GIS is used in making spatial interpolations of contaminants in the soil and water. Spatial interpolation allows for a more efficient approach to remediation and monitoring of soil and water contaminants.
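
The text does not name a particular interpolator; inverse distance weighting (IDW) is one commonly used choice, sketched below in Python with made-up sample concentrations.

import numpy as np

# Sampled lead concentrations (x, y in metres; value in mg/kg) -- made-up data.
samples = np.array([[10.0, 10.0, 420.0],
                    [40.0, 12.0, 150.0],
                    [25.0, 40.0, 300.0],
                    [50.0, 45.0,  90.0]])

def idw(x, y, pts=samples, power=2.0):
    """Inverse-distance-weighted estimate of the concentration at (x, y)."""
    d = np.hypot(pts[:, 0] - x, pts[:, 1] - y)
    if d.min() < 1e-9:                      # exactly on a sample point
        return float(pts[d.argmin(), 2])
    w = 1.0 / d ** power
    return float((w * pts[:, 2]).sum() / w.sum())

# Interpolate onto a coarse grid to map the contamination surface.
surface = np.array([[idw(x, y) for x in range(0, 60, 10)]
                    for y in range(0, 60, 10)])
print(surface.round(0))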

GIS in Soil Contamination

Soil contamination from heavy elements can be found in urban environments, attributable to transportation and industry along with background levels (minerals leaching heavy elements through weathering). Also, some of the most contaminated soils are found around mines, such as those in Slovenia, Bosnia and Herzegovina, and the United States (Sulphur Bank Superfund Site, in California). In a study area, GIS is used for the analysis of the spatial relationships of the contaminants within the soil.

Global Positioning System

The Global Positioning System (GPS) is a space-based satellite navigation system that provides location and time information in all weather conditions, anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites.[1] The system provides critical capabilities to military, civil and commercial users around the world. It is maintained by the United States government and is freely accessible to anyone with a GPS receiver.

The GPS project was developed in 1973 to overcome the limitations of previous navigation systems, integrating ideas from several predecessors, including a number of classified engineering design studies from the 1960s. GPS was created and realized by the U.S. Department of Defense (DoD) and was originally run with 24 satellites. It became fully operational in 1995. Bradford Parkinson, Roger L. Easton, and Ivan A. Getting are credited with inventing it.

Advances in technology and new demands on the existing system have now led to efforts to modernize the GPS system and implement the next generation of GPS III satellites and Next Generation Operational Control System (OCX). Announcements from Vice President Al Gore and the White House in 1998 initiated these changes. In 2000, the U.S. Congress authorized the modernization effort, GPS III.

In addition to GPS, other systems are in use or under development. The Russian Global Navigation Satellite System (GLONASS) was developed contemporaneously with GPS, but suffered from incomplete coverage of the globe until the mid-2000s. There are also the planned European Union Galileo positioning system, India's Indian Regional Navigational Satellite System, and the Chinese Compass navigation system.

History

The design of GPS is based partly on similar ground-based radio-navigation systems, such as LORAN and the Decca Navigator, developed in the early 1940s and used by the British Royal Navy during World War II.

Predecessors

In 1956, the German-American physicist Friedwardt Winterberg proposed a test of general relativity (for time slowing in a strong gravitational field) using accurate atomic clocks placed in orbit inside artificial satellites. (Later, calculations using general relativity determined that the clocks on GPS satellites would be seen by Earth's observers to run 38 microseconds faster per day, and this was corrected for in the design of GPS.)

The Soviet Union launched the first man-made satellite, Sputnik, in 1957. Two American physicists, William Guier and George Weiffenbach, at Johns Hopkins University's Applied Physics Laboratory (APL), decided to monitor Sputnik's radio transmissions. Within hours they realized that, because of the Doppler effect, they could pinpoint where the satellite was along its orbit. The Director of the APL gave them access to the laboratory's UNIVAC to do the heavy calculations required. The next spring, Frank McClure, the deputy director of the APL, asked Guier and Weiffenbach to investigate the inverse problem: pinpointing the user's location given that of the satellite. (The Navy was developing the submarine-launched Polaris missile, which required them to know the submarine's location.) This led them and APL to develop the Transit system. In 1959, ARPA (renamed DARPA in 1972) also played a role in Transit.


The first satellite navigation system, Transit, used by the United States Navy, was first successfully tested in 1960. It used a constellation of five satellites and could provide a navigational fix approximately once per hour. In 1967, the U.S. Navy developed the Timation satellite that proved the ability to place accurate clocks in space, a technology required by GPS. In the 1970s, the ground-based Omega Navigation System, based on phase comparison of signal transmission from pairs of stations, became the first worldwide radio navigation system. Limitations of these systems drove the need for a more universal navigation solution with greater accuracy.

While there were wide needs for accurate navigation in military and civilian sectors, almost none of these was seen as justification for the billions of dollars it would cost in research, development, deployment, and operation of a constellation of navigation satellites. During the Cold War arms race, the nuclear threat to the existence of the United States was the one need that did justify this cost in the view of the United States Congress. This deterrent effect is why GPS was funded. It is also the reason for the ultra-secrecy at that time. The nuclear triad consisted of the United States Navy's submarine-launched ballistic missiles (SLBMs) along with United States Air Force (USAF) strategic bombers and intercontinental ballistic missiles (ICBMs). Considered vital to the nuclear-deterrence posture, accurate determination of the SLBM launch position was a force multiplier.

Precise navigation would enable United States submarines to get an accurate fix of their positions before they launched their SLBMs. The USAF, with two thirds of the nuclear triad, also had requirements for a more accurate and reliable navigation system. The Navy and Air Force were developing their own technologies in parallel to solve what was essentially the same problem. To increase the survivability of ICBMs, there was a proposal to use mobile launch platforms (such as Russian SS-24 and SS-25) and so the need to fix the launch position had similarity to the SLBM situation.

In 1960, the Air Force proposed a radio-navigation system called MOSAIC (MObile System for Accurate ICBM Control) that was essentially a 3-D LORAN. A follow-on study, Project 57, was conducted in 1963, and it was "in this study that the GPS concept was born". That same year, the concept was pursued as Project 621B, which had "many of the attributes that you now see in GPS" and promised increased accuracy for Air Force bombers as well as ICBMs. Updates from the Navy Transit system were too slow for the high speeds of Air Force operation. The Naval Research Laboratory continued advancements with their Timation (Time Navigation) satellites, first launched in 1967, with the third one in 1974 carrying the first atomic clock into orbit.

Another important predecessor to GPS came from a different branch of the United States military. In 1964, the United States Army orbited its first Sequential Collation of Range (SECOR) satellite used for geodetic surveying. The SECOR system included three ground-based transmitters at known locations that would send signals to the satellite transponder in orbit. A fourth ground-based station, at an undetermined position, could then use those signals to fix its location precisely. The last SECOR satellite was launched in 1969. Decades later, during the early years of GPS, civilian surveying became one of the first fields to make use of the new technology, because surveyors could reap the benefits of signals from the less-than-complete GPS constellation years before it was declared operational. GPS can be thought of as an evolution of the SECOR system in which the ground-based transmitters have been migrated into orbit.

Development

With these parallel developments in the 1960s, it was realized that a superior system could be developed by synthesizing the best technologies from 621B, Transit, Timation, and SECOR in a multi-service program.

During Labor Day weekend in 1973, a meeting of about twelve military officers at the Pentagon discussed the creation of a Defense Navigation Satellite System (DNSS). It was at this meeting that "the real synthesis that became GPS was created." Later that year, the DNSS program was named Navstar, or Navigation System Using Timing and Ranging. With the individual satellites being associated with the name Navstar (as with the predecessors Transit and Timation), a more fully encompassing name was used to identify the constellation of Navstar satellites: Navstar-GPS, later shortened simply to GPS. Ten "Block I" prototype satellites were successfully launched between 1978 and 1985 (an eleventh was destroyed in a launch failure).

After Korean Air Lines Flight 007, a Boeing 747 carrying 269 people, was shot down in 1983 after straying into the USSR's prohibited airspace,[22] in the vicinity of Sakhalin and Moneron Islands, President Ronald Reagan issued a directive making GPS freely available for civilian use, once it was sufficiently developed, as a common good. The first Block II satellite was launched in 1989, and the 24th satellite was launched in 1994. The GPS program cost to this point, including the costs of the satellite launches but not the cost of the user equipment, has been estimated at about US$5 billion (then-year dollars). Roger L. Easton is widely credited as the primary inventor of GPS.

Initially, the highest quality signal was reserved for military use, and the signal available for civilian use was intentionally degraded (Selective Availability). This changed with President Bill Clinton ordering Selective Availability to be turned off at midnight May 1, 2000, improving the precision of civilian GPS from 100 to 20 meters (328 to 66 ft). The executive order signed in 1996 to turn off Selective Availability in 2000 was proposed by the U.S. Secretary of Defense, William Perry, because of the widespread growth of differential GPS services to improve civilian accuracy and eliminate the U.S. military advantage. Moreover, the U.S. military was actively developing technologies to deny GPS service to potential adversaries on a regional basis.

Over the last decade, the U.S. has implemented several improvements to the GPS service, including new signals for civil use and increased accuracy and integrity for all users, all while maintaining compatibility with existing GPS equipment.

GPS modernization has now become an ongoing initiative to upgrade the Global Positioning System with new capabilities to meet growing military, civil, and commercial needs. The program is being implemented through a series of satellite acquisitions, including GPS Block III and the Next Generation Operational Control System (OCX). The U.S. Government continues to improve the GPS space and ground segments to increase performance and accuracy.


GPS is owned and operated by the United States Government as a national resource. The Department of Defense (DoD) is the steward of GPS. The Interagency GPS Executive Board (IGEB) oversaw GPS policy matters from 1996 to 2004. After that, the National Space-Based Positioning, Navigation and Timing Executive Committee was established by presidential directive in 2004 to advise and coordinate federal departments and agencies on matters concerning GPS and related systems. The executive committee is chaired jointly by the deputy secretaries of defense and transportation. Its membership includes equivalent-level officials from the Departments of State, Commerce, and Homeland Security, the Joint Chiefs of Staff, and NASA. Components of the Executive Office of the President participate as observers to the executive committee, and the FCC chairman participates as a liaison.

The DoD is required by law to "maintain a Standard Positioning Service (as defined in the federal radio navigation plan and the standard positioning service signal specification) that will be available on a continuous, worldwide basis," and "develop measures to prevent hostile use of GPS and its augmentations without unduly disrupting or degrading civilian uses."

Timeline and modernization

Summary of satellites

Block   Launch period   Success   Failure   In preparation   Planned   Currently in orbit and healthy
I       1978–1985       10        1         0                0         0
II      1989–1990       9         0         0                0         0
IIA     1990–1997       19        0         0                0         9
IIR     1997–2004       12        1         0                0         12
IIR-M   2005–2009       8         0         0                0         7
IIF     From 2010       5         0         7                0         5
IIIA    From 2014       0         0         0                12        0
IIIB    —               0         0         0                8         0
IIIC    —               0         0         0                16        0
Total                   62        2         7                36        32

PRN 01 from Block IIR-M is unhealthy.
PRN 25 from Block IIA is unhealthy.
PRN 32 from Block IIA is unhealthy.
PRN 27 from Block IIA is unhealthy.

In 1972, the USAF Central Inertial Guidance Test Facility (Holloman AFB) conducted developmental flight tests of two prototype GPS receivers over White Sands Missile Range, using ground-based pseudo-satellites.

In 1978, the first experimental Block-I GPS satellite was launched.

In 1983, after Soviet interceptor aircraft shot down the civilian airliner KAL 007, which had strayed into prohibited airspace because of navigational errors, killing all 269 people on board, U.S. President Ronald Reagan announced that GPS would be made available for civilian uses once it was completed, although it had been previously published [in Navigation magazine] that the CA code (Coarse Acquisition code) would be available to civilian users.

By 1985, ten more experimental Block-I satellites had been launched to validate the concept. Command and control of these satellites was moved from Onizuka AFS, California to the 2nd Satellite Control Squadron (2SCS) located at Falcon Air Force Station in Colorado Springs, Colorado.

On February 14, 1989, the first modern Block-II satellite was launched.

The Gulf War from 1990 to 1991 was the first conflict in which GPS was widely used.

In 1991, a project to create a miniature GPS receiver successfully ended, replacing the previous 50-pound military receivers with a 2.75-pound handheld receiver.

In 1992, the 2nd Space Wing, which originally managed the system, was inactivated and replaced by the 50th Space Wing.

By December 1993, GPS achieved initial operational capability (IOC), indicating that a full constellation (24 satellites) was available and providing the Standard Positioning Service (SPS).

Full Operational Capability (FOC) was declared by Air Force Space Command (AFSPC) in April 1995, signifying full availability of the military's secure Precise Positioning Service (PPS).

In 1996, recognizing the importance of GPS to civilian users as well as military users, U.S. President Bill Clinton issued a policy directive declaring GPS to be a dual-use system and establishing an Interagency GPS Executive Board to manage it as a national asset.

In 1998, United States Vice President Al Gore announced plans to upgrade GPS with two new civilian signals for enhanced user accuracy and reliability, particularly with respect to aviation safety, and in 2000 the United States Congress authorized the effort, referring to it as GPS III.


On May 2, 2000 "Selective Availability" was discontinued as a result of the 1996 executive order, allowing users to receive a non-degraded signal globally.

In 2004, the United States Government signed an agreement with the European Community establishing cooperation related to GPS and Europe's planned Galileo system.

In 2004, United States President George W. Bush updated the national policy and replaced the executive board with the National Executive Committee for Space-Based Positioning, Navigation, and Timing.

In November 2004, Qualcomm announced successful tests of assisted GPS for mobile phones.

In 2005, the first modernized GPS satellite was launched and began transmitting a second civilian signal (L2C) for enhanced user performance.

On September 14, 2007, the aging mainframe-based Ground Segment Control System was transferred to the new Architecture Evolution Plan.

On May 19, 2009, the United States Government Accountability Office issued a report warning that some GPS satellites could fail as soon as 2010.

On May 21, 2009, the Air Force Space Command allayed fears of GPS failure saying "There's only a small risk we will not continue to exceed our performance standard."

On January 11, 2010, an update of ground control systems caused a software incompatibility with 8000 to 10000 military receivers manufactured by a division of Trimble Navigation Limited of Sunnyvale, Calif.

On February 25, 2010, the U.S. Air Force awarded the contract to develop the GPS Next Generation Operational Control System (OCX) to improve accuracy and availability of GPS navigation signals, and serve as a critical part of GPS modernization.

Awards

On February 10, 1993, the National Aeronautic Association selected the GPS Team as winners of the 1992 Robert J. Collier Trophy, the nation's most prestigious aviation award. This team combines researchers from the Naval Research Laboratory, the USAF, the Aerospace Corporation, Rockwell International Corporation, and IBM Federal Systems Company. The citation honors them "for the most significant development for safe and efficient navigation and surveillance of air and spacecraft since the introduction of radio navigation 50 years ago."

Two GPS developers received the National Academy of Engineering Charles Stark Draper Prize for 2003:

Ivan Getting, emeritus president of The Aerospace Corporation and an engineer at the Massachusetts Institute of Technology, established the basis for GPS, improving on the World War II land-based radio system called LORAN (Long-range Radio Aid to Navigation).

Bradford Parkinson, professor of aeronautics and astronautics at Stanford University, conceived the present satellite-based system in the early 1960s and developed it in conjunction with the U.S. Air Force. Parkinson served twenty-one years in the Air Force, from 1957 to 1978, and retired with the rank of colonel.


GPS developer Roger L. Easton received the National Medal of Technology on February 13, 2006.

In 1998, GPS technology was inducted into the Space Foundation Space Technology Hall of Fame.

Francis X. Kane (Col. USAF, ret.) was inducted into the U.S. Air Force Space and Missile Pioneers Hall of Fame at Lackland A.F.B., San Antonio, Texas, March 2, 2010 for his role in space technology development and the engineering design concept of GPS conducted as part of Project 621B.

On October 4, 2011, the International Astronautical Federation (IAF) awarded the Global Positioning System (GPS) its 60th Anniversary Award, nominated by IAF member, the American Institute for Aeronautics and Astronautics (AIAA). The IAF Honors and Awards Committee recognized the uniqueness of the GPS program and the exemplary role it has played in building international collaboration for the benefit of humanity.

Basic Concept of GPS

A GPS receiver calculates its position by precisely timing the signals sent by GPS satellites high above the Earth. Each satellite continually transmits messages that include:

the time the message was transmitted, and the satellite position at the time of message transmission.

The receiver uses the messages it receives to determine the transit time of each message and computes the distance to each satellite using the speed of light. Each of these distances and satellites' locations defines a sphere. The receiver is on the surface of each of these spheres when the distances and the satellites' locations are correct. These distances and satellites' locations are used to compute the location of the receiver using the navigation equations. This location is then displayed, perhaps with a moving map display or latitude and longitude; elevation or altitude information may be included, based on height above the geoid (e.g. EGM96).

Basic GPS measurements yield only a position, not speed or direction. However, most GPS units can automatically derive velocity and direction of movement from two or more position measurements. The disadvantage of this principle is that changes in speed or direction can only be computed with a delay, and that the derived direction becomes inaccurate when the distance travelled between two position measurements drops below or near the random error of position measurement. GPS units can instead use measurements of the Doppler shift of the signals received to compute velocity accurately. More advanced navigation systems use additional sensors such as a compass or an inertial navigation system to complement GPS.
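
Deriving speed and course from two successive fixes is straightforward; the Python sketch below uses the haversine great-circle distance (the coordinates and time interval are invented).

from math import radians, degrees, sin, cos, asin, atan2, sqrt

R = 6_371_000.0   # mean Earth radius in metres

def speed_and_course(lat1, lon1, lat2, lon2, dt_seconds):
    """Speed (m/s) and course (degrees from north) from two GPS fixes,
    using the haversine great-circle distance."""
    p1, p2 = radians(lat1), radians(lat2)
    dphi, dlmb = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(p1) * cos(p2) * sin(dlmb / 2) ** 2
    dist = 2 * R * asin(sqrt(a))
    course = degrees(atan2(sin(dlmb) * cos(p2),
                           cos(p1) * sin(p2) - sin(p1) * cos(p2) * cos(dlmb)))
    return dist / dt_seconds, course % 360

# Two made-up fixes one second apart, heading roughly north-east:
print(speed_and_course(52.00000, 13.00000, 52.00020, 13.00030, 1.0))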

In typical GPS operation, four or more satellites must be visible to obtain an accurate result. Four sphere surfaces typically do not intersect. Because of this, it can be said with confidence that when the navigation equations are solved to find an intersection, this solution gives the position of the receiver along with the difference between the time kept by the receiver's on-board clock and the true time-of-day, thereby eliminating the need for a very large, expensive, and power hungry clock. The very accurately computed time is used only for display or not at all in many GPS applications, which use only the location. A number of applications for GPS do make use of this cheap and highly accurate timing. These include time transfer, traffic signal timing, and synchronization of cell phone base stations.
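
A numerical sketch of solving the navigation equations follows, in Python with NumPy. It iterates Newton's method on the pseudorange residuals to recover the three position coordinates and the receiver clock bias from four satellites; the satellite positions and the true receiver state are entirely made up for the demonstration.

import numpy as np

C = 299_792_458.0   # speed of light, m/s

# Made-up satellite positions (ECEF, metres) and a made-up receiver state.
sats = np.array([[15_600e3,  7_540e3, 20_140e3],
                 [18_760e3,  2_750e3, 18_610e3],
                 [17_610e3, 14_630e3, 13_480e3],
                 [19_170e3,  6_100e3, 18_390e3]])
x_true = np.array([1_111e3, 2_222e3, 3_333e3])
bias_true = 3.29e-4                              # receiver clock error, seconds
rho = np.linalg.norm(sats - x_true, axis=1) + C * bias_true   # pseudoranges

# Newton iteration on f_i = |x - s_i| + c*b - rho_i for the unknowns (x, y, z, b).
state = np.zeros(4)                              # start at Earth's centre
for _ in range(10):
    r = np.linalg.norm(sats - state[:3], axis=1)
    f = r + C * state[3] - rho
    J = np.hstack([(state[:3] - sats) / r[:, None], np.full((4, 1), C)])
    state -= np.linalg.solve(J, f)

print(state[:3])          # recovered position (metres)
print(state[3] * 1e3)     # recovered clock bias (milliseconds)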

Although four satellites are required for normal operation, fewer apply in special cases. If one variable is already known, a receiver can determine its position using only three satellites. For example, a ship or aircraft may have known elevation. Some GPS receivers may use additional clues or assumptions such as reusing the last known altitude, dead reckoning, inertial navigation, or including information from the vehicle computer, to give a (possibly degraded) position when fewer than four satellites are visible.

Structure

The current GPS consists of three major segments. These are the space segment (SS), a control segment (CS), and a user segment (US). The U.S. Air Force develops, maintains, and operates the space and control segments. GPS satellites broadcast signals from space, and each GPS receiver uses these signals to calculate its three-dimensional location (latitude, longitude, and altitude) and the current time.

The space segment is composed of 24 to 32 satellites in medium Earth orbit and also includes the payload adapters to the boosters required to launch them into orbit. The control segment is composed of a master control station, an alternate master control station, and a host of dedicated and shared ground antennas and monitor stations. The user segment is composed of hundreds of thousands of U.S. and allied military users of the secure GPS Precise Positioning Service, and tens of millions of civil, commercial, and scientific users of the Standard Positioning Service (see GPS navigation devices).

Space segment

The space segment (SS) is composed of the orbiting GPS satellites, or Space Vehicles (SV) in GPS parlance. The GPS design originally called for 24 SVs, eight each in three approximately circular orbits, but this was modified to six orbital planes with four satellites each. The six orbit planes have approximately 55° inclination (tilt relative to Earth's equator) and are separated by 60° right ascension of the ascending node (angle along the equator from a reference point to the orbit's intersection). The orbital period is one-half a sidereal day, i.e., 11 hours and 58 minutes, so that the satellites pass over the same locations or almost the same locations every day. The orbits are arranged so that at least six satellites are always within line of sight from almost everywhere on Earth's surface. A consequence of this objective is that the four satellites are not evenly spaced (90 degrees) apart within each orbit. In general terms, the angular differences between satellites in each orbit are 30, 105, 120, and 105 degrees, which sum to 360 degrees.

Orbiting at an altitude of approximately 20,200 km (12,600 mi), corresponding to an orbital radius of approximately 26,600 km (16,500 mi), each SV makes two complete orbits each sidereal day, repeating the same ground track each day. This was very helpful during development because, even with only four satellites, correct alignment meant all four were visible from one spot for a few hours each day. For military operations, the ground track repeat can be used to ensure good coverage in combat zones.
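The quoted orbital radius follows from the half-sidereal-day period via Kepler's third law. A quick check in Python, using the standard gravitational parameter (variable names our own):

    # Sketch: checking the orbital radius from the period with
    # Kepler's third law, a^3 = mu * T^2 / (4 * pi^2).
    import math

    MU_EARTH = 3.986004418e14          # Earth's gravitational parameter, m^3/s^2
    T = (23 * 3600 + 56 * 60 + 4) / 2  # half a sidereal day in seconds (~11 h 58 min)

    semi_major_axis = (MU_EARTH * T**2 / (4 * math.pi**2)) ** (1 / 3)
    print(f"{semi_major_axis / 1000:.0f} km")  # ~26,560 km, matching the ~26,600 km above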

As of December 2012, there are 32 satellites in the GPS constellation. The additional satellites improve the precision of GPS receiver calculations by providing redundant measurements. With the increased number of satellites, the constellation was changed to a non-uniform arrangement. Such an arrangement was shown to improve the reliability and availability of the system, relative to a uniform arrangement, when multiple satellites fail. About nine satellites are visible from any point on the ground at any one time, ensuring considerable redundancy over the minimum of four satellites needed for a position.

Control segment

The control segment is composed of:

1. a master control station (MCS),
2. an alternate master control station,
3. four dedicated ground antennas, and
4. six dedicated monitor stations.

The MCS can also access U.S. Air Force Satellite Control Network (AFSCN) ground antennas (for additional command and control capability) and NGA (National Geospatial-Intelligence Agency) monitor stations. The flight paths of the satellites are tracked by dedicated U.S. Air Force monitoring stations in Hawaii, Kwajalein Atoll, Ascension Island, Diego Garcia, Colorado Springs, Colorado, and Cape Canaveral, along with shared NGA monitor stations operated in England, Argentina, Ecuador, Bahrain, Australia, and Washington DC. The tracking information is sent to the Air Force Space Command MCS at Schriever Air Force Base, 25 km (16 mi) ESE of Colorado Springs, which is operated by the 2nd Space Operations Squadron (2 SOPS) of the U.S. Air Force. 2 SOPS then contacts each GPS satellite regularly with a navigational update using dedicated or shared (AFSCN) ground antennas (GPS dedicated ground antennas are located at Kwajalein, Ascension Island, Diego Garcia, and Cape Canaveral). These updates synchronize the atomic clocks on board the satellites to within a few nanoseconds of each other, and adjust the ephemeris of each satellite's internal orbital model. The updates are created by a Kalman filter that uses inputs from the ground monitoring stations, space weather information, and various other sources.

Satellite maneuvers are not precise by GPS standards, so to change a satellite's orbit, the satellite must first be marked unhealthy so that receivers will not use it in their calculations. After the maneuver is carried out, the resulting orbit is tracked from the ground, the new ephemeris is uploaded, and the satellite is marked healthy again.

The Operation Control Segment (OCS) currently serves as the control segment of record. It provides the operational capability that supports global GPS users and keeps the GPS system operational and performing within specification.

OCS successfully replaced the legacy 1970s-era mainframe computer at Schriever Air Force Base in September 2007. After installation, the system helped enable upgrades and provided a foundation for a new security architecture that supported the U.S. armed forces. OCS will continue to be the ground control system of record until the new segment, the Next Generation GPS Operation Control System (OCX), is fully developed and functional.

The new capabilities provided by OCX will be the cornerstone for revolutionizing GPS's mission capabilities, enabling Air Force Space Command to greatly enhance GPS operational services to U.S. combat forces, civil partners, and myriad domestic and international users.

The GPS OCX program is also intended to reduce cost, schedule, and technical risk. It is designed to provide 50% sustainment-cost savings through an efficient software architecture and performance-based logistics. In addition, GPS OCX is expected to cost millions less than upgrading OCS while providing four times the capability.

The GPS OCX program represents a critical part of GPS modernization and provides significant information assurance improvements over the current GPS OCS program.

OCX will have the ability to control and manage GPS legacy satellites as well as the next generation of GPS III satellites, while enabling the full array of military signals.

OCX is built on a flexible architecture that can rapidly adapt to the changing needs of today's and future GPS users, allowing immediate access to GPS data and constellation status through secure, accurate, and reliable information.

It empowers the warfighter with more secure, actionable, and predictive information to enhance situational awareness.

It enables the new modernized signals (L1C, L2C, and L5) and has M-code capability, which the legacy system cannot provide.

It provides significant information assurance improvements over the current program, including detecting and preventing cyber attacks while isolating, containing, and operating through such attacks.

It supports higher-volume, near real-time command and control capabilities.

On September 14, 2011, the U.S. Air Force announced the completion of GPS OCX Preliminary Design Review and confirmed that the OCX program is ready for the next phase of development.

The GPS OCX program has achieved major milestones and is on track to support the GPS IIIA launch in May 2014.

User segment

The user segment is composed of hundreds of thousands of U.S. and allied military users of the secure GPS Precise Positioning Service, and tens of millions of civil, commercial and scientific users of the Standard Positioning Service. In general, GPS receivers are composed of an antenna, tuned to the frequencies transmitted by the satellites, receiver-processors, and a highly stable clock (often a crystal oscillator). They may also include a display for providing location and speed information to the user. A receiver is often described by its number of channels: this signifies how many satellites it can monitor simultaneously. Originally limited to four or five, this has progressively increased over the years so that, as of 2007, receivers typically have between 12 and 20 channels.


GPS receivers may include an input for differential corrections in the RTCM SC-104 format, typically via an RS-232 port at 4,800 bit/s. Data is actually sent at a much lower rate, which limits the accuracy of the signal sent using RTCM. Receivers with internal DGPS receivers can outperform those using external RTCM data. As of 2006, even low-cost units commonly include Wide Area Augmentation System (WAAS) receivers.

Many GPS receivers can relay position data to a PC or other device using the NMEA 0183 protocol. Although this protocol is officially defined by the National Marine Electronics Association (NMEA), references to this protocol have been compiled from public records, allowing open source tools like gpsd to read the protocol without violating intellectual property laws. Proprietary protocols exist as well, such as the SiRF and MTK protocols. Receivers can interface with other devices using methods including a serial connection, USB, or Bluetooth.
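As a rough illustration of what such a receiver emits, the sketch below extracts a position from a single NMEA 0183 GGA sentence. It is deliberately minimal and skips checksum verification; a real application would rely on gpsd or a dedicated parsing library:

    # Sketch: reading latitude/longitude from an NMEA 0183 GGA sentence.
    def parse_gga(sentence):
        """Return (latitude, longitude) in decimal degrees, or None if no fix."""
        body, _, _checksum = sentence.strip().lstrip("$").partition("*")
        fields = body.split(",")
        if not fields[0].endswith("GGA") or fields[6] == "0":
            return None  # wrong sentence type, or fix quality 0 (no fix)
        lat = float(fields[2][:2]) + float(fields[2][2:]) / 60   # ddmm.mmmm
        lon = float(fields[4][:3]) + float(fields[4][3:]) / 60   # dddmm.mmmm
        if fields[3] == "S":
            lat = -lat
        if fields[5] == "W":
            lon = -lon
        return lat, lon

    print(parse_gga("$GPGGA,092750.000,5321.6802,N,00630.3372,W,1,8,1.03,61.7,M,55.2,M,,*76"))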

Applications

While originally a military project, GPS is considered a dual-use technology, meaning it has significant military and civilian applications.

GPS has become a widely deployed and useful tool for commerce, scientific uses, tracking, and surveillance. GPS's accurate time facilitates everyday activities such as banking, mobile phone operations, and even the control of power grids by allowing well-synchronized hand-off switching.

Civilian

Many civilian applications use one or more of GPS's three basic components: absolute location, relative movement, and time transfer.

Astronomy : both positional and clock synchronization data are used in astrometry and celestial mechanics calculations. GPS is used both in amateur astronomy with small telescopes and by professional observatories, for example when searching for extrasolar planets.

Automated vehicle : applying location and routes for cars and trucks to function without a human driver.

Cartography : both civilian and military cartographers use GPS extensively.

Cellular telephony : clock synchronization enables time transfer, which is critical for synchronizing its spreading codes with other base stations to facilitate inter-cell handoff and support hybrid GPS/cellular position detection for mobile emergency calls and other applications. The first handsets with integrated GPS launched in the late 1990s. The U.S. Federal Communications Commission (FCC) mandated the feature in either the handset or in the towers (for use in triangulation) in 2002 so emergency services could locate 911 callers. Third-party software developers later gained access to GPS APIs from Nextel upon launch, followed by Sprint in 2006, and Verizon soon thereafter.

Clock synchronization : the accuracy of GPS time signals (±10 ns) is second only to the atomic clocks upon which they are based.


Disaster relief / emergency services : these depend upon GPS for location and timing capabilities.

Meteorology (upper air) : measuring and calculating atmospheric pressure, wind speed, and direction up to 27 km above the Earth's surface.

Fleet Tracking : the use of GPS technology to identify, locate and maintain contact reports with one or more fleet vehicles in real-time.

Geofencing : vehicle tracking systems, person tracking systems, and pet tracking systems use GPS to locate a vehicle, person, or pet. These devices are attached to the vehicle or person, or to the pet's collar. The application provides continuous tracking and mobile or Internet updates should the target leave a designated area.

Geotagging : applying location coordinates to digital objects such as photographs (in Exif data) and other documents for purposes such as creating map overlays with devices like the Nikon GP-1.

GPS Aircraft Tracking

GPS for Mining : the use of RTK GPS has significantly improved several mining operations, such as drilling, shoveling, vehicle tracking, and surveying. RTK GPS provides centimeter-level positioning accuracy.

GPS tours : location determines what content to display; for instance, information about an approaching point of interest.

Navigation : navigators value digitally precise velocity and orientation measurements.

Phasor measurements : GPS enables highly accurate time stamping of power system measurements, making it possible to compute phasors.

Recreation : for example, geocaching, geodashing, GPS drawing, and waymarking.

Robotics : self-navigating, autonomous robots use GPS sensors to calculate latitude, longitude, time, speed, and heading.

Surveying : surveyors use absolute locations to make maps and determine property boundaries.

Tectonics : GPS enables direct fault motion measurement in earthquakes.

Telematics : GPS technology integrated with computers and mobile communications technology in automotive navigation systems.

Restrictions on civilian use

The U.S. Government controls the export of some civilian receivers. All GPS receivers capable of functioning above 18 kilometers (11 mi) altitude and 515 meters per second (1,001 knots), or designed or modified for use with unmanned air vehicles such as ballistic or cruise missile systems, are classified as munitions (weapons), for which State Department export licenses are required.

This rule applies even to otherwise purely civilian units that only receive the L1 frequency and the C/A (Coarse/Acquisition) code.

Disabling operation above these limits exempts the receiver from classification as a munition. Vendor interpretations differ. The rule refers to operation at both the target altitude and speed, but some receivers stop operating even when stationary. This has caused problems with some amateur radio balloon launches that regularly reach 30 kilometers (19 mi).
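The two readings of the rule can be expressed as a toy decision function. The sketch below is purely illustrative pseudologic, not any vendor's actual firmware:

    # Sketch of the two vendor interpretations of the export limits above.
    ALT_LIMIT_M = 18_000    # 18 km altitude limit
    SPEED_LIMIT_MS = 515    # ~1,001 knots speed limit

    def fix_allowed_and(alt_m, speed_ms):
        # Literal reading: disable only when BOTH limits are exceeded.
        return not (alt_m > ALT_LIMIT_M and speed_ms > SPEED_LIMIT_MS)

    def fix_allowed_or(alt_m, speed_ms):
        # Conservative reading: disable when EITHER limit is exceeded.
        # This is the behaviour that silences slow, high-altitude balloons.
        return not (alt_m > ALT_LIMIT_M or speed_ms > SPEED_LIMIT_MS)

    print(fix_allowed_and(30_000, 5))  # True: balloon keeps its fix
    print(fix_allowed_or(30_000, 5))   # False: receiver goes silent at 30 km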

These limits only apply to units exported from (or which have components exported from) the USA – there is a growing trade in various components, including GPS units, supplied by other countries, which are expressly sold as ITAR-free.

Military

As of 2009, military applications of GPS include:

Navigation: GPS allows soldiers to find objectives, even in the dark or in unfamiliar territory, and to coordinate troop and supply movement. In the United States armed forces, commanders use the Commanders Digital Assistant and lower ranks use the Soldier Digital Assistant.

Target tracking: Various military weapons systems use GPS to track potential ground and air targets before flagging them as hostile. These weapon systems pass target coordinates to precision-guided munitions to allow them to engage targets accurately. Military aircraft, particularly in air-to-ground roles, use GPS to find targets (for example, gun camera video from AH-1 Cobras in Iraq shows GPS coordinates that can be viewed with specialized software).

Missile and projectile guidance: GPS allows accurate targeting of various military weapons including ICBMs, cruise missiles, precision-guided munitions, and artillery projectiles. Embedded GPS receivers able to withstand accelerations of 12,000 g (about 118 km/s²) have been developed for use in 155-millimeter (6.1 in) howitzers.

Search and Rescue: Downed pilots can be located faster if their position is known.

Reconnaissance: Patrol movement can be managed more closely.

Nuclear detonation detection: GPS satellites carry a set of nuclear detonation detectors consisting of an optical sensor (Y-sensor), an X-ray sensor, a dosimeter, and an electromagnetic pulse (EMP) sensor (W-sensor), which together form a major portion of the United States Nuclear Detonation Detection System. General William Shelton has stated that this feature may be dropped from future satellites in order to save money.

Error Sources and Analysis

GPS error analysis examines the sources of errors in GPS results and the expected size of those errors. GPS makes corrections for receiver clock errors and other effects but there are still residual errors which are not corrected. Sources of error include signal arrival time measurements, numerical calculations, atmospheric effects, ephemeris and clock data, multipath signals, and natural and artificial interference. The magnitude of the residual errors resulting from these sources is dependent on geometric dilution of precision.
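Geometric dilution of precision is itself a simple function of satellite geometry. The following sketch computes GDOP from receiver-to-satellite unit vectors using the standard textbook formula; the input vectors are illustrative:

    # Sketch: geometric dilution of precision (GDOP) from satellite geometry.
    import numpy as np

    def gdop(unit_vectors):
        """unit_vectors: (n, 3) receiver-to-satellite unit line-of-sight vectors."""
        # Geometry matrix: line-of-sight components plus the clock column.
        A = np.hstack([unit_vectors, np.ones((len(unit_vectors), 1))])
        Q = np.linalg.inv(A.T @ A)   # covariance shape factor
        return np.sqrt(np.trace(Q))  # residual errors scale with this factor

    # Well-spread satellites give a small GDOP; clustered ones a large GDOP.
    spread = np.array([[1, 0, 0], [0, 1, 0], [-1, 0, 0], [0, 0, 1]])
    print(gdop(spread))  # 2.0 for this geometry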

Artificial errors may result from jamming devices and threaten ships and aircraft.

Accuracy Enhancement and Surveying


Augmentation

Integrating external information into the calculation process can materially improve accuracy. Such augmentation systems are generally named or described according to how the information arrives. Some systems transmit additional error information (such as clock drift, ephemeris corrections, or ionospheric delay), others characterize prior errors, while a third group provides additional navigational or vehicle information.

Examples of augmentation systems include the Wide Area Augmentation System (WAAS), European Geostationary Navigation Overlay Service (EGNOS), Differential GPS (DGPS), Inertial Navigation Systems (INS) and Assisted GPS. The standard accuracy of about 15 metres (49 feet) can be augmented to 3–5 metres (9.8–16.4 ft) with DGPS, and to about 3 metres (9.8 feet) with WAAS.
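The differential idea behind DGPS can be sketched in a few lines: a reference station at a surveyed location measures each satellite's pseudorange error and broadcasts it, and nearby rovers subtract the same error. The values and names below are illustrative, not from any DGPS implementation:

    # Sketch: the differential GPS correction principle.
    def pseudorange_correction(measured_range, true_geometric_range):
        # Computed at the base station, whose position is known precisely.
        return measured_range - true_geometric_range

    def corrected_range(rover_measured_range, correction):
        # Applied at the rover. Shared errors (ionosphere, ephemeris,
        # satellite clock) largely cancel, because both receivers see the
        # same satellite through nearly the same atmosphere.
        return rover_measured_range - correction

    corr = pseudorange_correction(20_000_123.4, 20_000_118.2)  # +5.2 m error
    print(corrected_range(22_345_681.9, corr))                 # rover range, corrected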

Precise monitoring

Accuracy can be improved through precise monitoring and measurement of existing GPS signals in additional or alternate ways.

The largest remaining error is usually the unpredictable delay through the ionosphere. The spacecraft broadcast ionospheric model parameters, but some errors remain. This is one reason GPS spacecraft transmit on at least two frequencies, L1 and L2. Ionospheric delay is a well-defined function of frequency and the total electron content (TEC) along the path, so measuring the arrival time difference between the frequencies determines TEC and thus the precise ionospheric delay at each frequency.
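Because the delay scales (to first order) as the inverse square of the frequency, the two pseudoranges can be combined to cancel it. A minimal sketch of the standard ionosphere-free combination, using the published L1 and L2 center frequencies (variable names our own):

    # Sketch: dual-frequency ionosphere-free pseudorange combination.
    # Ionospheric delay scales as 1/f^2, so a weighted difference of the
    # L1 and L2 measurements removes it to first order.
    F_L1 = 1575.42e6  # Hz
    F_L2 = 1227.60e6  # Hz

    def iono_free(p1, p2):
        """Combine L1 and L2 pseudoranges (metres) to cancel the delay."""
        g = (F_L1 / F_L2) ** 2
        return (g * p1 - p2) / (g - 1)

    def iono_delay_l1(p1, p2):
        """First-order ionospheric delay on L1 (metres) from the same pair."""
        return (p2 - p1) / ((F_L1 / F_L2) ** 2 - 1)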

Military receivers can decode the P(Y) code transmitted on both L1 and L2. Without decryption keys, it is still possible to use a codeless technique to compare the P(Y) codes on L1 and L2 and gain much of the same error information. However, this technique is slow, so it is currently available only on specialized surveying equipment. In the future, additional civilian codes are expected to be transmitted on the L2 and L5 frequencies. Then all users will be able to perform dual-frequency measurements and directly compute ionospheric delay errors.

A second form of precise monitoring is called Carrier-Phase Enhancement (CPGPS). This corrects the error that arises because the pulse transition of the PRN is not instantaneous, and thus the correlation (satellite-receiver sequence matching) operation is imperfect. CPGPS uses the L1 carrier wave, which has a period of 1/(1575.42 MHz) ≈ 0.635 ns, about one-thousandth of the C/A Gold code bit period of 1/(1.023 MHz) ≈ 977.5 ns, to act as an additional clock signal and resolve the uncertainty. The phase difference error in normal GPS amounts to 2–3 metres (6.6–9.8 ft) of ambiguity. CPGPS working to within 1% of perfect transition reduces this error to 3 centimeters (1.2 in) of ambiguity. By eliminating this error source, CPGPS coupled with DGPS normally realizes between 20–30 centimetres (7.9–11.8 in) of absolute accuracy.
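The scale of the two "rulers" involved follows directly from the published signal rates; a quick check (variable names our own):

    # Sketch: the relative scale of the code and carrier clocks above.
    C = 299_792_458.0  # speed of light, m/s

    chip_period = 1 / 1.023e6        # C/A Gold code bit period, ~977.5 ns
    carrier_period = 1 / 1575.42e6   # L1 carrier period, ~0.635 ns

    print(chip_period / carrier_period)  # ~1540: the carrier ticks ~1,540x faster
    print(C * 0.01 * chip_period)        # 1% of a code chip ~= 2.9 m, the 2-3 m figure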


Relative Kinematic Positioning (RKP) is a third alternative for a precise GPS-based positioning system. In this approach, the range signal can be resolved to a precision of less than 10 centimeters (3.9 in). This is done by resolving the number of whole carrier cycles between the satellite and the receiver, using a combination of differential GPS (DGPS) correction data, transmission of GPS signal phase information, and ambiguity resolution techniques via statistical tests, possibly with processing in real time (real-time kinematic positioning, RTK).

Timekeeping 

Leap seconds

While most clocks derive their time from Coordinated Universal Time (UTC), the atomic clocks on the satellites are set to GPS time (GPST). The difference is that GPS time is not corrected to match the rotation of the Earth, so it does not contain leap seconds or other corrections that are periodically added to UTC. GPS time was set to match UTC in 1980, but has since diverged. The lack of corrections means that GPS time remains at a constant offset from International Atomic Time (TAI): TAI − GPS = 19 seconds. Periodic corrections are performed to the on-board clocks to keep them synchronized with ground clocks.

The GPS navigation message includes the difference between GPS time and UTC. As of July 2012, GPS time is 16 seconds ahead of UTC because of the leap second added to UTC June 30, 2012. Receivers subtract this offset from GPS time to calculate UTC and specific timezone values. New GPS units may not show the correct UTC time until after receiving the UTC offset message. The GPS-UTC offset field can accommodate 255 leap seconds (eight bits).
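Applying the broadcast offset is a one-line correction. A minimal sketch, hard-coding the 16-second value quoted above for July 2012 (a real receiver reads the current value from the navigation message):

    # Sketch: converting GPS time to UTC using the broadcast leap-second offset.
    from datetime import datetime, timedelta

    GPS_EPOCH = datetime(1980, 1, 6)   # GPS week zero, 00:00:00 UTC
    GPS_UTC_OFFSET = 16                # leap seconds as of July 2012

    def gps_to_utc(seconds_since_gps_epoch):
        return GPS_EPOCH + timedelta(seconds=seconds_since_gps_epoch - GPS_UTC_OFFSET)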

Accuracy

GPS time is theoretically accurate to about 14 nanoseconds. However, most receivers lose accuracy in the interpretation of the signals and are only accurate to 100 nanoseconds.

Format

As opposed to the year, month, and day format of the Gregorian calendar, the GPS date is expressed as a week number and a seconds-into-week number. The week number is transmitted as a ten-bit field in the C/A and P(Y) navigation messages, and so it becomes zero again every 1,024 weeks (19.6 years). GPS week zero started at 00:00:00 UTC (00:00:19 TAI) on January 6, 1980, and the week number became zero again for the first time at 23:59:47 UTC on August 21, 1999 (00:00:19 TAI on August 22, 1999). To determine the current Gregorian date, a GPS receiver must be provided with the approximate date (to within 3,584 days) to correctly translate the GPS date signal. To address this concern the modernized GPS navigation message uses a 13-bit field that only repeats every 8,192 weeks (157 years), thus lasting until the year 2137 (157 years after GPS week zero).
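Resolving the 10-bit rollover from an approximate date can be sketched as follows; the helper name is our own, not from any receiver software:

    # Sketch: resolving the 10-bit GPS week-number rollover using an
    # approximate date, as described above.
    from datetime import datetime

    GPS_EPOCH = datetime(1980, 1, 6)

    def resolve_week(week_10bit, approx_date):
        """Return the full week number closest to approx_date."""
        approx_week = (approx_date - GPS_EPOCH).days // 7
        # Choose the 1,024-week cycle nearest the approximate date.
        rollovers = round((approx_week - week_10bit) / 1024)
        return week_10bit + 1024 * rollovers

    # Broadcast week 771 interpreted with a mid-2014 approximate date:
    print(resolve_week(771, datetime(2014, 6, 1)))  # 1795: second 1,024-week cycle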

Regulatory Spectrum Issues Concerning GPS Receivers

In the United States, GPS receivers are regulated under the Federal Communications Commission's (FCC) Part 15 rules. As indicated in the manuals of GPS-enabled devices sold in the United States, as a Part 15 device, it "must accept any interference received, including interference that may cause undesired operation." With respect to GPS devices in particular, the FCC states that GPS receiver manufacturers "must use receivers that reasonably discriminate against reception of signals outside their allocated spectrum." For the last 30 years, GPS receivers have operated next to the Mobile Satellite Service band, and have discriminated against reception of mobile satellite services, such as Inmarsat, without any issue.

The spectrum allocated for GPS L1 use by the FCC is 1559 to 1610 MHz, while the spectrum allocated for satellite-to-ground use owned by Light Squared is the Mobile Satellite Service band. Since 1996, the FCC has authorized licensed use of the spectrum neighboring the GPS band, 1525 to 1559 MHz, to the Virginia company Light Squared. On March 1, 2001, the FCC received an application from Light Squared's predecessor, Motient Services, to use their allocated frequencies for an integrated satellite-terrestrial service. In 2002, the U.S. GPS Industry Council came to an out-of-band-emissions (OOBE) agreement with Light Squared to prevent transmissions from Light Squared's ground-based stations from emitting into the neighboring GPS band of 1559 to 1610 MHz. In 2004, the FCC adopted the OOBE agreement in its authorization for Light Squared to deploy a ground-based network ancillary to their satellite system, known as Ancillary Tower Components (ATCs): "We will authorize MSS ATC subject to conditions that ensure that the added terrestrial component remains ancillary to the principal MSS offering. We do not intend, nor will we permit, the terrestrial component to become a stand-alone service." This authorization was reviewed and approved by the U.S. Interdepartment Radio Advisory Committee, which includes the U.S. Department of Agriculture, U.S. Air Force, U.S. Army, U.S. Coast Guard, Federal Aviation Administration, National Aeronautics and Space Administration, Interior, and U.S. Department of Transportation.


In January 2011, the FCC conditionally authorized Light Squared's wholesale customers, such as Best Buy, Sharp, and C Spire, to purchase an integrated satellite-ground-based service from Light Squared and re-sell that integrated service on devices equipped to use only the ground-based signal on Light Squared's allocated frequencies of 1525 to 1559 MHz. In December 2010, GPS receiver manufacturers expressed concerns to the FCC that Light Squared's signal would interfere with GPS receiver devices, although the FCC's policy considerations leading up to the January 2011 order did not pertain to any proposed changes to the maximum number of ground-based Light Squared stations or the maximum power at which these stations could operate. The January 2011 order makes final authorization contingent upon studies of GPS interference issues carried out by a Light Squared-led working group with GPS industry and Federal agency participation.

GPS receiver manufacturers design GPS receivers to use spectrum beyond the GPS-allocated band. In some cases, GPS receivers are designed to use up to 400 MHz of spectrum in either direction of the L1 frequency of 1575.42 MHz, because mobile satellite services in those regions are broadcasting from space to ground, and at power levels commensurate with mobile satellite services. However, as regulated under the FCC's Part 15 rules, GPS receivers are not warranted protection from signals outside GPS-allocated spectrum. This is why GPS operates next to the Mobile Satellite Service band, and also why the Mobile Satellite Service band operates next to GPS. The symbiotic relationship of spectrum allocation ensures that users of both bands are able to operate cooperatively and freely.

The FCC adopted rules in February 2003 that allowed Mobile Satellite Service (MSS) licensees such as Light Squared to construct a small number of ancillary ground-based towers in their licensed spectrum to "promote more efficient use of terrestrial wireless spectrum." In those 2003 rules, the FCC stated "As a preliminary matter, terrestrial [Commercial Mobile Radio Service (“CMRS”)] and MSS ATC are expected to have different prices, coverage, product acceptance and distribution; therefore, the two services appear, at best, to be imperfect substitutes for one another that would be operating in predominately different market segments... MSS ATC is unlikely to compete directly with terrestrial CMRS for the same customer base..." In 2004, the FCC clarified that the ground-based towers would be ancillary, noting that "We will authorize MSS ATC subject to conditions that ensure that the added terrestrial component remains ancillary to the principal MSS offering. We do not intend, nor will we permit, the terrestrial component to become a stand-alone service." In July 2010, the FCC stated that it expected Light Squared to use its authority to offer an integrated satellite-terrestrial service to "provide mobile broadband services similar to those provided by terrestrial mobile providers and enhance competition in the mobile broadband sector." However, GPS receiver manufacturers have argued that Light Squared's licensed spectrum of 1525 to 1559 MHz was never envisioned as being used for high-speed wireless broadband, based on the 2003 and 2004 FCC ATC rulings making clear that the Ancillary Tower Component (ATC) would be, in fact, ancillary to the primary satellite component. To build public support for efforts to continue the 2004 FCC authorization of Light Squared's ancillary terrestrial component rather than a simple ground-based LTE service in the Mobile Satellite Service band, GPS receiver manufacturer Trimble Navigation Ltd. formed the "Coalition To Save Our GPS."

The FCC and Light Squared have each made public commitments to solve the GPS interference issue before the network is allowed to operate. However, according to Chris Dancy of the Aircraft Owners and Pilots Association, airline pilots with the type of systems that would be affected "may go off course and not even realize it." The problems could also affect the Federal Aviation Administration upgrade to the air traffic control system, United States Defense Department guidance, and local emergency services including 911.

On February 14, 2012, the U.S. Federal Communications Commission (FCC) moved to bar Light Squared's planned national broadband network after being informed by the National Telecommunications and Information Administration (NTIA), the federal agency that coordinates spectrum uses for the military and other federal government entities, that "there is no practical way to mitigate potential interference at this time". Light Squared is challenging the FCC's action.
