
TELEVISION AND VIDEO ENGINEERING

UNIT-1 FUNDAMENTALS OF TELEVISION

SYLLABUS

Television System and scanning Principles: Sound and picture transmission

scanning process

video signals

characteristics of human eye

brightness perception and Photometric qualities

Aspect ratio and Rectangular scanning

persistence of vision and flicker

vertical resolution

Kell factor

Horizontal Resolution and video bandwidth

Interlaced scanning

Camera tubes

camera lenses

auto focus systems

camera pick-up devices

Image orthicon

vidicon

plumbicon

silicon diode array vidicon

CCD solid state image scanners

Comparison of Camera tubes

camera tube deflection unit

video processing of camera signals

color television signals and systems


1.1 TELEVISION SYSTEM AND SCANNING PRINCIPLES

In the engineering sense, a television system is the complete chain that converts a scene and its accompanying sound into electrical signals at the transmitter, conveys those signals over a communication channel, and reconverts them into picture and sound at the receiver. The picture signal is generated by scanning the scene element by element with a camera pick-up device, while a microphone produces the sound signal; both are transmitted, usually on a common radio-frequency channel. At the receiver, a scanning system synchronized with the transmitter reassembles the picture on the screen while the sound is reproduced through a loudspeaker.

1.2 SCANNING PROCESS

Fig: 1.1

The scanning technique is similar to reading or writing the information on a page: it starts at the top left and proceeds to finish at the bottom right.


Scanning is done line by line: horizontal scanning runs from left to right at a fast rate, and vertical scanning runs from top to bottom at a slow rate.

Each scanned line therefore has a trace (forward) period and a retrace (flyback) period. The retrace of the beam is very fast compared with the forward scan, and the beam is cut off during the horizontal and vertical flyback intervals so that the retrace is invisible.
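As an illustration, the following Python sketch generates the beam's normalized (x, y) position over time for a simplified raster: a fast horizontal sawtooth superimposed on a slow vertical one. The 525-line, 30 frames/sec figures and the 84% trace fraction are illustrative assumptions only, and vertical retrace is neglected.

```python
# Illustrative values only: a 525-line, 30 frames/sec system is assumed.
TOTAL_LINES = 525
FRAME_RATE = 30.0
LINE_TIME = 1.0 / (TOTAL_LINES * FRAME_RATE)  # duration of one line, seconds
TRACE_FRACTION = 0.84                         # assumed forward-trace share of a line

def beam_position(t):
    """Normalized (x, y) beam position at time t: fast horizontal, slow vertical."""
    line_phase = (t / LINE_TIME) % 1.0        # position within the current line
    if line_phase < TRACE_FRACTION:
        x = line_phase / TRACE_FRACTION       # left-to-right forward trace
    else:                                     # fast right-to-left retrace
        x = 1.0 - (line_phase - TRACE_FRACTION) / (1.0 - TRACE_FRACTION)
        # the beam would be blanked (cut off) during this flyback interval
    y = (t * FRAME_RATE) % 1.0                # slow top-to-bottom scan, one frame
    return x, y

print(beam_position(0.5 * LINE_TIME))         # halfway through the first line
```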

1.3 VIDEO SIGNALS

The composite video signal contains:

The camera signal corresponding to the picture or scene being transmitted.

Blanking pulses to make the horizontal and vertical retrace invisible.

Sync pulses to synchronize the transmitter and receiver scanning systems.

Color information and a sample (burst) of the color sub-carrier frequency.

Fig: 1.2
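As a rough illustration of how these components share one scan line, the sketch below builds a single simplified line of composite video as an array of amplitude levels. The timing values (63.5 μs line, 4.7 μs sync) follow common 525-line practice and the IRE levels are the conventional ones; treat both as illustrative assumptions rather than values specified in this section, and note that the color burst is only indicated by a comment.

```python
import numpy as np

# Assumed 525-line timings (microseconds) and conventional IRE levels.
LINE_US, FRONT_PORCH_US, SYNC_US, BACK_PORCH_US = 63.5, 1.5, 4.7, 4.7
SYNC_IRE, BLANK_IRE, WHITE_IRE = -40.0, 0.0, 100.0
SAMPLES_PER_US = 10

def n(us):
    """Number of samples in a span of the given length in microseconds."""
    return int(us * SAMPLES_PER_US)

def composite_line(picture_ire):
    """One simplified scan line: front porch, sync, back porch, active video."""
    active_us = LINE_US - FRONT_PORCH_US - SYNC_US - BACK_PORCH_US
    return np.concatenate([
        np.full(n(FRONT_PORCH_US), BLANK_IRE),  # blanking before sync
        np.full(n(SYNC_US), SYNC_IRE),          # horizontal sync pulse
        np.full(n(BACK_PORCH_US), BLANK_IRE),   # back porch (burst would sit here)
        np.full(n(active_us), picture_ire),     # camera signal for this line
    ])

line = composite_line(picture_ire=WHITE_IRE)    # a uniformly white line
print(len(line), line.min(), line.max())        # 635 samples, -40.0, 100.0
```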

1.4 CHARACTERISTICS OF HUMAN EYE

In formulating the requirements of the camera tube, the scanning system, and the reproducing system, the characteristics of the human eye must be considered.

The relevant characteristics are:

1. Visual acuity - the ability to resolve fine detail in the picture

2. Persistence of vision

3. Brightness and color sensation.


Fig: 1.3

Fig:1.4

1.5 BRIGHTNESS PERCEPTION AND PHOTOMETRIC QUALITIES

Brightness - the apparent luminance of a patch in an image.

Lightness - the apparent reflectance of a perceived surface; the perceived light level of a patch relative to other patches in the same image.

1.5.1 Photometric Measurements


Photometric measurements are quantitative determinations of the values of quantities characterizing optical radiation, or of such optical properties of materials as transparency and reflectivity. Photometric measurements can be made with instruments that contain optical detectors. In the simplest cases, in the visible range, the human eye itself is used as the detector in evaluating photometric quantities.

Table 1. Principal photometric quantities

| Quantity | Symbol | Defining equation | Unit name | Unit symbol |
| --- | --- | --- | --- | --- |
| Luminous flux | Φv | - | lumen | lm |
| Luminous energy | Q | Q = ∫Φv dt | lumen-second | lm·s |
| Luminous intensity (of a light source in a given direction) | I | I = dΦv/dΩ | candela | cd |
| Luminous efficacy of radiant power | K | K = Φv/Φe | lumen per watt | lm/W |
| Luminance (at a given point and in a given direction) | L | L = dI/(dA·cos θ) | candela per square meter (formerly, nit) | cd/m² |
| Illuminance (at a point of a surface) | E | E = dΦv/dA | lux | lx |
| Luminous exitance | M | M = dΦv/dA | lumen per square meter | lm/m² |
| Exposure (quantity of illumination) | H | H = dQ/dA = ∫E dt | lux-second | lx·s |
| Luminous pulse | θ | θ = ∫I dt | candela-second | cd·s |
| Spectral concentration of a photometric quantity | Xλ | Xλ = dX/dλ | - | - |
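To make the defining relations in Table 1 concrete, here is a small sketch that applies a few of them to a hypothetical source; the 1000 lm flux, 4 m² area, 60 W power, and 2 s duration are invented purely for illustration.

```python
import math

phi_v = 1000.0      # luminous flux, lumens (hypothetical source)
area = 4.0          # uniformly illuminated area, square meters (hypothetical)
phi_e = 60.0        # radiant power consumed, watts (hypothetical)
t = 2.0             # duration, seconds

E = phi_v / area            # illuminance E = dΦv/dA (uniform case): 250 lx
Q = phi_v * t               # luminous energy Q = ∫Φv dt (constant flux): 2000 lm·s
K = phi_v / phi_e           # luminous efficacy K = Φv/Φe: ~16.7 lm/W
I = phi_v / (4 * math.pi)   # intensity of an isotropic source, Φv/4π: ~79.6 cd

print(f"E = {E:.1f} lx, Q = {Q:.0f} lm·s, K = {K:.1f} lm/W, I = {I:.1f} cd")
```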


1.6 ASPECT RATIO AND RECTANGULAR SCANNING

1.6.1 ASPECT RATIO

The aspect ratio of an image is the ratio of the width of the image to its height, expressed as two numbers separated by a colon. That is, for an x:y aspect ratio, no matter how big or small the image is, if the width is divided into x units of equal length, the height measured in that same unit will be y units. For example, consider a group of images, all with an aspect ratio of 16:9: one image is 16 inches wide and 9 inches high, another is 16 centimeters wide and 9 centimeters high, and a third is 8 yards wide and 4.5 yards high.

1.6.2 RECTANGULAR SCANNING

Rectangular scanning is a two-dimensional scan in which a slow scan in one direction is superimposed on a rapid scan in the perpendicular direction. Rectangular or progressive scanning, as opposed to interlaced, scans the entire picture line by line every sixteenth of a second; in other words, captured images are not split into separate fields as in interlaced scanning. Computer monitors do not need interlace to show the picture on the screen: the lines are put on the screen one at a time in perfect order, i.e. 1, 2, 3, 4, 5, 6, 7 and so on, so there is virtually no "flickering" effect. As such, in a surveillance application it can be critical for viewing detail within a moving image, such as a person running away. However, a high-quality monitor is required to get the best out of this type of scan.

Example: Capturing moving objects


When a camera captures a moving object, the sharpness of the frozen image depends on the technology used. Compare, for example, JPEG images captured by three different cameras using progressive scan, 4CIF interlaced scan, and 2CIF respectively.

1.7 PERSISTENCE OF VISION AND FLICKER

The rate of 24 pictures/sec in motion pictures and 25 frames/sec in TV is enough to give the impression of image continuity.

It is not enough, however, to allow the brightness of one frame to blend smoothly into the next during the time the screen is blanked between successive frames.

This results in a FLICKER of light that is annoying to the observer, as the screen is made alternately bright and dark.

In motion pictures this problem is solved by projecting each picture twice, so that 48 views of the scene are shown per second, although there are still only 24 distinct picture frames per second.

1.9 KELL FACTOR

The Kell factor, named after RCA engineer Raymond D. Kell, is a parameter used to limit the bandwidth of a sampled image signal to avoid the appearance of beat-frequency patterns when displaying the image on a discrete display device; it is usually taken to be 0.7. The number was first measured in 1934 by Raymond D. Kell and his associates as 0.64, but it has undergone several revisions, since it is based on image perception, hence subjective, and is not independent of the type of display. It was later revised to 0.85, but can go higher than 0.9 when fixed-pixel scanning (e.g., CCD or CMOS) and fixed-pixel displays (e.g., LCD or plasma) are used, or as low as 0.7 for electron-gun scanning.

From a different perspective, the Kell factor defines the effective resolution of a discrete display device, since the full resolution cannot be used without degrading the viewing experience. The actual sampled resolution depends on the spot size and intensity distribution. For electron-gun scanning systems, the spot usually has a Gaussian intensity distribution; for CCDs, the distribution is somewhat rectangular and is also affected by the sampling grid and inter-pixel spacing.

The Kell factor is sometimes incorrectly stated to exist to account for the effects of interlacing. Interlacing itself does not affect the Kell factor, but because interlaced video must be low-pass filtered (i.e., blurred) in the vertical dimension to avoid spatio-temporal aliasing (i.e., flickering effects), the Kell factor of interlaced video is said to be about 70% of that of progressive video with the same scan-line resolution.
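A minimal sketch of this "effective resolution" idea in Python, using the 0.7 figures quoted above; the 480 active lines are an assumed, illustrative count, not a value given in this section.

```python
KELL_FACTOR = 0.7            # typical value from the text
INTERLACE_PENALTY = 0.7      # interlaced is ~70% of progressive, per the text

def effective_lines(active_lines, interlaced=False):
    """Effective vertical resolution after applying the Kell factor."""
    k = KELL_FACTOR * (INTERLACE_PENALTY if interlaced else 1.0)
    return active_lines * k

print(effective_lines(480))                   # progressive: 336.0 lines
print(effective_lines(480, interlaced=True))  # interlaced: 235.2 lines
```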

1.10 VERTICAL AND HORIZONTAL RESOLUTION

The "vertical resolution" of NTSC TV refers to the total number of lines

(rows) scanned from left to right across the screen - BUT Counted from Top to Bottom,

or Vertically. This number is set by the NTSC TV 'Standard' .This Vertical Resolution

number is static - it doesn't change. Therefore, the Vertical Resolution is the same for

ALL TV's manufactured to meet a specified Standard.

The horizontal resolution of television, and of other video displays, depends on the quality of the video signal's source. As an example, the horizontal resolution of VHS tape is about 240 lines; broadcast TV, about 330 lines; laserdisc, about 420 lines; and DVD, about 480 lines.

To avoid getting entangled too deeply in the inherent complexities of TV technology, it is sufficient to note that a number of variables contribute to the 'stated' horizontal resolution value. Even the measurement methods are not always consistent - for instance, how the vertical columns (dots/dashes) are counted: as single black/white (dark and light) lines, or as "line pairs" of one black and one white line.

A TV's resolution can be reported as the result of counting the total number of picture elements (pixels) per scan line, across the entire screen width, multiplied by the total number of scan lines. However, TV screen sizes vary, making an equal comparison of different displays more complex. TVs also differ technically, functionally, and in component quality, which introduces further complications.

An alternative method is to count the number of pixels that fit within a prescribed circle having a diameter equal to the screen height. Known as LPH (Lines per Picture Height), this is the 'correct' method for determining TV resolution.

As this shows, along with other similar variables, the accuracy of a 'stated' horizontal resolution for a particular display may depend on who is doing the 'stating'. For the purposes of this overview, however, the primary point about horizontal resolution is that it is variable: unlike vertical resolution, which is 'fixed', horizontal resolution can differ from one TV display to another.

1.11 VIDEO BANDWIDTH

BWS = 1/2 [(K × AR × (VLT)² × FR) × (KH / KV)]   (1)

Where:

BWS = total signal bandwidth
K = Kell factor
AR = aspect ratio (the width of the display divided by the height of the display)
VLT = total number of vertical scan lines
FR = frame rate or refresh rate
KH = ratio of total horizontal pixels to active pixels
KV = ratio of total vertical lines to active lines

The circuits that process video signals need more bandwidth than the actual bandwidth of the processed signal, to minimize degradation of the signal and the resulting loss in picture quality. How much the circuit bandwidth must exceed the highest frequency in the signal is a function of the quality desired. To calculate this, we assume a single-pole response and use the following equation:

H(f) (dB) = 20 log [1 / (1 + (BWS / BW-3dB)²)^0.5]   (2)

Rearranging and solving for the -0.1 dB and -0.5 dB attenuation points, we get the following:

BW-3dB,min = BWS(-0.1 dB) × 6.55   (3)

BW-3dB,min = BWS(-0.5 dB) × 2.86   (4)

Where:

BW-3dB,min = the minimum -3 dB bandwidth required for the circuit

For less than 0.1 dB attenuation, then, the circuit needs a minimum bandwidth about six and a half times the highest frequency in the signal; if 0.5 dB attenuation can be tolerated, about three times is enough. To account for normal variations in the bandwidth of integrated circuits, it is recommended that the results from equations 3 and 4 be multiplied by a factor of 1.5. This will ensure that the attenuation performance is met over worst-case conditions. In equation form:

BW-3dB,nominal = BW-3dB,min × 1.5   (5)

In addition to bandwidth, the circuits must slew fast enough to faithfully reproduce the video signal. The equation for the minimum slew rate is as follows:

SRMIN = 2π × BWS × Vpeak   (6)

Substituting Vpeak = 1 V and simplifying,

SRMIN = BWS × 6.283

Some distortion can occur as the frequency of the signal approaches the slew-rate limit, introducing frequency distortion that degrades picture quality. Multiplying the equation 6 result by a factor of at least two or three will ensure that this distortion is minimized. In equation form:

SRnominal = SRMIN × 2   (7)

As an example, let's assume a standard NTSC video signal with the following parameters:

VLT = 525
TVL = 346
AR = 1.3333
KH = 1.17
FR = 29.94
KV = 1.09


Using equation 1, we calculate a maximum signal bandwidth (BWS) of about 4.2 MHz; this is the highest frequency in the signal. Now assume that we need less than 0.1 dB attenuation. Using equation 3, the minimum circuit -3 dB bandwidth required is 27.5 MHz, and applying equation 5 to account for variations gives 41.3 MHz. This is the circuit -3 dB bandwidth required to achieve the desired resolution and maintain the signal quality. The last calculation for this example is the minimum slew-rate requirement: using equations 6 and 7 with the 4.2 MHz value of BWS, we need a slew rate of at least 52 V/μs, with 80 V/μs a more desirable value.
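The following Python sketch reproduces this worked example directly from equations 1, 3, 5, 6, and 7. Note that the Kell factor K is not among the listed parameters, so the typical value 0.7 from Section 1.9 is assumed here; the exact result for BWS then comes out near 4.13 MHz, which the text rounds to about 4.2 MHz, so the derived figures differ slightly from the rounded ones quoted.

```python
import math

# NTSC example parameters from the text
K = 0.7                  # Kell factor: assumed (typical value from Section 1.9)
AR, VLT, FR = 4 / 3, 525, 29.94   # aspect ratio, total lines, frame rate
KH, KV = 1.17, 1.09      # total/active ratios, horizontal and vertical
V_PEAK = 1.0             # assumed peak signal amplitude, volts

# Equation 1: total signal bandwidth, Hz
bws = 0.5 * K * AR * VLT**2 * FR * (KH / KV)

# Equations 3 and 5: minimum and nominal circuit -3 dB bandwidth for <0.1 dB loss
bw_3db_min = 6.55 * bws
bw_3db_nom = 1.5 * bw_3db_min

# Equations 6 and 7: minimum and nominal slew rate, V/s
sr_min = 2 * math.pi * bws * V_PEAK
sr_nom = 2 * sr_min

print(f"BWS        = {bws / 1e6:.2f} MHz")         # ~4.13 MHz (text rounds to 4.2)
print(f"BW-3dB,min = {bw_3db_min / 1e6:.1f} MHz")  # ~27.1 MHz (text: 27.5)
print(f"BW-3dB,nom = {bw_3db_nom / 1e6:.1f} MHz")  # ~40.6 MHz (text: 41.3)
print(f"SR,min     = {sr_min / 1e6:.1f} V/us")     # ~26 V/us
print(f"SR,nominal = {sr_nom / 1e6:.1f} V/us")     # ~52 V/us; x3 gives ~80 V/us
```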

1.12 INTERLACED SCANNING

Interlaced scan-based images use techniques developed for Cathode Ray Tube (CRT) TV monitor displays, made up of 576 visible horizontal lines across a standard TV screen. Interlacing divides these into odd and even lines, which are then refreshed alternately at 30 frames per second. The slight delay between odd and even line refreshes creates some distortion or 'jaggedness', because only half the lines keep up with the moving image while the other half waits to be refreshed.

Fig: 1.5

The effects of interlacing can be somewhat compensated for by de-interlacing: the process of converting interlaced video into a non-interlaced form by eliminating some jaggedness from the video for better viewing. This process is also called line doubling. Some network video products, such as Axis video servers, integrate a de-interlace filter which improves image quality at the highest resolution (4CIF). This feature eliminates the motion-blur problems caused by the analog video signal from the analog camera.
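As a rough sketch of the "line doubling" idea, the following Python/NumPy fragment splits a frame into its two fields and rebuilds a full-height frame from a single field by repeating each of its lines. This is the simplest possible de-interlacer, shown only to make the terminology concrete; real filters interpolate between lines and use motion detection.

```python
import numpy as np

def split_fields(frame):
    """Split an interlaced frame into its two fields (by row parity)."""
    return frame[0::2], frame[1::2]

def line_double(field):
    """Rebuild a full-height frame from one field by repeating each line."""
    return np.repeat(field, 2, axis=0)

frame = np.arange(16, dtype=np.uint8).reshape(8, 2)  # toy 8-line 'frame'
field_a, field_b = split_fields(frame)               # lines 1,3,5,... and 2,4,6,...
print(line_double(field_a).shape)                    # (8, 2): full height restored
```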

Interlaced scanning has served the analog camera, television, and VHS video world very well for many years, and is still the most suitable for certain applications. However, now that display technology is changing with the advent of Liquid Crystal Display (LCD) and Thin Film Transistor (TFT)-based monitors, DVDs, and digital cameras, an alternative method of bringing the image to the screen, known as progressive scanning, has been created.

1.13 CAMERA LENSES

1.13.1 Lens Focal Length

We define focal length as the distance from the optical center of the lens to the focal plane (the target or "chip" of the video camera) when the lens is focused at infinity. We consider any object in the far distance to be at infinity; on a camera lens the symbol ∞ (similar to an "8" on its side) indicates infinity. Since the lens-to-target distance for most lenses increases when the lens is focused on anything closer than infinity, we specify infinity as the standard for focal-length measurement.

Focal length is generally measured in millimeters. In the case of lenses with fixed focal lengths, we can talk about a 10mm lens, a 20mm lens, a 100mm lens, etc. As we will see, this designation tells a lot about how the lens will reproduce subject matter.

Fig:1.6

1.13.2 Zoom and Prime Lenses

Zoom lenses came into common use in the early 1960s. Before then, TV cameras used lenses of different focal lengths mounted on a turret on the front of the camera. The cameraperson rotated each lens into position and focused it while the camera was off the air.

Today, most video cameras use zoom lenses. Unlike turret lenses, each of which operates at only one focal length, the effective focal length of a zoom lens can be continuously varied. This typically means that the lens can go from a wide-angle to a telephoto perspective.

To make this possible, zoom lenses use numerous glass elements, each of which is precisely ground, polished, and positioned. The spacing between these elements changes as the lens is zoomed in and out.


Fig: 1.7

With prime lenses, the focal length of the lens cannot be varied. It might seem a step backwards to use a prime lens, a lens that operates at only one focal length. Not necessarily: some professional videographers and directors of photography, especially those who have their roots in film, feel prime lenses are more predictable in their results. (Of course, it also depends on what you're used to using!)

Prime lenses also come in more specialized forms, for example super wide-angle, super telephoto, and super fast (i.e., transmitting more light).

However, for normal work, zoom lenses are much easier and faster to use. The latest HDTV zoom lenses are extremely sharp, almost as sharp as the best prime lenses.

1.13.3 Angle of View


Angle of view is directly associated with lens focal length: the longer the focal length (in millimeters), the narrower the angle of view (in degrees). This relationship can be seen by comparing the angles of view of different prime lenses.

A telephoto lens (or a zoom lens operating at maximum focal length) has a narrow angle of view. Although there is no exact definition for the "telephoto" designation, angles from about 3 to 10 degrees can be considered the telephoto range, while angles from about 45 to 90 degrees represent the wide-angle range. The normal range of angles of view lies between telephoto and wide angle.

With the camera in the same position, a short focal length creates a wide view, and a long focal length creates an enlarged (magnified) image in the camera.
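The relationship can be put into numbers with the standard thin-lens approximation: angle of view = 2·arctan(sensor width / (2 × focal length)). The formula itself is standard optics, but the 8.8 mm sensor width below (a common 2/3-inch image-format width) is an assumption chosen for illustration.

```python
import math

SENSOR_WIDTH_MM = 8.8   # assumed 2/3-inch image-format width; illustrative only

def angle_of_view_deg(focal_length_mm, width_mm=SENSOR_WIDTH_MM):
    """Horizontal angle of view for a lens focused at infinity."""
    return math.degrees(2 * math.atan(width_mm / (2 * focal_length_mm)))

for f in (10, 20, 100):   # the fixed focal lengths mentioned in the text
    print(f"{f:>3} mm lens -> {angle_of_view_deg(f):5.1f} degrees")
# 10 mm -> ~47.5 deg (wide angle); 100 mm -> ~5.0 deg (telephoto range)
```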

1.14 AUTOFOCUS SYSTEMS


Fig:1.8

There are two main ways for cameras to focus automatically: contrast detection and phase detection. The former uses data from the CCD or CMOS sensor and looks at how sharp the resulting photograph would be. It is simple but slow, as the camera has to step through the possibilities until it finds the position where the subject is most clearly contrasted against the background. The latter uses a tool that works like a rangefinder, which accurately calculates the correction needed to bring the subject into focus. It is fast, but difficult to implement, as the light coming into the lens needs to reach both the phase detector and the sensor (or the film) at the same time. This has meant that phase detection has traditionally been reserved for SLRs, which already have a mirror that sends the image to the viewfinder; a second mirror also sends it down to the phase detector. While focusing is taking place, the sensor is covered by these mirrors, which rules out video. SLRs that do shoot video fold their mirrors out of the way and rely on the contrast detection found on ordinary compacts.
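A minimal sketch of the contrast-detection search described above: the camera steps the lens through candidate focus positions, scores each with a simple sharpness metric, and keeps the best. The metric used here (summed squared differences between neighbouring pixels) and the simulated blur are illustrative choices, not any particular camera's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
SCENE = rng.integers(0, 256, size=256).astype(float)  # a toy 1-D 'scene'

def image_at(focus_pos, true_focus=0.37):
    """Simulate capture: the further from true focus, the more the scene blurs."""
    blur = 1 + int(abs(focus_pos - true_focus) * 40)  # crude blur width
    kernel = np.ones(blur) / blur
    return np.convolve(SCENE, kernel, mode="same")

def sharpness(img):
    """Contrast metric: sum of squared differences of adjacent pixels."""
    return float(np.sum(np.diff(img) ** 2))

# Step through candidate positions and keep the sharpest (contrast detection).
positions = np.linspace(0.0, 1.0, 41)
best = max(positions, key=lambda p: sharpness(image_at(p)))
print(f"best focus position ~ {best:.3f}")  # close to the true focus of 0.37
```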


1.15 CAMERA PICK UP DEVICES

The scene or picture is focused by the lens system onto a photosensitive target plate inside the pickup tube.

The electrical state of each elemental area of the target varies with the intensity of the light falling on it.

The electrical response of each element is read off by an electron beam, producing electrical pulses.

The target plate is held at an electrical potential with respect to the cathode of the pickup tube, and the beam current varies in accordance with the electrical state of each picture element.

The beam scans the image horizontally by means of the magnetic field set up by the horizontal deflection coils, and vertically by means of the field set up by the vertical deflection coils.

The scanning must be done at high speed to follow changing or moving pictures.

1.16 IMAGE ORTHICON

1.16.1 INTRODUCTION

The image orthicon was common in American broadcasting from 1946 until 1968. A combination of the image dissector and orthicon technologies, it replaced the iconoscope and the orthicon, which required a great deal of light to work adequately.

While the iconoscope and the intermediate orthicon used capacitance between a multitude of small but discrete light-sensitive collectors and an isolated signal plate for reading video information, the image orthicon employed direct charge readings from a continuous electronically charged collector. The resultant signal was immune to most extraneous signal "crosstalk" from other parts of the target and could yield extremely detailed images. For instance, image orthicon cameras were still used for capturing Apollo/Saturn rockets nearing orbit after the networks had phased them out, as only they could provide sufficient detail.

An image orthicon camera can take television pictures by candlelight because of its more ordered light-sensitive area and the presence of an electron multiplier at the base of the tube, which operates as a high-efficiency amplifier. It also has a logarithmic light-sensitivity curve similar to that of the human eye. However, it tends to flare in bright light, causing a dark halo to be seen around the object; this anomaly was referred to as "blooming" in the broadcast industry when image orthicon tubes were in operation. Image orthicons were used extensively in the early color television cameras, where their increased sensitivity was essential to overcome the cameras' very inefficient optical systems.

Fig:1.9

1.16.2 OPERATION

An image orthicon consists of three parts: a photocathode with an image store ("target"), a scanner that reads this image (an electron gun), and a multistage electron multiplier.

In the image store, light falls upon the photocathode, a photosensitive plate at a very negative potential (approx. -600 V), and is converted into an electron image (a principle borrowed from the image dissector). Once the image electrons reach the target, they cause a "splash" of electrons by the effect of secondary emission. On average, each image electron ejects several "splash" electrons, and these excess electrons are soaked up by the positive mesh, effectively removing electrons from the target and leaving a positive charge on it in relation to the incident light on the photocathode. The result is an image painted in positive charge, with the brightest portions carrying the largest positive charge.

A sharply focused beam of electrons (a cathode ray) is generated by the electron gun at ground potential and accelerated by the anode around the gun, which is at a high positive voltage (approx. +1500 V). Once it exits the electron gun, its inertia carries the beam away from the dynode towards the back side of the target. At this point the electrons lose speed and are deflected by the horizontal and vertical deflection coils, effectively scanning the target. Thanks to the axial magnetic field of the focusing coil, this deflection is not in a straight line, so when the electrons reach the target they do so perpendicularly, avoiding a sideways component. The target is nearly at ground potential with a small positive charge, so when the electrons reach the target at low speed they are absorbed without ejecting more electrons. This adds negative charge to the positive charge until the region being scanned reaches a threshold negative charge, at which point the scanning electrons are reflected by the negative potential rather than absorbed (in this process the target recovers the electrons needed for the next scan). These reflected electrons return down the tube toward the first dynode of the electron multiplier surrounding the electron gun, which is at high potential. The number of reflected electrons is a linear measure of the target's original positive charge, which, in turn, is a measure of brightness.

Additional amplification is performed via secondary emission in the electron multiplier, which consists of a stack of charged dynodes (pinwheel-like disks surrounding the electron gun) at progressively higher potentials. As the returning electron beam hits the first dynode, it ejects electrons just as at the target: for each electron striking a dynode, many are emitted. These secondary electrons are then drawn toward the next dynode at a higher potential, where the splashing continues for a number of stages. Consider a single, highly energized electron hitting the first dynode and causing, say, four electrons to be emitted and drawn towards the next dynode; each of these might then cause four more to be emitted, so by the start of the third stage there would be about 16 electrons for the original one. Stacks of as many as 5 to 10 stages were not unusual, so the achieved amplification is considerable.
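The multiplication compounds geometrically, as this short sketch shows; the factor of four electrons per dynode is the text's example value, and gains per stage in real tubes vary.

```python
def multiplier_gain(electrons_per_hit=4, stages=5):
    """Ideal electron-multiplier gain: each stage multiplies the electron count."""
    return electrons_per_hit ** stages

print(multiplier_gain(4, 2))    # 16 electrons entering the third stage
print(multiplier_gain(4, 5))    # 1024x gain for a 5-stage multiplier
print(multiplier_gain(4, 10))   # ~1.05 million x for 10 stages
```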

The mysterious "dark halo" around bright objects in an IO-captured

image is based in the very fact that the IO relies on the splashing caused by highly

energized electrons. When a very bright point of light) is captured, a great preponderance

of electrons is ejected from the image target. So many are ejected that the corresponding

point on the collection mesh can no longer soak them up, and thus they fall back to

nearby spots on the target much as splashing water when a rock is thrown in forms a ring.

Since the resultant splashed electrons do not contain sufficient energy to eject enough

electrons where they land, they will instead neutralize any positive charge in that region.

Since darker images result in less positive charge on the target, the excess electrons

deposited by the splash will be read as a dark region by the scanning electron beam.

1.17 VIDICON

1.17.1 INTRODUCTION

A vidicon tube is a video camera tube design in which the target material is a photoconductor. While the initial photoconductor used was selenium, other targets, including silicon diode arrays, have been used.

Fig:1.10


1.17.2 OPERATION

The vidicon is a storage-type camera tube in which a charge-density pattern is formed on a photoconductive surface by the imaged scene radiation and then scanned by a beam of low-velocity electrons. The fluctuating voltage coupled out to a video amplifier can be used to reproduce the scene being imaged. The electrical charge produced by an image remains in the faceplate until it is scanned or until the charge dissipates. Pyroelectric photocathodes can be used to produce a vidicon sensitive over a broad portion of the infrared spectrum.

1.18 PLUMBICON

1.18.1 INTRODUCTION

Plumbicon is a registered trademark of Philips for its lead oxide (PbO) target vidicons. Used frequently in broadcast camera applications, these tubes have low output but a high signal-to-noise ratio. They had excellent resolution compared with image orthicons, but lacked the artificially sharp edges of IO tubes, which caused some of the viewing audience to perceive them as softer. CBS Labs invented the first outboard edge-enhancement circuits to sharpen the edges of Plumbicon-generated images.

Fig:1.11


1.18.2 OPERATION

Compared with Saticons, Plumbicons had much higher resistance to burn-in and to the comet and trailing artifacts from bright lights in the shot; Saticons, though, usually had slightly higher resolution. After 1980 and the introduction of the diode-gun Plumbicon tube, the resolution of both types was so high, compared with the limits of the broadcasting standard, that the Saticon's resolution advantage became moot. While broadcast cameras migrated to solid-state Charge-Coupled Devices (CCDs), Plumbicon tubes remain a staple imaging device in the medical field.

Narragansett Imaging is the only company now making Plumbicons, and it does so in the factories Philips built for that purpose in Rhode Island, USA. While still a part of Philips, the company purchased EEV's (English Electric Valve) lead-oxide camera tube business, gaining a monopoly in lead-oxide tube production.

1.18.3 OTHER CAMERA DEVICES

1.18.3.1 Saticon

Saticon is a registered trademark of Hitachi, with tubes also produced by Thomson and Sony. It was developed in a joint effort by Hitachi and NHK (Japan Broadcasting Corporation). Its surface consists of selenium with trace amounts of arsenic and tellurium added (SeAsTe) to make the signal more stable; the "SAT" in the name is derived from SeAsTe.

1.18.3.2 Newvicon

Newvicon is a registered trademark of Matsushita. Newvicon tubes were characterized by high light sensitivity. Their surface consists of a combination of zinc selenide (ZnSe) and zinc cadmium telluride (ZnCdTe).

1.18.3.3 Trinicon


Trinicon is a registered trademark of Sony. It uses a vertically striped RGB color filter over the faceplate of an otherwise standard vidicon imaging tube to segment the scan into corresponding red, green, and blue segments. Only one tube was used in the camera, instead of one tube for each color, as was standard for color cameras used in television broadcasting. It was used mostly in low-end consumer cameras and camcorders, though Sony also used it in some moderately priced professional cameras in the 1980s, such as the DXC-1800 and BVP-1 models.

1.20 CCD SOLID STATE IMAGE SCANNERS

The operation of solid-state image scanners is based on MOS circuitry.

The CCD may be thought of as a shift register formed by a string of very closely spaced MOS capacitors. It can store and transfer analog charge signals (either electrons or holes) introduced electrically or optically.

The chip consists of a p-type substrate, one side of which is oxidized to form a film of silicon dioxide, an insulator. By a photolithographic process, an array of metal electrodes, known as gates, is deposited on the insulating film.

The result is the creation of a very large number of tiny MOS capacitors on the surface of the chip; the shift-register action is sketched after Fig. 1.12 below.

Fig:1.12
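A toy model of the CCD-as-shift-register idea: each cell holds a charge packet, and on every clock cycle all packets move one cell toward the output, where they are read out in turn. This ignores transfer inefficiency and the multi-phase gating of real devices; it only illustrates the data movement.

```python
def ccd_readout(charges):
    """Shift all charge packets out of the register, one per clock cycle."""
    register = list(charges)           # one charge packet per MOS capacitor
    output = []
    while register:
        output.append(register.pop())  # packet at the output end is read out;
        # the remaining packets have each moved one cell toward the output
    return output

# Charge pattern left by an optical image (arbitrary illustrative numbers)
print(ccd_readout([5, 9, 1, 7]))       # read out in order: 7, 1, 9, 5
```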

1.21 COMPARISON OF CAMERA TUBES


1.22 CAMERA TUBE DEFLECTION UNIT


Fig:1.13

The pickup tube mounts inside a deflection coil unit which consists of a focusing coil, horizontal and vertical deflection coils, and alignment coils and magnets.

The focusing coil surrounds the entire tube, extending from the electron gun to the faceplate, and produces an axial field from the DC current passing through it.

The horizontal and vertical deflection coils are pairs of coils, each in the shape of a yoke, mounted on the pickup tube. The horizontal deflection coils produce a vertical field, and the vertical deflection coils produce a horizontal field.

The strength of the deflecting magnetic field is about one tenth of that of the focusing coil. The required currents have to be supplied by the deflection drive circuits of the camera chain.

The alignment coils are a pair of coils, positioned just outside the limiting aperture, that produce a magnetic field at right angles to the tube axis.

1.23 VIDEO PROCESSING OF CAMERA SIGNALS

Digital video comprises a series of orthogonal bitmap digital images displayed in rapid succession at a constant rate. In the context of video these images are called frames, and the rate at which frames are displayed is measured in frames per second (FPS).


Since every frame is an orthogonal bitmap digital image, it comprises a raster of pixels. If it has a width of W pixels and a height of H pixels, we say that the frame size is WxH.

Pixels have only one property: their color. The color of a pixel is represented by a fixed number of bits; the more bits, the more subtle the variations of color that can be reproduced. This is called the color depth (CD) of the video.

As an example, consider a video with a duration (T) of 1 hour (3600 sec), a frame size of 640x480 (WxH), a color depth of 24 bits, and a frame rate of 25 fps. This example video has the following properties:

1. pixels per frame = 640 × 480 = 307,200

2. bits per frame = 307,200 × 24 = 7,372,800 ≈ 7.37 Mbits

3. bit rate (BR) = 7,372,800 bits × 25 fps = 184,320,000 bits/sec ≈ 184.3 Mbits/sec

4. video size = 184.32 Mbits/sec × 3600 sec = 663,552 Mbits = 82,944 Mbytes ≈ 82.9 Gbytes
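These figures follow from straightforward multiplication, as the short sketch below confirms (decimal megabits and megabytes are used, matching the text):

```python
W, H = 640, 480          # frame size, pixels
CD = 24                  # color depth, bits per pixel
FPS = 25                 # frame rate, frames per second
T = 3600                 # duration, seconds

pixels_per_frame = W * H                 # 307,200
bits_per_frame = pixels_per_frame * CD   # 7,372,800 bits (~7.37 Mbit)
bit_rate = bits_per_frame * FPS          # 184,320,000 bit/s (~184.3 Mbit/s)
size_bits = bit_rate * T                 # 663,552,000,000 bits for one hour

print(f"bit rate = {bit_rate / 1e6:.2f} Mbit/s")
print(f"size = {size_bits / 1e6:,.0f} Mbit = {size_bits / 8e6:,.0f} Mbyte"
      f" = {size_bits / 8e9:.1f} Gbyte")  # 663,552 Mbit = 82,944 Mbyte = 82.9 GB
```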

Fig:1.14

REFERENCES:

1. A. M. Dhake, "Television and Video Engineering", 2nd edition, TMH, 2003.

2. R. R. Gulati, "Modern Television Practice - Technology and Servicing", 2nd edition, New Age International, 2004.