Highlights of my 48 years in optical design


Dave Shafer, David Shafer Optical Design

As a young boy I was always fascinated by magnifying glasses

Optics is kind of like magic

Some kinds of flashlight bulbs have a very small glass lens on their tips.

I used to carefully break the end off with a hammer and use the tiny lens as a high power magnifying glass – about 50X magnification

I also made water drop microscopes. A small drop of water can very easily give 100X magnification, but it has to be held up extremely close to your eye for you to see through it.

The first single lens microscope, from 300 years ago, had a tiny glass lens and was about 250X, but a water drop works well too.
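Those magnification figures are easy to sanity-check. As a minimal sketch (my own illustration, not from the original slides), the effective focal length of a full glass or water sphere is f = n*r / (2*(n - 1)), and the conventional magnifier power is about 250 mm / f. The drop and bead sizes below are assumptions.

# Minimal sketch: ball-lens focal length and magnifier power.
# The drop/bead sizes below are illustrative assumptions, not values
# taken from the slides.

def ball_lens_focal_length_mm(radius_mm, n):
    """Effective focal length of a full sphere, measured from its center."""
    return n * radius_mm / (2.0 * (n - 1.0))

def magnifier_power(focal_length_mm, near_point_mm=250.0):
    """Conventional magnifier power, relative to the 250 mm near point."""
    return near_point_mm / focal_length_mm

for name, n, radius_mm in [("water drop (2 mm diameter)", 1.33, 1.0),
                           ("glass bead (1.2 mm diameter)", 1.52, 0.6)]:
    f = ball_lens_focal_length_mm(radius_mm, n)
    print(f"{name}: f = {f:.2f} mm, about {magnifier_power(f):.0f}X")

With these assumed sizes the water drop comes out near 120X and the small glass bead near 280X, in the same range as the 100X and 250X figures above.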

I lived on a small farm until I went to college. We had 5,000 chickens.

We also had one cow, and I did not drink pasteurized milk until I went to college.

Our farm was very far from city lights and the night skies were very dark – perfect for astronomy. Many people have never seen a really dark sky.

When I was 13 years old I got a mail-order kit for grinding and polishing a 150 mm aperture telescope mirror.

I bought a small star spectroscope (100 mm long) and drew charts of the solar spectrum, with its many absorption lines. Now, over 50 years later, that exact same spectroscope costs about 10X more money.

I was hooked on optics! When I was 16 years old I got Conrady’s two books on lens design.

I also got a book that was full of complicated diagrams like this one. It made optics look pretty difficult.

When I was in high school there were no personal computers yet, and no large mainframe computers available to the general public. I traced a few light rays through an achromatic doublet lens, with trigonometric ray tracing using tables of 6 decimal place logarithms. After you do that once you never want to do it again! But I still knew that I wanted to be a lens designer.

When I went to college in 1961, big universities had a mainframe computer. Data was input using punched cards. At the University of Rochester, where I went, the Optics department was able to use this computer, and students like me were able to do some simple lens design problems.

At that time, in 1961, there were only two places in the Western hemisphere where you could get an undergraduate degree in optics: the University of Rochester, where I went, and Imperial College in London. In addition to learning about optics, I also met my wife there. We have been married 47 years, have two children and five grandchildren. My son worked at ASML for 20 years and was a vice-president there when he changed jobs a few years ago. My daughter is a university professor of Art History.

In the 1950’s electro-mechanical calculators (electricity powered the calculating gears) were used to do optical raytracing. To trace one light ray through one optical surface took about 3 minutes. In the early 1960’s true digital computers (mainframes) were developed and they could trace one ray-surface per second. Today an ordinary PC can trace about 30 million ray-surfaces per second.
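Put as a quick back-of-the-envelope comparison, using only the rates quoted above (the arithmetic is mine, not from the slides):

# Ray-surface tracing rates quoted above, in ray-surfaces per second.
rates = {
    "1950s electro-mechanical calculator": 1.0 / 180.0,  # about 3 minutes each
    "early 1960s mainframe": 1.0,
    "today's ordinary PC": 30e6,
}

baseline = rates["1950s electro-mechanical calculator"]
for machine, rate in rates.items():
    speedup = rate / baseline
    print(f"{machine}: {rate:.3g} ray-surfaces/s ({speedup:.3g}x the 1950s rate)")

That works out to a speedup of roughly five billion times over about sixty years.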

Optimization of optical systems requires matrix inversion. Hand calculations or electromechanical calculators in the 1950’s did 2 x 2 matrix inversions – two variables and two aberrations – very many of them, in sequence. Today, with my PC, I optimize complex lithographic lenses with many high-order aspherics. I optimize several thousand rays using about 100 variables, and there is an enormous matrix inversion – in just a few seconds.
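The kind of matrix step involved is, in most lens design programs, a damped least-squares update: a Jacobian of ray or aberration errors versus the construction variables is built, and a damped normal-equation system is solved for the variable changes. The sketch below is a generic illustration at the problem sizes mentioned above, with random numbers standing in for real ray data; it is not the code of any particular program.

import numpy as np

# Illustrative sizes, in the spirit of the text: about 100 variables and
# a merit function built from a few thousand ray errors.
n_rays, n_vars = 3000, 100
rng = np.random.default_rng(0)

J = rng.normal(size=(n_rays, n_vars))  # Jacobian: d(ray error) / d(variable)
f = rng.normal(size=n_rays)            # current ray errors (merit-function terms)
damping = 1e-3                         # Levenberg-style damping factor

# Damped least squares: solve (J^T J + damping * I) dx = -J^T f
A = J.T @ J + damping * np.eye(n_vars)
dx = np.linalg.solve(A, -J.T @ f)      # suggested changes to the 100 variables

print("first few variable changes:", dx[:5])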

Although computers have revolutionized optical design, there is still a big need for creative thinking by the designer, using their own mental PC.

My first job after college, in 1966, was at a small high-tech company that did military optics – mostly very high resolution reconnaissance cameras for the U-2 spy plane and for early space satellite cameras. I worked on a top-secret project there that was a new way to detect Russian submarines. Some years ago this secret technology was declassified and today you can read all about it on the internet.

Submarine with its periscope above the water surface

In World War I and World War II submarines would be found by looking for their periscopes sticking up above the water. Sometimes the sun would reflect off the front surface of the periscope optics, but there was also sun glint off of the water waves and it was very hard to tell them apart.

From an airplane the water wake left by the moving periscope could be seen.

But if the submarine was moving slowly or not at all then the wake was very hard to see, like in this case here.

What was needed was a new and highly sensitive way to spot submarine periscopes, when they were above the surface of the water. The solution was to use optics and lasers in a new, top-secret way.

This new technology was given, back in 1966, the code name “Optical Augmentation” and it is still called that today. You can look it up on the internet.

We all know about red eye from camera flash photos.

The eye retina reflects back the focused light and then it is collimated by the eye lens. It can then travel long distances without spreading very much. That is why flash camera “red eyes” are so bright, like this cat.

Diagram: the near-IR laser beam travels down the periscope optics and is focused onto the eye retina of the observer.

A low power near-IR laser beam was sent out over the water surface from a ship and scanned around through 360 degrees. If there is a periscope above the water, the laser light goes down the periscope optics tube and is focused on the eye retina of the person who is looking through the periscope. That light then reflects off the retina, is collimated by the eye’s lens, and reverses its path back up the tube and out. It travels back over the water to the ship where the laser is located, and a very bright “red eye” can be seen.


The energy collection area of the periscope optics is very much larger than that of the eye by itself, so the retro-reflected signal is orders of magnitude larger and gives a huge “red eye” effect.
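For a rough feel for that gain, here is a purely illustrative calculation with assumed aperture sizes (the slides give none): a periscope objective several centimeters across collects far more of the laser light than a bare eye pupil would.

import math

# Assumed apertures, for illustration only.
eye_pupil_mm = 5.0              # typical dark-adapted eye pupil
periscope_objective_mm = 50.0   # assumed periscope objective diameter

def aperture_area_mm2(diameter_mm):
    return math.pi * (diameter_mm / 2.0) ** 2

gain = aperture_area_mm2(periscope_objective_mm) / aperture_area_mm2(eye_pupil_mm)
print(f"collection-area ratio: about {gain:.0f}x")   # ~100x with these numbers

Even with these modest assumed numbers the collection area is about a hundred times larger, and the retro-reflected return grows accordingly.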

You may find this hard to believe, but with this relatively simple technology a submarine periscope many kilometers away can be detected. The laser used is near-IR instead of a visible wavelength so that the person looking through the periscope will not know that they have been detected.

This same technology can be used in other ways. Airplanes can detect the eyes of soldiers looking through the sights of camouflaged anti-aircraft guns. Film or a detector array at the focus of a camera also reflects back light and that is then collimated by the camera lens on the way back out. Hidden cameras can be found this way. From the ground level a laser can detect space satellite camera optics. A pulsed laser can actually measure the distance to a hidden camera, telescope, or periscope.

Today you can buy several versions of this declassified technology on the internet for less than $100 and find hidden cameras in your hotel room or other places, especially those tiny pin-hole sized cameras - like on cellphones.

Early warning missile defense system

(Work I did in 1972, 42 years ago).

In 1971 I changed jobs and worked for a company that specialized in infra-red military optics. One project was this ----

If a missile from behind the earth comes over the rim of the earth, it will be seen here by a satellite against a black sky, but it will be very close to an extremely bright earth, which gives an unwanted signal that vastly exceeds the missile’s infra-red heat signal. But that is the easy case. Much worse is when the satellite is on the night side and the missile is seen against a sun-lit earth’s limb.

With the sun behind the horizon, the earth’s limb is ten orders of magnitude brighter than the missile’s infra-red heat signal.

Astronomers use a special kind of telescope, a coronagraph, to look at the sun’s corona. They need to block out the light from the body of the sun and just look at the sun’s edge. This is possible using a “Lyot stop,” and this very old technology was used in missile defense satellite optics. It can block out very bright light that is just outside the field of view of the telescope and which is being diffracted into that field of view. That unwanted diffracted light can be many orders of magnitude brighter than the dim signal that the telescope wants to see in its field of view.

Diagram, Lyot stop principle: two confocal parabolic mirrors give well-corrected afocal imagery. The rim of the first aperture stop is the source of diffracted light from the earth limb. That diffracted light is focused onto a second aperture stop, which is smaller than the image of the first stop and blocks the out-of-field diffracted light while passing the field-of-view rays.

The use of the Lyot stop principle, plus super-polished optics, makes it possible to reject almost all of the extremely bright unwanted signal from the sun and the earth’s limb and to just see the missile signal.

I worked on some space optics systems to make accurate measurements of the earth limb signal profile, as well as some wide angle reflective space-based telescopes for reconnaissance.

I also designed optics for medical infra-red imaging systems. The infra-red heat temperature map of a person’s face or other parts of the body can often show different kinds of illness, including cancer. There is no physical contact with the patient, just infra-red optical imaging.

Display shows temperature as different colors.

In 1975 I changed companies again and went to work for Perkin-Elmer Corp., a maker of laboratory instruments. They were just starting to get into making some lithographic equipment.

Their “Micralign” optical system made it possible to make 1.0u circuit feature sizes on 75 mm diameter silicon wafers, using mercury i-line light from a lamp. This was a 1X magnification system. I designed a next-generation 5X system that was able to make .50u feature sizes. The 5X magnification made the mask easier to make.
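The mask-making advantage is just the reduction ratio at work; a quick check with the numbers above (my arithmetic, not a slide):

# Feature sizes in microns, from the text above.
wafer_feature_um = 0.50   # feature printed on the wafer by the 5X system
magnification = 5.0       # reduction ratio of the projection optics

mask_feature_um = wafer_feature_um * magnification
print(f"mask feature: {mask_feature_um:.1f} um for a "
      f"{wafer_feature_um:.2f} um wafer feature")   # 2.5 um vs. 0.50 um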

An ant holding a 1.0 mm square chip, with tiny circuit features. What plans does the ant have for this chip?

It is hard for us to imagine how small one micron really is.

A guitar made the same size as a red blood cell, using nanotechnology


30 years ago computer chips had circuit features one micron in size


Today’s chips have about .03u circuit features

In 1976 I also worked on early experiments in laser fusion.

Laser fusion, if it ever works, will be about as cost-effective a way to produce energy as it is to go to the moon in order to get some sand for your children’s sandbox. Its main use, if it works, will probably be to test the physics of new nuclear bomb designs. My work was in the very early days of laser fusion, around 1976.

Diagram: two conic mirrors, a highly aspheric lens, and the tiny target pellet, which is filled with tritium gas. Very high power laser beams enter from opposite sides and are focused onto the target pellet, producing laser fusion ignition at 100 million degrees.


The highly aspheric lens was made of the highest possible purity glass but it would still absorb enough of the very high power laser energy so that it would often explode!


I thought of a new type of design where there are two reflections from the mirrors instead of one, before focusing on the target pellet. The result is that the focusing lens is much thinner, with much less asphericity, and it does not explode. It is also much less expensive to make.

An identical ray path on the other side of the system is not shown here.

One of my first patents, in 1977, was for an unusual kind of telescope that only has spherical mirrors.

Many years later one of these unusual telescopes was sent on the Cassini spacecraft to Saturn. Later another one went to the asteroid Vesta, and it was there just a few months ago, taking photographs.

This is the Cassini spacecraft before being launched. Another of my telescopes is on a space mission to visit a comet and fly up close to it.

Close up of asteroid Vesta, taken recently from space with my telescope.

In 1980 I started my own one-person optical design business. This was very unusual back in 1980, and it is still not very common today in the USA. In most other countries it is very rare. It was possible for me because I had lots of business right away in lithography optics design, with companies like Tropel, Ultra-Tech, and Perkin-Elmer.

My early design work back then was done on an Apple computer, using the OSLO design program.


Salvador Dali, Spanish Surrealist artist

One very interesting short project I had, in 1980, was for the artist Salvador Dali.

Salvador Dali had managed to paint a stereo pair of paintings, which is an amazingly difficult thing to do. He wanted a new type of stereo viewer to go with his unusual painting pair. The paintings would be on a wall and then a person would look at them with a stereo viewer that could be adjusted for the viewer’s distance from the paintings.

Deviating prism wedges can make a stereo viewer, but they have a lot of dispersive color and mapping distortion and are not adjustable. I realized that a different ray path through a prism can have no color, no distortion, and be adjustable.

The final viewer was just two 45-90-45 degree prisms with a flexible hinge that joined them along one prism edge. They could be folded up, when not being used, into a larger size triangle. I did not have to go anywhere near a computer to do this design project!

Even a very simple optical element, like a simple prism, can show some surprising features. Suppose you place two photos against a right-angle prism. Viewing angle #1 sees the photo on the bottom of the prism. Viewing angle #2 has total internal reflection off the bottom side and sees the other photo. Both images seem to be in the same place – on or below the bottom side of the prism.
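The total internal reflection in view #2 is easy to check with Snell’s law. Assuming an ordinary crown-glass index of about 1.5 (the prism glass is not specified in the slides), the critical angle is roughly 42 degrees, so the 45-degree internal incidence at the bottom face of a right-angle prism is indeed totally reflected:

import math

n_glass = 1.5   # assumed index, typical crown glass
n_air = 1.0

critical_angle_deg = math.degrees(math.asin(n_air / n_glass))
print(f"critical angle: about {critical_angle_deg:.1f} degrees")  # ~41.8

incidence_deg = 45.0  # internal angle of incidence at the prism's bottom face
print("total internal reflection:", incidence_deg > critical_angle_deg)  # True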

In this different arrangement there are 4 different views. View # 1 is a photo that you see directly. View #2 is a photo on the back side of that first photo and you see it by two internal reflections inside the prism. View #3 sees a photo on the bottom, after one internal reflection. View #4 is a photo that is seen through the prism with no reflections.

Since starting my company in 1980, my optical design work has included camera lenses, medical optics, telescopes, microscopes, and many other systems. Since 1996 almost all of my work has been lithographic designs for Zeiss, in Germany, and wafer inspection designs for KLA-Tencor, in California.

A typical lithographic 4X stepper lens design, from 2004. It is .80 NA, 1000 mm long, and has 27 lenses and 3 aspherics. The 27 mm field diameter on the fast end has distortion of about 1.0 nanometer, telecentricity of about 2 milliradians, and better than .005 waves r.m.s. over the field at .248u. More modern designs have more aspherics and fewer lenses.
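For scale, the usual lithography resolution estimate CD = k1 * wavelength / NA puts a .80 NA design at .248u in the 150 nm range, if one assumes a typical k1 of about 0.5 (the k1 value is my assumption, not a number from the slides):

def resolution_nm(k1, wavelength_nm, numerical_aperture):
    """Standard lithography resolution estimate CD = k1 * lambda / NA."""
    return k1 * wavelength_nm / numerical_aperture

# .80 NA at 248 nm, with an assumed k1 of 0.5
print(f"about {resolution_nm(0.5, 248.0, 0.80):.0f} nm features")   # ~155 nm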

These lithographic stepper lenses are made by Zeiss and then put into ASML chip-making machines.

These state-of-the-art stepper lenses cost about $20 million each, and many hundreds have been made by Zeiss and sent to ASML. In 2006 I invented a new type of design that combines mirrors and lenses, and it is now the leading-edge Zeiss product, making today’s state-of-the-art computer chips.

I have several patents on this new kind of lithographic system that combines lenses and mirrors. Many of the lenses are aspheric, to reduce the number of surfaces and the glass volume. Some of these designs have 4 mirrors and some have 2 mirrors. One important characteristic of these designs is that there are two images inside the design, while conventional stepper lenses have no images inside the design. These are immersion designs, with a thin layer of water between the last lens surface and the silicon wafer that is being exposed. The design being made today by Zeiss is 1.35 NA and works with .193u laser light. They will not say, and I won’t either, whether it looks like this design here or like one of my other patents.

Diagram: two aspheric mirrors, with the mask at one end of the system and the wafer at the other.

With my latest version of this lens/mirror design and double-patterning exposures it would be possible to write a 300 x 300 spot image onto an area the size of a single red blood cell – more than enough to etch a good photo of yourself, or to write an office memo, onto that surface.

Red blood cells, 8u across
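A quick sanity check on that claim, using the 8u cell size from the caption above (the k1 value below is an assumption, everything else is from the text): 300 spots across 8u is roughly a 27 nm pitch, which is the scale that 1.35 NA immersion exposure at .193u with double patterning is aimed at.

cell_diameter_nm = 8000.0   # red blood cell, about 8 microns across
spots_across = 300          # 300 x 300 spot image, from the text

pitch_nm = cell_diameter_nm / spots_across
print(f"spot pitch: about {pitch_nm:.0f} nm")   # ~27 nm

# For comparison, a single-exposure half-pitch estimate k1 * lambda / NA
k1, wavelength_nm, na = 0.30, 193.0, 1.35   # k1 is an assumed value
half_pitch_nm = k1 * wavelength_nm / na
print(f"single-exposure half-pitch: about {half_pitch_nm:.0f} nm")  # ~43 nm
# Double patterning roughly halves the printable pitch, which brings
# feature scales of a few tens of nanometers within reach.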

For some years I have been working for Zeiss on EUV (X-ray) lithography, which will be the next generation of lithography systems. These systems use only mirrors.

The aspheric mirrors made for these high-performance optical systems are aligned to a precision of a few millionths of a millimeter (i.e. nanometers). Their surface figure quality (admissible deviation from the exact mathematically required surface) and the surface roughness are approximately three or four times the diameter of a hydrogen atom. (!!!!!!!)


All-silica broadband design

For KLA-Tencor I have developed new designs for wafer inspection that cover an enormous spectral region with only a single glass type.


Prototype, made by Olympus, .90 NA, wavelength = .266u - .800u

Long working distance design for deep UV. Color correction with all-silica elements.

Some inspection situations require a long working distance.

Any questions?
