A Brief History of Lens Design

Since about 1960, the way lenses are designed has changed profoundly as a result of the introduction of electronic digital computers and numerical optimizing methods. Nevertheless, many of the older techniques remain valid. The lens designer still encounters terminology and methods that were developed even in previous centuries. Furthermore, the new methods often have a strong classical heritage. Thus, it is appropriate to examine, at least briefly, a history of how the techniques of lens design have evolved.

A.2.1 Two Approaches to Optical Design

The equations describing the aberrations of a lens are very nonlinear functions of the lens constructional parameters (surface curvatures, thicknesses, glass indices and dispersions, etc.). Boundary conditions and other constraints further complicate the situation. Thus, there are only a few optical systems whose configurations can be derived mathematically in an exact closed-form solution, and these are all very simple. Examples are the classical reflecting telescopes.
This predicament has produced two separate and quite different approaches to the practical task of designing lenses. These are the analytical approach and the numerical approach. Historically the analytical dominated at first, but the numerical now prevails.
Neither approach is sufficient unto itself. A lens designed analytically using aberration theory requires a numerical ray trace to evaluate its actual performance. In addition, an analytically designed lens can often benefit significantly from a final numerical optimization. Conversely, a lens designed numerically cannot be properly understood and evaluated without the insight provided by aberration theory.

A.2.2 Analytical Design Methods

The first lenses made in quantity were spectacle lenses (after about 1285). Later (after 1608), singlet lenses began to be made in quantity for telescopes and microscopes. Throughout the seventeenth and eighteenth centuries, optical instruments were designed primarily by trial and error. As might be expected, optical flaws or aberrations remained. Note that aberrations are fundamental design shortcomings, not fabrication errors. Eventually it became clear that understanding and correcting aberrations required greater physical understanding and a more rigorous analytical approach.
At first, progress was slow and the methods largely empirical. Later, mathematical methods were introduced, and these were much more effective. The most outstanding early work on optical theory was done by Newton in 1666. Among the somewhat later pioneers were Fraunhofer, Wollaston, Coddington, Hamilton, and Gauss. A major advance was made by Petzval in 1840. Petzval was a mathematician, and he was the first to apply mathematics to the general problem of designing a lens with a sizable speed and field for a camera. The techniques he devised were new and fundamental. His treatment of field curvature based on the Petzval sum is still used today. Just as unprecedented, he was able to completely design his very successful Petzval Portrait lens on paper before it was made.
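For reference, the Petzval sum can be written in its common modern thin-lens-in-air form (a standard result quoted here for convenience; the notation is not taken from this text) as

$P = \sum_j \dfrac{1}{n_j f_j}$,

where $f_j$ and $n_j$ are the focal length and refractive index of thin element $j$. In the absence of astigmatism the image is sharp on a surface whose curvature is proportional to $P$, so flattening the field amounts to driving the Petzval sum toward zero while preserving the total power.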
In 1856, Seidel published the first complete mathematical treatment of geometrical imagery, or what we now call aberration theory. The five primary or third-order monochromatic aberrations are thus known today as the Seidel aberrations. They are:
1. Spherical aberration
2. Coma
3. Astigmatism
4. Field curvature
5. Distortion.
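In the notation most commonly used today (quoted here for reference; the exact coefficient conventions vary slightly between texts), these five aberrations appear as the Seidel sums $S_I$ through $S_V$ in the fourth-order wavefront aberration

$W(H,\rho,\theta) = \tfrac{1}{8} S_I \rho^4 + \tfrac{1}{2} S_{II} H \rho^3 \cos\theta + \tfrac{1}{2} S_{III} H^2 \rho^2 \cos^2\theta + \tfrac{1}{4}(S_{III} + S_{IV}) H^2 \rho^2 + \tfrac{1}{2} S_V H^3 \rho \cos\theta$,

where $H$ is the normalized field coordinate and $(\rho,\theta)$ are polar coordinates in the pupil; the five terms correspond, in order, to the five aberrations just listed.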
There are also two primary chromatic aberrations. These are wavelength-dependent variations of first-order properties, and they are often included with the Seidel aberrations. They are:
6. Longitudinal chromatic aberration
7. Lateral chromatic aberration.
Petzval, Seidel, and many others in subsequent years have now put aberration theory and analytical lens design on a firm theoretical basis.¹
¹ For the reader interested in analytical lens design methods, see A. E. Conrady, Applied Optics and Optical Design, Vols. 1 and 2, and Rudolf Kingslake, Lens Design Fundamentals.
Until about 1960, the only way to design lenses was by an analytical approach based on aberration theory. Unfortunately, by its nature, aberration theory gives only a series of progressively better approximations to the real world. Thus, the optical designs derived from aberration theory are themselves approximate and usually must be modified to account for the limitations in the process.
Today, most lenses are designed, not with analytical methods, but with computer-aided numerical methods. Nevertheless, the analytical methods remain extremely valuable for deriving or identifying potentially useful optical configurations that can serve as starting points for further numerical optimization.
Even more important, aberration theory can explain what is happening. It is only through aberration theory that a lens designer can understand the underlying operation of lenses.

A.2.3 Numerical Evaluation Methods

Part of the job of designing a lens is evaluating its performance as the design evolves. And of course, the performance of the final design must be thoroughly characterized. Aberration theory is useful in giving approximate indications, but a rigorous image evaluation requires a different, exact approach.
Note that unless or until a prototype model is made, the design exists only on paper. Thus, to evaluate the paper design, a mathematical procedure is necessary. The most exact mathematical evaluation procedure is numerical and assumes only trigonometry and Snell's law.
Snell's law for refraction was discovered experimentally by Snell in 1621 and states that for a ray incident on and refracted by lens surface i (even subscripts for surfaces, odd subscripts for spaces),

$n_{i-1} \sin \varphi_{i-1} = n_{i+1} \sin \varphi_{i+1}$    (A.2.1)

where $n_{i-1}$ and $n_{i+1}$ are the refractive indices of the bounding media, and $\varphi_{i-1}$ and $\varphi_{i+1}$ are the angles of incidence and refraction in the ray plane. The law of reflection for mirrors was known by the ancient Greeks, and is a special case of Snell's law if $n_{i+1}$ equals $-n_{i-1}$.
Evaluating a lens numerically involves tracing many real (or trigonometric) geometrical rays through the system from the object to the image. For each ray, Snell's law is applied as the ray encounters each lens surface in turn. The calculations are repeated again and again at surface after surface for ray after ray. The locations of the piercing points of these rays on the image surface are then used to calculate various measures of image quality.
At first, logarithms were used to do the calculations. After the introduction of mechanical desk calculators around 1930, direct trig tables were used. With a desk calculator, it took an experienced person about five minutes to trace one meridional ray through one spherical surface (assuming no errors). The time to trace a skew ray, which lay out of the meridional plane, was more than twice as long, and thus tracing skew rays was rarely done in those days. Often a prototype model was indeed made, so great was the computational burden and tedium (you can view a prototype as an analog computer).
With the introduction and development of electronic digital computers in the late 1940s, this manual approach to ray tracing began to change. One of the first jobs given to these new machines was trigonometric ray tracing. But the early computers were hard to get time on, hard to program, expensive, and not all that fast. Even as late as the early 1960s, a company doing lens design would have to make the economic decision whether it was cost-effective to buy time on one of the big computers, or better to hire someone to trace rays by hand with a desk calculator and seven- or eight-place trig tables.
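As a concrete illustration of the per-surface calculation described above (the kind once carried out ray by ray with trig tables), the following short Python sketch traces one ray to a single spherical refracting surface and applies Snell's law in vector form. The function, sign conventions, and numbers are illustrative assumptions for this appendix, not taken from any particular lens design program.

    import numpy as np

    def refract_at_sphere(p, d, z_vertex, R, n1, n2):
        """Trace a ray (point p, unit direction d) to a spherical surface whose
        vertex lies on the z-axis at z_vertex with radius of curvature R
        (center of curvature at z_vertex + R), then refract it from index n1
        into index n2 using the vector form of Snell's law."""
        center = np.array([0.0, 0.0, z_vertex + R])
        oc = p - center
        b = np.dot(d, oc)
        disc = b * b - (np.dot(oc, oc) - R * R)
        if disc < 0.0:
            raise ValueError("ray misses the surface")
        t = -b - np.sign(R) * np.sqrt(disc)      # intersection on the vertex side
        hit = p + t * d
        normal = (hit - center) / R              # unit normal
        if np.dot(normal, d) > 0.0:              # orient it against the incoming ray
            normal = -normal
        cos_i = -np.dot(d, normal)
        mu = n1 / n2
        cos_t_sq = 1.0 - mu * mu * (1.0 - cos_i * cos_i)
        if cos_t_sq < 0.0:
            raise ValueError("total internal reflection")
        d_out = mu * d + (mu * cos_i - np.sqrt(cos_t_sq)) * normal
        return hit, d_out / np.linalg.norm(d_out)

    # Example: a ray parallel to the axis, 5 mm above it, meeting a surface at
    # z = 10 mm with R = +50 mm, passing from air (n = 1.0) into glass (n = 1.5).
    point, direction = refract_at_sphere(np.array([0.0, 5.0, 0.0]),
                                         np.array([0.0, 0.0, 1.0]),
                                         10.0, 50.0, 1.0, 1.5)

Repeating this step surface after surface, and ray after ray, is essentially all that a trigonometric ray trace does, apart from the bookkeeping for thicknesses, apertures, and wavelengths.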
That did not last much longer. The growth of the capabilities of computers has been explosive since about 1960, as has their availability. Soon, computers completely eliminated manual ray tracing. By 1998, a fast Pentium Pro personal computer with the ZEMAX or similar program could trace about 600,000 skew ray surfaces per second. And by the time you read this, 600,000 will be ancient history.
The author had his first course in lens design in 1963. The professor, Walter Wallin, who also ran his own lens design company, related that he was once asked in all seriousness, "But sir, did you specialize in this from choice?" As with dentistry, absolutely no one today gets nostalgic for the "good old days" of lens design.

A.2.4 Optical Design Using Computer-Aided Numerical Optimization

The advent of electronic digital computers did much more than allow rays to be easily traced. Since the mid-1950s, a few pioneers had been working on new numerical algorithms to do what was then called by the misnomer automatic lens design. These methods, which we now call computer-aided lens design, became widely known in 1963.² Commercial computer programs using these methods, such as ACCOS (Automatic Correction of Centered Optical Systems), became available soon after. Thus, starting in the mid-1960s, lens designers could use computers, not just to evaluate a lens, but to change lens parameters to improve optical performance.
² Donald P. Feder, "Automatic Optical Design," Applied Optics, Vol. 2, pp. 1209-1226, December 1963.
This was truly a revolution. Lens designers used to struggle with a design until image quality was "good enough." Now, when given a starting lens configuration, the computer can, by an iterative process, optimize the lens. After optimization, image quality is the best that the lens can produce under the constraints of basic configuration, required focal length, f/number, field of view, wavelengths, and so forth. Furthermore, the preferred criteria for good image quality are based on trigonometrically traced real rays. Thus, computer optimization is as exact as ray tracing allows.
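To make the idea of iterative numerical optimization concrete, here is a minimal Python sketch of a damped-least-squares step, the approach behind many (though not all) lens optimization programs; the text above does not name a specific algorithm, and the two-parameter defect function below is only a stand-in for real ray-trace-based merit items.

    import numpy as np

    def defects(params):
        """Hypothetical merit-function vector. In a real program each entry would
        be a weighted defect computed from traced rays (spot radii, distortion,
        a focal-length constraint, and so on)."""
        x, y = params
        return np.array([x**2 + y - 1.0, x + 0.5 * y**2 - 0.7, 0.1 * (x - y)])

    def damped_least_squares(params, damping=1e-3, iterations=20):
        """Repeatedly solve the damped normal equations for a parameter change
        that reduces the sum of squared defects."""
        params = np.asarray(params, dtype=float)
        eps = 1e-6
        for _ in range(iterations):
            f = defects(params)
            # Finite-difference Jacobian of the defect vector.
            J = np.empty((f.size, params.size))
            for j in range(params.size):
                dp = np.zeros_like(params)
                dp[j] = eps
                J[:, j] = (defects(params + dp) - f) / eps
            # Damped normal equations: (J^T J + damping * I) dx = -J^T f
            A = J.T @ J + damping * np.eye(params.size)
            dx = np.linalg.solve(A, -J.T @ f)
            params = params + dx
        return params

    print(damped_least_squares([0.5, 0.5]))

The damping term keeps each step well behaved even though the defects, like real lens aberrations, are strongly nonlinear functions of the parameters.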
The first benefit from computer optimization was that many older designs were recomputed to achieve major performance gains. Complex designs benefited most; the simpler ones were already fairly well optimized. Older designs were also simplified to ease production, use fewer elements, use fewer types of glass, be smaller and lighter, and cost less.
Even more interesting, the new optimizing techniques allow the development of new design forms. These may be extensions of older forms, but they also may be wholly new forms discovered by the computer in its quest for better solutions. Fast wide-angle lenses and sharp wide-range zoom lenses are only two examples of current lens types that were virtually unknown in 1960.
The numerical method of designing lenses does have a limitation, however. Although the software writers are very skillful and their optical programs have amazing capabilities, the computer's basic design approach is still only a sophisticated search algorithm. In particular, the computer has no true optical understanding or intelligence. This intelligence must be supplied by the designer through his selection of the starting optical configuration, through his control of the computer program, and especially through his understanding of the underlying optical theory.
But it is exactly this human intelligence that today's fast computers and interactive software exploit. It is now a relatively easy matter for the lens designer to try out different optical ideas inside the computer. In only a short time, the designer can determine which of his ideas are the better ones that should be pursued.
Thus, the computer does not make the human designer obsolete. Rather, the computer plus optimizing software change the way the work of the lens designer is done and the quality of the final results. The computer removes the drudgery and becomes a powerful new tool to be used by the designer for new lens design creativity.