Lightographer

From Rays to Cones — A Short History

This is a quick walk from the classical, ray-based picture of optics to the cone-based, zero-phase view we use in Lightographer. It isn’t a full textbook; it’s a map that shows where our perspective fits, what it keeps, and what it replaces.

Kicker: For centuries, optics favored lines that bend (rays) because they’re easy to draw and measure. We keep the geometry, but we center what the rays hid: every micro-point is a broadband cone, and a truthful lens preserves equal optical paths so the cone lands intact — zero-phase. That’s what makes images feel crisp and space feel real.

1) Before waves: geometry and sight

Ancient & medieval foundations

Early optics was largely geometric and philosophical. Euclid described lines of sight; Ptolemy and Galen mixed vision with anatomy. The breakthrough came with Ibn al-Haytham (Alhazen) in the 11th century: light travels from the object into the eye; vision is not rays shooting outward.

Renaissance to early modern

Lenses become practical tools: spectacles, then telescopes and microscopes. Kepler clarifies image formation on the retina. Optics becomes a craft of paraxial geometry — small angles, straight lines that bend at surfaces. Rays rule, because they’re calculable.

2) Waves arrive — and interference

Huygens (17th c.) proposes wavefronts; Fresnel (19th c.) brings interference and diffraction into focus. Young’s double-slit reveals stable spatial patterns. Meanwhile, Maxwell unifies electricity and magnetism: light is electromagnetic waves. Theory blossoms — but practice often sticks with rays (easier for lens shops and draftsmen).

3) The age of glass: Abbe to Rudolph

Industrial optics matures. Ernst Abbe ties resolution to wavelength and aperture (the “diffraction limit”), formulates the sine condition, and, together with Carl Zeiss, puts optical instrument-making on a scientific footing. Designers swap and stack elements to tame aberrations (spherical, coma, astigmatism, field curvature, chromatic aberration).
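Abbe’s limit is a one-line formula: the smallest resolvable feature is the wavelength divided by twice the numerical aperture. A minimal sketch (the numbers below are illustrative, not taken from the text):

```python
def abbe_limit(wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest resolvable feature size (nm) per Abbe's diffraction
    limit: d = wavelength / (2 * NA), where NA = n * sin(theta)."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light (550 nm) through a high-NA oil-immersion objective
# (NA ~= 1.4) resolves features of roughly 196 nm.
print(abbe_limit(550, 1.4))
```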

The late 19th/early 20th century gives us the great archetypes: Cooke Triplet (simple, elegant correction) and the Double-Gauss family (Paul Rudolph and successors), the backbone for fast normal lenses. How were they made? A mix of mathematics, glass catalogs, test benches — but also a lot of empirical iteration. In effect: visual calculators in the workshop, long before computers.

4) Computation takes over (but keeps the rays)

From the mid-20th century on, computers accelerate design. Millions of rays trace through glass in seconds. MTF curves, spot diagrams, and merit functions become routine. Coatings improve contrast. Yet the core language is still rays, and phase/coherence is mostly pushed into “diffraction blur” and “tolerances.”

5) What was missing — and what we propose

The hidden constant: every point broadcasts

White light is not one thing; it’s a massive rainbow of concurrent frequencies. Each micro-point absorbs some, reflects the rest — its spectral fingerprint. That fingerprint leaves the point as an expanding sphere; the slice headed toward the lens arrives as a cone. The cone is given by nature, not created by the lens.

Zero-phase = believable space

A picture feels “crisp and real” when the frequencies in each cone arrive in step. That is a statement about equal optical paths across the bands we capture. We call that condition zero-phase focusing: the cone lands intact, colour integrity holds, and spatial relationships feel right. If the paths spread, colours smear and depth collapses — even when charts still say “sharp.”
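The equal-optical-path condition can be checked numerically. Below is a minimal paraxial sketch of an ideal thin lens (all distances are illustrative, in millimetres): the lens contributes a path term of -h²/(2f), which cancels the longer geometric route through the edge of the aperture, so every path from object point to image point measures almost the same.

```python
import math

def total_optical_path(h, u, v, f):
    """Optical path from an on-axis object point, through the lens at
    height h, to the on-axis image point, for an ideal thin lens.
    The lens adds -h**2 / (2*f): thicker glass near the axis
    compensates the shorter geometric route there (paraxial model)."""
    geometric = math.sqrt(u**2 + h**2) + math.sqrt(v**2 + h**2)
    lens = -h**2 / (2.0 * f)
    return geometric + lens

u, f = 1000.0, 50.0            # object distance, focal length (mm)
v = 1.0 / (1.0/f - 1.0/u)      # thin-lens image distance

# Across the aperture the total path is (nearly) constant:
paths = [total_optical_path(h, u, v, f) for h in (0.0, 5.0, 10.0)]
spread = max(paths) - min(paths)
print(spread)  # microns of residual, against u + v of roughly 1052.6 mm
```

The tiny residual spread is the higher-order term the paraxial model ignores; in this picture, uncorrected aberrations are exactly what inflate that spread and break the cone’s coherence.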

Focus selects, aperture gates

Focus is not destruction but selection: moving the lens chooses which distance’s cones collapse on the sensor. Aperture is the volume gate: wide open shows more separation between depth layers; stopping down compresses them, but the 3D stack still exists. A good lens therefore holds a 3D volume of spatial information. We simply step through it.

Translation for everyday photographers:
“Every point is lit by the whole rainbow. A good lens keeps those colours aligned so the point lands as itself. Turning focus just chooses which distance becomes perfectly itself; the rest stays soft but honest.”
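The selection-and-gating idea above can be sketched with the classical thin-lens equation, 1/f = 1/u + 1/v. A toy model (illustrative numbers, millimetres throughout): moving the sensor picks the in-focus distance, and the blur circle of every other layer scales linearly with the aperture diameter, so stopping down shrinks the separation without deleting it.

```python
def in_focus_distance(f, sensor_dist):
    """Object distance whose cones collapse on the sensor,
    from the thin-lens equation 1/f = 1/u + 1/v."""
    return 1.0 / (1.0/f - 1.0/sensor_dist)

def blur_diameter(f, aperture_d, sensor_dist, obj_dist):
    """Blur-circle diameter on the sensor for a point at obj_dist
    when the sensor sits at sensor_dist."""
    v = 1.0 / (1.0/f - 1.0/obj_dist)   # where that layer's cones collapse
    return aperture_d * abs(sensor_dist - v) / v

f = 50.0                                # focal length
v = 52.63                               # lens-to-sensor distance
print(in_focus_distance(f, v))          # roughly 1000 mm: the selected layer
# Same background point at two apertures: f/2 vs f/8.
# Stopping down shrinks the blur fourfold; the layer still exists.
print(blur_diameter(f, 50/2, v, 3000.0))
print(blur_diameter(f, 50/8, v, 3000.0))
```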

6) Why light beats scans (the radar aside)

Radar/LiDAR is active and narrowband (a few wavelengths, scanned in time). White light is passive and broadband (thousands of wavelengths, all at once). Each micro-point returns a rich fingerprint in a single cone. Our cameras compress it to RGB, but the origin is a parallel, massive-frequency measurement.
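The RGB compression can be illustrated with a toy integration. The response curves below are simple Gaussians chosen for readability, not the real CIE colour-matching functions or any actual sensor data:

```python
import math

def gaussian(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2))

def to_rgb(spectrum):
    """Collapse a per-wavelength fingerprint {nm: reflectance} to three
    numbers by weighting each wavelength against three broad,
    hypothetical channel-response curves (NOT real CIE data)."""
    channels = {"R": 600, "G": 550, "B": 450}   # assumed peak wavelengths
    return {c: sum(val * gaussian(nm, mu, 50.0)
                   for nm, val in spectrum.items())
            for c, mu in channels.items()}

# A micro-point that reflects strongly around 550 nm: a "green" fingerprint
# of 31 concurrent samples, collapsed to just three channel sums.
fingerprint = {nm: gaussian(nm, 550, 30.0) for nm in range(400, 701, 10)}
rgb = to_rgb(fingerprint)
print(rgb)  # G dominates; R and B only catch the tails
```

The point of the sketch is the information loss: thirty-one spectral samples become three weighted sums, which is exactly the compression the text describes.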

7) Then vs Now — the difference in one view

Then (ray-centric)

  • Rays are lines that bend at surfaces.
  • Focus = where rays cross a plane.
  • Aberrations = geometry errors to be minimized.
  • Diffraction = limit blur; phase mostly ignored.
  • Design verified by charts (MTF, spots), not by spatial feeling.

Now (cone + zero-phase)

  • Every micro-point sends a broadband cone.
  • Focus = the distance whose cones arrive in step (equal optical path).
  • Aberrations = anything that breaks cone coherence (colour & depth integrity).
  • Diffraction acknowledged, but phase/coherence made central.
  • Design judged also by crispness (colour integrity + believable space).

Mantra: Every point broadcasts; the lens preserves; focus selects.

8) Where this leaves the classics

None of the history is “wrong.” Rays, formulas, and MTFs are still useful tools. Our addition is a shift of center-of-gravity: make the cone and its phase integrity the first-class citizen. When you do, certain lenses (often the humble ones) suddenly feel alive — not because they win every chart, but because they keep cones whole.

That is the Lightographer difference: we walk inside the 3D volume a lens holds, instead of flattening it to lines. Once you’ve seen it, it’s hard to unsee.
