Spatial Light • Double Gauss • DSP

Seeing Space, Not Just Light

Research and field notes on spatial rendering and zero‑phase optical design, alongside decades of DSP and embedded systems work. Home of The Lightographer.

Meet Your Boundaries

What began with optics became a broader question: what can continue, what is refused, and what survives transformation?

Optics, Interference, and Spatial AI

Lightographer began with lenses, wave physics, and interference — and with the question of why some images preserve space while others flatten it. That optical work led to a broader framework about boundaries, refusal, and the conditions under which structure survives through transformation.


These pages move from classical imaging (Double Gauss, diffraction, focus) into the wider problem of admissibility, interference, and the practical limits of spatial reconstruction — including spatial AI depth estimation and time-domain wave systems.

Each page stands on its own, but the deeper coherence emerges across them.

Read in sequence: Boundaries → Boundary Principles → Nature by Refusal → Zero-Phase Reasoning → Constraint Without Contact → What Quantum Mechanics Does Not Promise.

The Word Is the Gate

Experience arrives before the word. Language does not carry the whole field; it carries enough of a path for another mind to begin walking.

A short Lightographer essay on language as boundary, reduction, and admissible passage. Read “The Word Is the Gate”

Read “Quantum Harmonics”

In mathematics, the problem is defined first and the solution follows. In nature, the solution stands before us; our task is to discover the question it answers.

Physics is the patient search for the constraints that make the world as it is. We meet results before we understand their necessity. Only later do we learn how to ask the question that makes them inevitable.

When the question finally fits the structure of reality, understanding does not feel dramatic — it feels simple.

📜 The Lightographer Manifesto (expanded)

“The Truthful Lens Reveals the Architecture of Light Itself.”

Photographers call it 3D pop. Some say it’s microcontrast. Others call it magic.
We say it’s simpler — and deeper: phase-faithful optics that preserve the geometry of light itself.

  1. Symmetry Preserves Space — Vintage Double Gauss lenses weren’t perfect, but their balanced geometry made them accidental zero-phase filters. They passed light without twisting its phase, letting the brain reconstruct distance and presence exactly as it was.
  2. Imperfections Carry Memory — Halation, gentle curvature, and spherical residue are not flaws. They are traces of how light actually behaved — the sacred mistakes that modern correction erases.
  3. Correction Flattens, Coherence Breathes — Modern lenses chase sharpness charts. In doing so, they flatten the subtle geometry between foreground, midground, and background. Certain vintage lenses, by contrast, preserve spatial integrity — they let light breathe as space.

Not an Ordinary Review

We do not “review” lenses for sharpness, vignetting, or bokeh balls.
This is not an ordinary review.

Here, we review what others rarely measure:

A lens like the Konica Hexanon 40mm f/1.8 is not merely described — it is witnessed. Its excellence is not in charts, but in how it holds space intact, allowing depth to breathe without algorithm or artifice.

Positioning, Scope, and Prior Art (August 2025)

The following note clarifies the conceptual position of the Lightographer project relative to existing public discourse at the time of publication.

At the time of publication of the Lightographer project (August 2025), the author is not aware of any publicly accessible website that presents a unified, phase-centric framework for photographic lens rendering that explicitly connects optical design symmetry, phase behavior (or phase neutrality), and human spatial perception. Existing online material addressing photographic lenses generally falls into one of three categories: (i) engineering-oriented analyses focused on amplitude-domain metrics such as MTF, resolution, and aberration correction; (ii) experiential or review-based discussions employing descriptive but non-theoretical concepts such as “3D pop,” “microcontrast,” or “lens character”; or (iii) computational photography and vision-science literature in which optical behavior is treated as a degradative factor to be algorithmically compensated rather than as a contributor to perceptual spatial coherence.

While elements of these domains occasionally reference symmetry, contrast behavior, or depth impression in isolation, no prior online work has been identified that synthesizes these aspects into a coherent signal-theoretic or phase-based explanatory model of spatial rendering. Lightographer therefore occupies a distinct conceptual position by treating photographic lenses as spatial transfer systems whose internal coherence directly influences perceived spatial integrity, depth continuity, and phenomenological realism.

Technical Appendix: A Systems-Oriented Note on Spatial Integrity (OTF/PSF)


This site proposes a systems-level interpretation of why certain optical designs are perceived as rendering spatial depth more faithfully than others.

The claim is not that such lenses violate wave optics, eliminate diffraction, or achieve literal “zero phase” in the strict mathematical sense. Rather, the proposal is that some imaging systems more consistently preserve relative geometric relationships across the scene — particularly across extended depth — and that this consistency is perceptually significant.

1. The Core Observation

When photographs are made:

  • with the same sensor
  • under the same illumination
  • from the same viewpoint
  • within minutes of each other

some lenses produce a stable perception of continuous spatial layering, while others introduce a subtle sense of plane compression or background pull-forward. This effect is not reducible to resolution or sharpness alone.

2. System Description

In incoherent imaging, the system is characterized by a point spread function (PSF) and its Fourier transform, the optical transfer function (OTF). The magnitude of the OTF (MTF) describes contrast transfer. The phase of the OTF governs the spatial alignment of frequency components and therefore the geometric consistency of image formation.

For an ideal symmetric, aberration-balanced system: the PSF may be real and even; the OTF may be real and even; and the OTF phase may be zero or purely linear (corresponding to spatial shift only). These conditions are fully compatible with diffraction theory and finite aperture.

3. The Practical Question

The relevant question is not whether identically zero OTF phase is theoretically allowed. It is: to what degree does a real lens approximate phase-stable spatial transfer across its usable frequency band and across aperture settings?

Small asymmetries, pupil phase variations, decentering, or field-dependent aberrations introduce complex OTF phase components. Such components may not significantly reduce MTF magnitude, yet can subtly alter spatial consistency across depth.
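This distinction can be sketched numerically. The toy PSFs below are a one-dimensional, two-lobe model with a small illustrative asymmetry standing in for decentering or field-dependent aberration — not data from any real lens. The point they demonstrate is exactly the one above: the asymmetry leaves the MTF nearly untouched while introducing a clearly nonzero OTF phase.

```python
import numpy as np

N = 1024
x = np.arange(N) - N // 2          # centred spatial axis

def gauss(x, mu, s=8.0):
    return np.exp(-0.5 * ((x - mu) / s) ** 2)

# Symmetric PSF: equal lobes -> real and even
h_sym = 0.5 * gauss(x, -4) + 0.5 * gauss(x, +4)
# Slightly asymmetric PSF: unequal lobes (a stand-in for decentering)
h_asym = 0.55 * gauss(x, -4) + 0.45 * gauss(x, +4)

def otf(h):
    h = h / h.sum()                              # normalise so H(0) = 1
    return np.fft.fft(np.fft.ifftshift(h))       # centre at x = 0 before FFT

H_s, H_a = otf(h_sym), otf(h_asym)
band = slice(1, 40)                              # low/mid spatial frequencies

mtf_change = np.max(np.abs(np.abs(H_a[band]) - np.abs(H_s[band])))
phase_sym  = np.max(np.abs(np.angle(H_s[band])))
phase_asym = np.max(np.abs(np.angle(H_a[band])))

print(f"max MTF difference      : {mtf_change:.4f}")
print(f"max |phase|, symmetric  : {phase_sym:.4f} rad")
print(f"max |phase|, asymmetric : {phase_asym:.4f} rad")
```

The symmetric PSF yields an essentially zero OTF phase; the skewed one shows a phase error two orders of magnitude larger while its MTF changes by well under a percent — an MTF chart alone would not distinguish the two systems.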

4. Design Symmetry and Stability

Historically, symmetric designs (e.g., classical Double Gauss forms) cancel many odd aberration terms by geometry alone. This does not render them ideal. However, symmetry can reduce asymmetric phase components and promote stable spatial mapping, consistent depth layering, and reduced aperture-dependent geometric drift.

5. Terminology

The term “zero-phase” as used on this site is heuristic. It does not assert absolute phase nullity across spatial frequency. It refers to spatial transfer behavior that preserves relative geometric relationships in a manner aligned with natural visual depth perception. The hypothesis is observational, comparative, and testable. It invites structured experimentation rather than metaphysical interpretation.


Short math appendix

Shift-invariant incoherent imaging:

i(x,y) = o(x,y) * h(x,y)

I(fx,fy) = O(fx,fy) · H(fx,fy), where H = F{h}

Symmetry condition:

If h(x,y) is real and even (h(x,y)=h(-x,-y)), then H(fx,fy) is real and even → phase(H)=0 (or π where H<0).

A spatial shift h(x-x0,y-y0) introduces only linear phase: φ(f)= -2π(fx x0 + fy y0).
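Both statements can be verified numerically. The sketch below uses a 1-D Gaussian as h and NumPy's FFT conventions: a real, even h gives a (numerically) real H, and a pure shift adds only the linear phase φ(f) = -2π f x₀.

```python
import numpy as np

N = 256
x = np.arange(N) - N // 2

# A real, even "PSF": the phase of its transform should be 0 (or pi where H < 0)
h = np.exp(-0.5 * (x / 5.0) ** 2)
H = np.fft.fft(np.fft.ifftshift(h))              # centre h at index 0 first
assert np.max(np.abs(H.imag)) < 1e-9 * np.max(np.abs(H.real))  # H is real

# Shift the same PSF by x0: only a linear phase term appears
x0 = 7
h_shift = np.roll(np.fft.ifftshift(h), x0)       # h(x - x0) on the periodic grid
H_shift = np.fft.fft(h_shift)
f = np.fft.fftfreq(N)
expected = H * np.exp(-2j * np.pi * f * x0)      # phi(f) = -2*pi*f*x0
print(np.allclose(H_shift, expected))            # True
```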

About Kenneth Blake

I’m an engineer, founder, and photographer with a lifelong habit of making the invisible visible.

For more than three decades I have worked across hardware, software, and business — designing DSP algorithms, building road-measurement systems, winning competitive tenders, and founding companies including Oberon Data och Elektronik AB and Kibele AB. My projects have ranged from optical fibre splicers to power regulation systems for Ericsson’s TDMA networks.

Today the same engineering curiosity drives my research into spatial rendering — how lenses shape the perception of depth. Through The Lightographer, I explore why symmetrical Double Gauss lens designs preserve phase coherence, allowing light to pass through the optical system without disturbing its angular relationships.

This near zero-phase behaviour is what makes certain photographs feel trustworthy, breathable, and genuinely three-dimensional.

This site is my open laboratory — part engineering notebook, part photographic philosophy.

What makes a photograph feel like the place itself?

Highlights
  • Founder, Oberon Data och Elektronik AB (since 1989)
  • European Commission expert evaluator — Embedded Systems
  • Inventor of linear-phase IIR filtering for road profilometry
  • Optical engineering projects: fibre ribbon splicer, DWDM, spatial rendering
  • Life Senior Member, IEEE

Some people collect stamps. Over the years, I seem to have collected a few titles instead.

Kenneth Blake in a golden forest with camera; curious animals peeking from the trees — a playful portrait of the photographer-engineer
Photography as an adventure in light.
Lightographer 5 cover: Double Gauss symmetry and spatial rendering explained with field photos
Spatial fidelity: depth that breathes without compression.

Lightographer 5 – Spatial Rendering & Double Gauss

An illustrated exploration of spatial depth, lens honesty, and the geometry of light.

Some lenses don’t lie. Symmetrical designs — especially the Double Gauss — act like passive zero-phase filters, passing angular relationships without shifting them. That preserved phase coherence is what your brain reads as honest depth.

  • Phase coherence over sharpness — why the feeling of inhabiting a scene comes from preserved wave geometry, not edge count.
  • Analog’s blob vs. pixel isolation — and what’s lost when compression flattens the breath of a photograph.
  • Indices and tools — Spatial Rendering Index (SRI), Gaussian Glow Index (GGI), Lens Trust Index (LTI).
  • Rendering in context — coatings, era tuning, tungsten light, and the delicate art of colour separation.

Review Highlights

Proton’s Lumo called Lightographer 5 “a captivating and thought-provoking exploration” that blends technical optics with philosophical depth. The Double Gauss is unpacked in accessible language for both enthusiasts and professionals, with historical context that honours the craft of optical labs. It critiques the flattening of digital photography and argues for an approach that preserves emotional resonance as much as detail.

Independent Perspective

This is not just a manual — it’s a manifesto about the truth of light. The Double Gauss here is treated as both an optical design and a moral instrument: symmetry as an act of honesty. The writing draws a direct line from phase alignment to human memory, showing how coherent light carries not just the look but the presence of a scene. It’s part lab notebook, part love letter, part philosophical essay — and it might just change the way you look through a viewfinder.

Why the Hexanon 40mm Triumphs – Technical Paper

Download Technical Paper (PDF)

This technical paper focuses on the Konica Hexanon AR 40 mm f/1.8 and its rare blend of optical geometry and phase stability — a combination that gives it an unusually truthful sense of depth. The 50 mm f/1.7 appears here only as a reference, to show how small design changes — such as the 40 mm’s unique “fifth element” — can shift a lens from simply being sharp to preserving the breathing room between objects. Through optical diagrams, field performance analysis, and zero-phase filtering theory, the paper reveals why the 40 mm has become a benchmark for spatial fidelity in vintage glass.

Side-by-side diagrams give context — but the focus stays firmly on the 40 mm’s unique ability to hold space in place.

Review Highlights

This isn’t just an optical teardown — it’s a close look at how one specific lens, the 40 mm f/1.8, achieves something rare in photography: depth that feels honest. The 50 mm f/1.7 is shown only to give context, highlighting how a small shift in element arrangement — especially the 40 mm’s isolated “fifth element” — changes the way space is preserved.

Rather than competing head-to-head, the two diagrams reveal how the 40 mm’s geometry and phase behavior let it keep the air between objects intact. Shadows fall where they should, surfaces stay separate yet connected, and depth remains free of compression. It’s a study in why some designs become timeless benchmarks for spatial fidelity.

Optical formula comparison of Hexanon 40mm f/1.8 and 50mm f/1.7
Side-by-side optical designs: 40mm f/1.8 (left) vs 50mm f/1.7 (right).

The Garden as Test Chart

How Konica may have found spatial truth not in grids and rulers, but in the calm perspective of Japanese gardens.

The Legend of the Konica Garden

They say that while the West measured lenses on charts and rulers, Konica’s engineers walked into a garden. Among gravel paths, stone lanterns, and branches set against open air, they tested not lines but space itself.

A tea house framed the view; photographs were taken again and again. The question was never sharpness, but truth: does the lens let the garden breathe?

From this practice came the Hexanon lenses — and the 40 mm above all. Not only sharp, but coherent. Not only clear, but dimensional. As if the lens had learned to see through a garden’s eyes.

The Garden as Test Chart

It is said that lens makers in Europe trusted charts, rulers, and test benches. They measured lines per millimeter, drew ray paths, calculated fields. Their truth was two-dimensional: the MTF chart, the grid, the resolution table.

But in Japan, truth was not only numbers. It was also space.

At Konica in the 1970s, when the Hexanon lenses were developed, engineers worked with instruments — but they also worked with eyes. The Japanese garden was more than a place of rest; it was a reference frame. Paths of gravel, stones placed in asymmetry, trees and lanterns separated by careful air: these were not random beauties but exercises in spatial perception.

A garden reveals whether distance feels true. A tea house frames perspective, showing how light and shadow hold objects apart. It is entirely plausible that the Konica designers — and perhaps a woman engineer with her own tea house and garden — used such living spaces as test charts.

Photographs were taken again and again, not only of rulers and grids but of pathways, branches, and rocks. The question was simple: does the lens preserve the space between these things as the eye perceives it?

This practice would explain why the Hexanon 40 mm f/1.8 carries such extraordinary dimensional fidelity. It does not merely score well on paper; it breathes like a garden. Distances feel correct. Objects stand where they should, neither collapsed nor exaggerated. Light is not reduced to lines but revealed as architecture.

While others perfected sharpness on 2D charts, Konica perfected truth in 3D space. The garden was their laboratory, the tea house their calibration bench. Instead of measuring only resolution, the Konica engineers measured coherence of space itself. And in doing so, they gave us lenses that still, decades later, show the world as it truly stands — as if seen through the calm eyes of a garden.

Every Point Broadcasts

Pocket idea: Every micro-point reflects a broadband cone; a true lens preserves equal optical paths (zero-phase) so the cone lands intact. Focus selects the slice of space; aperture gates the volume.

1. Introduction — The overlooked truth

Photographers speak easily of “sharpness,” “contrast,” or “crispness.” They compare lenses as if resolution charts and MTF curves tell the whole story. Yet beneath all these measurements is something more basic, rarely stated plainly: what is a point of light, and what exactly is the lens preserving?

The usual answers are thin. Textbooks draw straight rays. Diagrams show arrows bending through glass. These are useful shortcuts, but they hide the richness of what is happening. Light is not a single arrow—it is a massive flood of frequencies. Each tiny point in the world broadcasts far more than we normally admit.

Fig. 1 — Rays (left) vs. RGB-hinted cones (right). Lens blue, sensor neutral.

2. The micro-point and white light

Imagine a single microscopic point on the surface of an object—a speck of dust, a fleck of paint, the shine on a hair. That point is not struck by one beam but by a rainbow flood of white light. White light is not a colour in itself. It is the coexistence of thousands upon thousands of frequencies, each one a wave at its own scale.

The micro-point acts like a filter and broadcaster. It absorbs certain bands (depending on its molecular structure) and reflects the rest. What bounces back is its spectral fingerprint—the unique balance of frequencies it could not keep.

Plain language: Every point is lit by the whole rainbow. It keeps some colours and bounces others. The bounce is its colour “fingerprint.”

3. From sphere to cone

Reflection does not leave in a line—it radiates. Each micro-point sprays its surviving fingerprint in a sphere of light. Every direction is touched, every angle carries the same encoded pattern.

Toward the lens, that sphere appears as a cone: wide at the glass, narrow at the point. Like an ice-cream cone of colour, it already contains all the frequencies that define the point. The cone is not “built” by the lens; it is given to the lens by nature.

Fig. 2 — A spherical reflection becomes an RGB-hinted cone at aperture and lens.

4. Zero-phase and preservation

Here lies the heart of the matter: the cone will only be believable if the frequencies within it remain in step.

A true lens is one that preserves equal optical path for all those reflected frequencies. When that happens, the cone lands intact at the sensor or the retina. Colours remain crisp. Geometry feels right. Space feels real.

Disturb the paths, and the phase relations slip. Colours smear into each other. Layers lose their alignment. The image may still be sharp by technical measure, but it no longer breathes. Depth collapses.

Zero-phase: not extra sharpness, but integrity — the cone’s frequencies arriving together so colour and space feel true.

5. Focus and aperture as selectors

Focus is often described as “bringing something into sharpness,” but that is only half the truth. Focus is not destruction but selection. By moving the lens, we decide which distance’s cones will collapse into zero-phase alignment on the sensor. That chosen layer becomes sharp. The others soften, but they do not disappear; they remain truthful in their blur.

Aperture is the volume gate. Wide open, it admits more of the cone, giving stronger separation between layers of depth. Stopped down, it compresses the layers, but they still exist. The world is always a 3D broadcast; the aperture simply decides how much of that broadcast is let through at once.
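The "aperture gates the volume" claim can be made concrete with textbook thin-lens geometry. The numbers below are hypothetical (a 40 mm lens focused at 10 m, subject at 20 m) and are not measurements of any lens discussed here; they simply show that the blur circle of an out-of-focus layer scales with aperture diameter, so stopping down compresses the visible separation between depth layers.

```python
# Blur-circle size via thin-lens geometry (illustrative numbers only)
def image_dist(f, do):
    """Image distance for an object at do (thin-lens equation)."""
    return 1.0 / (1.0 / f - 1.0 / do)

def blur_diameter(f, aperture_d, focus_at, subject_at):
    d_sensor = image_dist(f, focus_at)    # sensor placed to focus focus_at
    d_subj = image_dist(f, subject_at)    # where the subject would focus
    # Similar triangles: the cone's apex is at d_subj, the sensor clips it
    return aperture_d * abs(d_subj - d_sensor) / d_subj

f = 0.040                      # 40 mm lens
for n_stop in (1.8, 8.0):      # aperture gates the admitted cone
    A = f / n_stop
    b = blur_diameter(f, A, focus_at=10.0, subject_at=20.0)
    print(f"f/{n_stop}: blur at 20 m when focused at 10 m = {b * 1e6:.0f} um")
```

Because the blur diameter is linear in the aperture diameter, the f/1.8 layer separation is 8/1.8 ≈ 4.4 times the f/8 one — the layers still exist when stopped down; they are simply admitted through a narrower gate.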

Fig. 3 — Focus picks distance; aperture shapes cone volume. Lens blue, sensor neutral.

6. Why light surpasses radar

The contrast with radar or LiDAR is instructive.

  • Radar/LiDAR is active. It sends out a few chosen wavelengths, waits for echoes, and scans sequentially in time. It is narrowband interrogation.
  • White light is passive. It arrives already as a broadband flood, inspecting every micro-point with thousands of concurrent wavelengths. Each point instantly reflects its fingerprint in full.

Where radar interrogates in slices, light reveals in parallel. Each cone is not a single tone but a whole orchestra at once. Our eyes and cameras reduce this to RGB, but the scene itself begins as a massive broadband, all-at-once measurement.

7. Human perception — walking inside the volume

Even with one eye closed, we can feel depth. Each cone already carries spatial cues: relative size, parallax as we move, focus blur. With two eyes, every micro-point broadcasts to two slightly different cones, angled across our skull. The brain fuses them into stereopsis, but depth was already latent in the single cone.

The pigeon multiplies this further by bobbing its head, stacking cone upon cone in real time, building a denser model of space. Human photographers do the same without noticing: by shifting slightly before pressing the shutter, they let their intuition read the scene in layered cones.

Once seen, hard to unsee: after you’ve felt true spatial coherence, “sharp but flat” images feel diagrammatic.

8. The epiphany

Once you have seen images where zero-phase holds, the difference is unforgettable. Ordinary images may be sharp, but they lack integrity. They look like diagrams, not worlds.

The first time you see a lens that truly preserves cones intact—like the Hexanon 40 mm—you realise why the word crisp feels different from sharp. Crispness is not edge contrast; it is colour integrity plus spatial coherence. It is the feeling that every point broadcasted, and the lens respected the broadcast.

This is why certain lenses feel alive. They do not invent depth or manufacture colour; they simply let the cones land as they were meant to.

From Rays to Cones — why this is different →

Diffraction — edges as broadcasters →

9. Closing mantra

“Every point broadcasts; the lens preserves; focus selects.”

What We Mean by Spatial Depth

When we talk about spatial rendering or spatial depth, we mean the literal, physical distances between objects in a scene — measured in meters — and whether a lens preserves those relationships truthfully.

Imagine sitting in a café, looking through the window at a statue with trees behind it. The statue might be 10 meters away, and the trees 20 meters away. These distances are fixed facts — they don’t change if the light changes, the weather changes, or the sun goes behind a cloud. Light intensity changes can alter your camera’s exposure time or aperture, but spatial distances remain the same.

A truthful lens shows those distances exactly as your eyes perceive them — the statue and trees keep their real relative positions, just as if you were looking through a clear pane of glass.

An untruthful lens can compress or stretch the scene. Foreground and background objects may appear unnaturally close together, or oddly far apart. The result can still look like a nice photograph, but it no longer matches the true geometry of the view.

When we say “the lens doesn’t lie”, we mean that the photograph’s internal spatial relationships are faithful to reality — the image preserves the real-world architecture of space itself.

Truthful vs Distorted: café → statue (10 m) → trees (20 m) — accurate geometry vs mild compression.
Flatliner 😵 — extreme depth compression, almost one plane (exaggerated for clarity).

⚠️ Warning: What follows may shake your beliefs.
We measure distance, not pixels. We read geometry, not sharpness.
A lens can be a truthful ruler — or a trickster.

Isn’t Zero-Phase Impossible?

In the time domain, yes. A true zero-phase filter would have to “see the future,” which causality forbids. That’s why engineers say it cannot exist. But photography is not about time; it is about space. Trees, cafés, and buildings do not change their geometry during the exposure. The whole field is already present. There is no causality trap.

In fact, taking a photo is a kind of distance measurement. The camera records how far things are from one another — the café table from the tree, the roofline from the sky, the window from the doorframe. If the lens preserves cone symmetry, these distances remain true. If it bends them unevenly, the measurement is corrupted.

A Double Gauss lens that preserves the symmetry of light cones does not distort relative spatial relationships. That is “zero-phase” in the spatial sense: not “no delay in glass,” but no hidden re-timing of geometry. The world passes through intact.

Humorous take on measuring space
“It is true, they don't move!”

The camera arrives late

Nature has already selected the scene before the photograph begins. The objects in the frame are not arbitrary appearances. They are surviving solutions.

A leaf, a tree, a stone, a shadow, or a body exists in that form because other forms could not remain under gravity, chemistry, growth, light, energy, and time. The visible world is what remains after impossibility has been removed.

Zero-phase in time is impossible.
Zero-phase in space is natural.

In signal processing, “zero-phase” filtering is non-causal: the filter needs both past and future samples to cancel phase lag. That’s feasible in software (buffering, forward-backward filtering), but impossible for real-time time-domain systems.
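A minimal SciPy sketch of that contrast (a generic low-pass example, not taken from any project code): a causal filter delays the signal, while the same filter run forward and backward with filtfilt cancels the phase lag — at the cost of needing the whole record in advance, which is exactly the non-causality described above.

```python
import numpy as np
from scipy.signal import butter, lfilter, filtfilt

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
sig = np.sin(2 * np.pi * 5 * t)                 # 5 Hz test tone

b, a = butter(4, 40 / (fs / 2))                 # 4th-order low-pass at 40 Hz

causal = lfilter(b, a, sig)                     # real-time: phase lag
zeroph = filtfilt(b, a, sig)                    # forward-backward: zero phase

def lag(y):
    """Delay of y relative to the input, via the cross-correlation peak."""
    c = np.correlate(y, sig, mode="full")
    return np.argmax(c) - (len(sig) - 1)

print("causal lag (samples):  ", lag(causal))
print("filtfilt lag (samples):", lag(zeroph))
```

The causal output lags by roughly the filter's group delay; the forward-backward output sits exactly on top of the input — zero phase, but only because the "future" samples were already buffered.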

Photography lives in the spatial domain. During exposure the scene is already present: a static field of distances and directions. A photograph is therefore a distance-preserving measurement of the scene’s geometry. The lens is the measuring instrument (the “ruler”). If it preserves the symmetry of light-cone propagation, the measurement is faithful. If it introduces frequency-dependent phase warp, the distances are silently re-scaled or tilted.

This time-vs-space confusion is not theoretical for me. In road profilometry, I initially brought time-domain reflexes to a spatial problem. I thought in terms of sequences and delay lines; but the asphalt profile is already there. Once I treated the surface as a simultaneous spatial object — a shape to be measured, not a signal to be waited on — the method clarified: guard symmetry, preserve phase, and the geometry stays honest. Lenses obey the same logic.

So when we say a Double Gauss behaves as a zero-phase spatial filter, we mean it preserves relative phase across spatial frequencies — i.e., it keeps distances and angular relations consistent — up to a trivial bulk delay through glass. It is not “no time delay”; it is no differential distortion of spatial measurement. That is why the images feel truthful.

Why Phase Alignment Matters

Tiny shifts in how different colours of light travel through glass are almost invisible at a single point. Across the whole image, those tiny shifts add up into a stable distortion of space. When the lens keeps all wavelengths in step—zero-phase—the geometry remains true and the depth intact.
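A 1-D sketch of the same point (illustrative numbers, not a lens model): give every spatial frequency of a small bright detail the same linear phase and the detail merely moves; give the frequencies a frequency-dependent phase warp and the detail smears — even though no amplitude (MTF-like) information was touched.

```python
import numpy as np

N = 512
x = np.arange(N) - N // 2
spot = np.fft.ifftshift(np.exp(-0.5 * (x / 2.0) ** 2))  # bright detail at index 0
S = np.fft.fft(spot)
f = np.fft.fftfreq(N)

# Linear phase: every frequency delayed alike -> the detail moves rigidly
moved = np.fft.ifft(S * np.exp(-2j * np.pi * f * 10)).real

# Nonlinear phase warp: each frequency re-timed differently -> the detail smears
warped = np.fft.ifft(S * np.exp(-1j * 800 * np.pi * f ** 2)).real

print("peak index, original:", int(np.argmax(spot)))   # 0
print("peak index, moved:   ", int(np.argmax(moved)))  # 10 (same shape, new place)
print("peak height, warped / original:",
      round(float(warped.max() / spot.max()), 2))
```

The linear-phase case reproduces the detail exactly, ten samples over; the warped case leaves a lower, spread-out smear in its place. Note that |S| was identical in both cases — the damage lives entirely in the phase.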

Phase Alignment vs Phase Shift (schematic, in the Armstrong spirit). Aligned: zero-phase — edges coincide, depth reads true. Shifted: edges separate, geometry drifts at the image plane.

Schematic, not to scale — the astronaut is only there to make you smile while the physics does its work.

“Pop, depth, and spatial presence in lenses are not about sharpness —
they are about phase alignment of light through the lens.”

Why Phase Alignment Matters

A lens does not always tell the truth. Colours that enter together may not arrive together. The shift is so tiny at a single point that it is invisible, yet as it repeats across the image, space itself bends.

“That’s one small step for one pixel, one giant leap for all pixels.”

— If Neil Armstrong had been a lens

What does that mean? Red, green, and blue seldom walk in perfect lockstep. The image may look sharp, but its geometry drifts. A red edge slips from its green companion; objects lean without us noticing. Because the drift is constant, we accept it as real — but it isn’t the geometry of the world.

There is another way. Some lenses keep the colours in phase. No wavelength races ahead, none falls behind. We call this rare condition zero-phase. When light is kept in step, the scene does not bend. Depth reads true. The photograph is not a distortion but a faithful map of space. This is why we say: Pop, depth, and spatial presence in lenses are not about sharpness — they are about phase alignment of light through the lens.

And the design that comes closest? The Double Gauss. Balanced, symmetrical, and strangely timeless, it keeps the colours nearly aligned. This is why certain 40 mm and 85 mm lenses feel uncanny: they preserve the order of light itself.

In practical terms: this explains why some lenses produce the famous “3D pop” or spatial rendering photographers talk about. It comes not from extra sharpness, but from preserving the phase alignment of light.

Why Double Gauss?

Because it does not invent a world — it preserves one.

The Cones of Truth: How Light Really Enters a Lens

A cone is the true messenger of light. It begins at the smallest imaginable point on the world — the tip of an eyelash, the corner of a windowpane, the glint on a dewdrop. From that point, light does not leave as a single line, though countless textbooks still draw it so. Instead, it radiates outward in all directions. A lens receives not one line, but a cone — a spread of information carrying the fullness of that point’s relation to space.

A cone is not alone. Every tiny point in the scene generates its own cone. An eyelash may be no wider than a hair, yet along its curve thousands of points scatter thousands of cones. A face reflects millions. A street, a tree, a cloud — each bursts into its own constellation of cones. They pour into the lens together, overlapping in perfect parallel, a flood of ordered geometry. The world is not given in rays. The world is given in cones.

A cone meets the aperture. The hole shrinks it, narrowing the angle, but it remains a cone — still loyal to its origin point. Whether the aperture is wide or tight, the cone never loses the truth of where it came from. A cone is a contract: this point, from here, through this spread. The aperture may thin the cone, yet it cannot erase its identity.

A cone passes through glass. Each element bends its spread, trying to gather the cone’s scattered lines into a single collapse at the image plane. If the design is faithful, the cone rejoins itself at one pixel, holding all its light in agreement. If the design falters, the cone splits — red drifting here, blue drifting there, green hovering in between. The point loses its place. The house walks, the eyelash shifts, the stars wander. The cone betrayed is reality betrayed.

Millions of cones pour in together. They do not wait in line; they do not collide. They coexist in harmony, each carrying its own point’s identity. The genius of optics is that glass can receive this impossible flood and still, in the best of cases, deliver it back intact. A photograph is nothing more than this miracle repeated for every point: cone in, cone out.

Millions of cones. Millions of negotiations happening in glass, all at once. Each cone must know where to land; each cone must avoid colliding with its neighbors. And the astonishing fact is that all of this computation happens silently, invisibly, and at the speed of light. A lens does the work of infinity in an instant, before our eyes are even aware of what has occurred.

And in a true zero-phase lens, the miracle extends further. Even the colours within each cone — the reds, the greens, the blues — arrive in agreement. They do not split or quarrel, they do not send the image drifting forward in one hue and backward in another. The cones hold steady, anchored in one shared geometry. Nothing drifts, nothing wobbles.

A cone, once begun at a point in the world, returns as a point on the sensor. And when all cones remain aligned, the world holds still.

[Diagram: cones of light passing through an aperture and collapsing to sensor pixels.]
A point in the world emits a cone of directions. The aperture narrows the cone’s spread; the lens guides it to collapse into one pixel on the sensor. Multiple points emit multiple cones in parallel (faint cones shown), each collapsing to its own pixel.

Conclusion: The Unseen Flood

Every photograph is not a flat capture, but a negotiation of cones. Every point on every object sends its cone; every cone narrows through the aperture; every cone bends through the glass; every cone collapses to a pixel. Nothing else happens. No magic, no shortcut. Only cones, millions upon millions, preserved or betrayed.

If the cones are kept in agreement, the world in the image is steady and breathable. If the cones are bent into disagreement, the world wobbles. All lens design, all optical truth, reduces to this: do the cones agree?

A lens that remembers this — that treats every cone equally, across all colours and angles — is not just sharp, it is honest. It does not invent, it does not betray. It preserves the world as it is, cone by cone, point by point, truth by truth.

Zero-Phase Primer

What it is: photography’s hidden dimension — coherence of light as the true source of spatial depth.

Read the Primer →

Why Phase

Why it matters: the missing chapter in lens history — why coherence was never spoken of, and why it must be now.

Read the Missing Chapter →

Mechanics of Light

Explore the cones in action — diagrams and step-by-step explanations of light’s geometry.

Explore the Mechanics of Light →

Double Gauss Zero-Phase Theory

Symmetry matters. Around the aperture, the Double Gauss layout mirrors itself, which can cancel phase errors and preserve angular relationships — zero-phase behavior.

The Double Gauss isn’t just a lens design — it’s an accidental triumph of physical symmetry. When light passes through its mirrored architecture, something remarkable happens: spatial coherence survives. Unlike modern optics that sculpt sharpness through intervention, the Double Gauss lets light arrive nearly unchanged in phase. The result is not clinical perfection, but perceptual trust — images where objects breathe, shadows fall honestly, and depth holds together as if the space still exists inside the photograph.

This essay proposes that the Double Gauss behaves like a passive zero-phase spatial filter, echoing the structure of IIR lattice filters in signal processing. Drawing parallels between optics and digital signal theory, it explains why some lenses don’t just resolve detail, but resolve space — and why our eyes relax when coherence is preserved.

It’s not nostalgia. It’s structure. The pop, the presence, the sense of reality — they’re not mystique. They’re phase behavior.

Foreword by ChatGPT‑5, August 2025

The Double Gauss as a Zero-Phase Spatial Filter: A Signal-Theoretic Reappraisal of Vintage Lens Fidelity

Abstract

We propose a reinterpretation of the classic Double Gauss lens as a passive, phase-preserving spatial filter. Drawing from signal processing — particularly the behavior of zero-phase IIR lattice filters — we argue that the Double Gauss’s symmetrical geometry and minimal element count unintentionally preserve wavefront coherence. This spatial fidelity, we suggest, is the technical origin of the ‘3D pop’ and presence observed in images from vintage Double Gauss lenses.

Unlike modern designs that prioritize sharpness via complex corrections, the Double Gauss transmits light with minimal disruption to its spatial structure. We support this hypothesis with optical analysis, DSP analogies, and experiential photographic evidence.

1. Introduction

The human eye does not simply perceive light—it reconstructs space. The spatial relationships, depth, and tactile realism we perceive in photographic images are not solely functions of sharpness or resolution. Rather, they depend on the integrity of the light's structure as it travels from subject to image plane.

In lens design, this spatial integrity is easily disrupted by internal reflections, excessive element counts, or aggressive aberration correction. Among the most enduring and respected designs in photographic history is the Double Gauss. From its early implementation in the late 1800s to its evolution into classic Planars, Xenons, and Hexanons, the Double Gauss has been favored for its simplicity, balance, and distinctive rendering. It remains popular in vintage photographic circles, often praised for its “pop,” “depth,” and “natural presence.” These qualities are frequently attributed to subjective factors: “vintage glass feel,” “film softness,” or “lens magic.”

This paper proposes a more technical interpretation rooted in signal theory: the Double Gauss behaves, in effect, as a zero-phase spatial filter. Though the original designers lacked the mathematical language of phase response or wavefront coherence, the geometric and symmetrical properties of the Double Gauss allow it to transmit spatial information with minimal distortion. This behavior mirrors the properties of zero-phase IIR filters used in digital signal processing—filters that modify amplitude but preserve the alignment and coherence of structures within the signal.

Through this analogy, we suggest that the Double Gauss's enduring perceptual strengths are not mystical but emergent—arising from its phase-stable transmission of light through space. This paper builds that case across optical history, signal theory, perceptual psychology, and photographic evidence.

2. Background

2.1 Historical Overview of the Double Gauss

The Double Gauss lens design emerged in the late 19th century as a modification of existing microscope and telescope objectives. First implemented by Alvan Graham Clark in 1888 and later refined by Paul Rudolph at Carl Zeiss in the form of the Zeiss Planar (1896), the Double Gauss architecture is built around a core principle of optical symmetry: two pairs of positive and negative lens elements arranged symmetrically around an aperture stop.

Over the decades, the Double Gauss configuration evolved into countless commercial and scientific lenses. Its derivatives include the Voigtländer Ultron, Schneider Xenon, Leitz Summicron, and Konica Hexanon 40mm f/1.8. Despite minor differences in groupings or cementing, the basic layout remains: a low-element-count lens with a symmetrical path that offers good correction for spherical and chromatic aberrations, as well as coma and astigmatism.

One of the main reasons for the Double Gauss’s longevity is its versatility. It can be optimized for fast apertures, flat fields, and relatively low production cost. Yet beyond these advantages, photographers and optical observers have consistently praised lenses of this lineage for producing images with a distinctive presence—a sense of realism and depth not always replicated by later, more complex designs.

While many attribute this character to vintage coatings, film response, or subjective nostalgia, the consistent visual traits suggest a structural cause. This paper explores the hypothesis that these traits originate in how the Double Gauss interacts with light at a wavefront level—not just correcting its shape, but preserving its spatial coherence.

2.2 Zero-Phase Filtering in Signal Processing

In digital signal processing (DSP), filters are designed not only to alter the amplitude of frequency components, but also to manage their phase behavior—the alignment of signal components over time or space. A zero-phase filter is one that preserves the position of features within the signal, maintaining their original phase relationships. In practice, this means that such filters can smooth or shape a signal without introducing lag or asymmetric temporal smearing: features stay where they were.

One specific class of such filters includes zero-phase IIR (infinite impulse response) filters, especially those implemented using double complementary lattice structures or realized by running a causal IIR filter forward and then backward over the data. These designs are known for their ability to achieve complex spectral shaping with minimal phase distortion. Whereas a causal IIR filter generally distorts phase unevenly across frequencies, and even a linear-phase FIR (finite impulse response) filter introduces a constant group delay, a zero-phase filter passes signals in a phase-locked fashion—modifying energy distribution without displacing structure.
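A standard way to obtain zero-phase behavior in practice is to run a causal IIR filter forward and then backward over the data (the principle behind SciPy's `filtfilt`). A toy sketch with a one-pole smoother makes the effect concrete: a forward-only pass displaces an impulse's centroid later in time, while the forward-backward pass leaves it in place:

```python
import numpy as np

def one_pole_lowpass(x: np.ndarray, a: float = 0.9) -> np.ndarray:
    """Causal one-pole IIR smoother: y[n] = (1 - a) * x[n] + a * y[n-1]."""
    y = np.zeros(len(x))
    acc = 0.0
    for n, v in enumerate(x):
        acc = (1.0 - a) * v + a * acc
        y[n] = acc
    return y

def zero_phase_lowpass(x: np.ndarray, a: float = 0.9) -> np.ndarray:
    """Filter forward, then backward: squared magnitude response, zero phase."""
    return one_pole_lowpass(one_pole_lowpass(x, a)[::-1], a)[::-1]

def centroid(y: np.ndarray) -> float:
    """Center of mass of the signal along the sample axis."""
    return float(np.sum(np.arange(len(y)) * y) / np.sum(y))

x = np.zeros(301)
x[150] = 1.0                     # an impulse: one "point" in the signal
fwd = one_pole_lowpass(x)        # causal pass smears it forward in time
zp = zero_phase_lowpass(x)       # forward-backward pass smooths it symmetrically
```

Both passes smooth the impulse, but only the forward-only version moves its center of mass: the energy is reshaped in both cases, yet the zero-phase version keeps the feature where it was.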

In imaging terms, this is analogous to allowing light to undergo contrast or tonal modification while preserving its geometric origin—its shape in space. In the visual domain, this can mean that edges, curvature, depth transitions, and texture gradients remain intact, enabling the brain to reconstruct the correct spatial context of the scene.

This paper draws an analogy between such signal processing structures and the behavior of the Double Gauss lens. Just as a lattice filter maintains the integrity of temporal or spectral alignment, the Double Gauss appears to maintain the spatial phase coherence of light, allowing a scene’s true depth and form to pass through without interruption. The resulting images feel not just sharp, but real.

3. Optical Phase Integrity and Spatial Information

Photography is fundamentally a translation of light geometry into image geometry. Every point of light that enters a lens carries more than intensity—it carries direction, curvature, and phase relationships with neighboring rays. These spatial patterns are what allow the human visual system to perceive depth, volume, and physical presence.

While lenses are typically described in terms of their sharpness, resolution, and aberration correction, such metrics fail to capture how well a lens preserves the structural coherence of the incoming light field. A lens may render a scene with high edge definition yet still lose the subtle transitions and spatial "air" that allow a scene to feel real.

In wave-optical terms, this coherence is expressed as the phase alignment of the wavefront: a faithful lens allows the spatial waveform to pass through largely intact. Disruption to this wavefront—via chromatic dispersion, internal reflections, or excessive correction—can destroy the fine spatial transitions that give form its dimensional feel.

3.1 Spatial Information in Human Vision

The human brain reconstructs space from a combination of visual cues:

  • Edge gradients
  • Parallax and occlusion
  • Contrast transitions
  • Shadow geometry
  • And most importantly: relative phase coherence of light variations across space

While this phase coherence is not consciously perceived, it is essential to our spatial cognition. The visual cortex performs a kind of analog spatial Fourier synthesis, reconstructing perceived depth from distributed cues. Even in black-and-white images, this coherence is often preserved when the lens system respects the shape of the light.

3.2 Where Modern Lenses Fall Short

Modern photographic lenses often feature:

  • 12 to 20 optical elements
  • Multiple aspherical surfaces
  • High-refractive-index glass
  • Complex cemented doublets
  • Extensive digital correction profiles

These elements are designed to optimize resolution, minimize distortion, and deliver perfect MTF curves across the frame. But they do so by intervening in the light path to an extreme degree. Each glass interface, cemented surface, or coating modifies the phase and trajectory of light in small ways. Cumulatively, these interventions can result in phase scrambling—the destruction of the fine spatial alignment that once existed between different regions of the wavefront.

This leads to an effect commonly described by photographers as "clinical rendering": scenes are technically sharp but feel flat, detached, or "dead." Depth is rendered as blur transitions rather than embedded spatial relationships.

3.3 The Double Gauss as a Passive Phase-Preserving System

In contrast, the Double Gauss architecture:

  • Contains few elements (often 4–6)
  • Employs symmetry to cancel second-order distortions
  • Offers large air gaps between groups, minimizing intra-glass interference
  • Avoids excessive correction that might disturb light’s native geometry

This means that incoming light rays—though refracted—arrive at the image plane with their spatial coherence largely intact. The lens acts as a passive transmission medium, not a restructuring system.

The result is not just a pleasant image, but a spatially faithful one. Scenes rendered through a Double Gauss-type lens retain:

  • Curvature in skin tones
  • Volumetric transitions in cloth or foliage
  • A sense of atmosphere and breathing room

These are precisely the characteristics that give rise to what photographers call "pop" or "presence"—and they are a direct consequence of the lens’s minimal disruption of spatial phase.

4. The Accidental Zero-Phase Behavior of the Double Gauss

The Double Gauss lens design, as it evolved through the 19th and 20th centuries, was not developed with wavefront phase preservation in mind. The language of spatial phase, coherence, and zero-phase filtering did not yet exist in optics or engineering. What did exist was a culture of empirical refinement: optical designers using ray tracing, bench tests, and photographic prints to judge the quality of a lens based on how natural and pleasing the image appeared.

It is within this context that the Double Gauss was refined—not for Fourier-domain performance, but for perceptual integrity. And it is here that we suggest the Double Gauss became, unknowingly, a kind of zero-phase spatial filter.

4.1 Symmetry as Passive Phase Stabilization

At the heart of the Double Gauss is its symmetry: two equal and opposite lens groups, balanced across an aperture stop. This symmetry is not only beneficial for correcting certain aberrations (e.g. coma and distortion), but also incidentally minimizes even-order phase shifts across the optical path.

In signal processing, similar symmetrical structures are used in linear-phase FIR filters or double lattice IIR filters, where mirror symmetry cancels out phase distortion across the spectrum. Likewise, the Double Gauss’s geometry ensures that rays entering the system are refracted equally on both sides of the aperture, reducing asymmetry in optical path lengths and angular deviations. This passive balance preserves the shape of the light field — not just its intensity.
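The cancellation claim can be checked numerically for the FIR case: a kernel that mirrors itself about its center has exactly linear phase, i.e. a constant group delay, so every frequency component is shifted identically and nothing smears. In the short numpy check below (the tap values are arbitrary; only their symmetry matters), multiplying the frequency response by exp(jwc), with c the center index, must yield a purely real function:

```python
import numpy as np

h = np.array([1.0, 3.0, 5.0, 3.0, 1.0])
h /= h.sum()                   # symmetric 5-tap kernel, unit DC gain
center = (len(h) - 1) / 2.0    # group delay of a symmetric FIR = center index

# Frequency response H(w) = sum_n h[n] * exp(-j w n), sampled on a grid of w.
w = np.linspace(0.1, 3.0, 200)
n = np.arange(len(h))
H = (h[None, :] * np.exp(-1j * np.outer(w, n))).sum(axis=1)

# For a symmetric kernel, exp(+j w center) * H(w) is purely real at every w:
residual_imag = float(np.abs((np.exp(1j * w * center) * H).imag).max())
```

The residual imaginary part vanishes to machine precision, confirming that the symmetric structure shifts all components by the same fixed amount rather than scrambling their relative alignment. The optical analogy is looser, of course; the point is only that mirror symmetry is a known mechanism for phase-distortion cancellation.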

4.2 Air Spacing and Minimal Interfaces

Most Double Gauss lenses contain 4 to 6 elements in 4 to 5 groups. This low element count means there are fewer glass-air transitions, and more importantly, fewer cemented surfaces that can introduce phase-scrambling due to refractive index mismatches or internal reflections.

Air-spaced groups maintain natural divergence of light rays, which allows spatial relationships to remain distinct as they pass through the lens system. The lack of aggressive flattening or correction ensures that natural field curvature and depth transitions are preserved — not flattened or sterilized.

4.3 Unintentional Coherence Preservation

At no point in its history did the Double Gauss require an understanding of optical phase or wavefront integrity to be effective. Its success came from its image results: pictures that looked real, dimensional, and emotionally faithful. This success led to widespread adoption across film cameras, projection optics, and even scientific instrumentation.

But it is precisely this unintended preservation of spatial coherence that explains why so many photographers—both amateur and professional—still seek out vintage Double Gauss lenses today. They may lack the technical vocabulary to explain what they’re seeing, but they recognize its presence. The lens did not “enhance” the image — it simply let the light arrive intact.

4.4 A Case of Emergent Optical Behavior

What we see in the Double Gauss is a case of emergent behavior: a physical system producing results that exceed the intention of its design. In the same way that a basic IIR lattice filter can produce zero-phase output under symmetry, the Double Gauss behaves as a spatial coherence pass-through — despite being conceived long before such concepts were formalized.

Its “character” is not a flaw. It is the artifact of wavefront faithfulness.

5. Experimental and Perceptual Evidence

While the zero-phase behavior of the Double Gauss can be supported theoretically through symmetry and wavefront considerations, its most compelling validation lies in the images themselves. Photographers over many decades — without mathematical models or wavefront simulators — have intuitively favored Double Gauss derivatives for their rendering of depth, presence, and natural form.

In this section, we present experiential and photographic evidence that supports the central thesis: that the Double Gauss allows spatial information to be preserved — perceptually and physically — in a way that modern, over-corrected lenses often do not.

5.1 Presence Without Blur: What Photographers Call “Pop”

The term “3D pop” is widely used in photography forums, especially among users of vintage lenses. It refers to a subjective sense that the subject is “stepping out” of the image — not via shallow depth of field or artificial isolation, but through a realistic spatial separation.

When examining side-by-side images of the same scene shot with a Double Gauss lens (e.g., Konica Hexanon AR 40mm f/1.8) and a modern multi-element lens (e.g., Nikon AF-S 50mm f/1.4G), we observe the following:

  • The Double Gauss image shows smoother tonal transitions, natural subject-background layering, and a feeling of dimensional integrity — even at f/8 or f/11.
  • The modern lens image is sharper in a technical sense, but appears flatter, with less separation between planes. It often has higher edge contrast but less internal shape within the subject.

Photographically, this distinction is not subtle. It is consistently observed across portraits, street photography, and still life — especially in natural light, where halation, atmospheric haze, and shadow rolloff all contribute to spatial realism.

5.2 Halation and Atmospheric Information

Another observation among vintage lens users is that older lenses often exhibit light bloom, halation, or soft edge glow — especially around bright subjects. While commonly dismissed as aberration, these effects may actually represent a partial phase retention artifact: they preserve how light disperses in space, rather than flattening it into surgical contrast boundaries.

In effect, these lenses do not clean up the scene — they let the light speak. The viewer's brain receives incomplete but coherent spatial data, which it resolves into depth. The modern lens, by contrast, may remove these cues in pursuit of clarity, ironically making the image feel less spatially alive.

5.3 User Experience: Intuitive Phase Recognition

Photographers often describe Double Gauss-derived lenses with phrases like:

  • “It feels like I’m there.”
  • “It breathes.”
  • “The subject belongs in its environment.”

These subjective reports align precisely with the preservation of wavefront structure — not through blur or DOF, but through microgeometry and spatial fidelity. Photographers with backgrounds in audio or DSP (including the author) have often noted that Double Gauss lenses "look the way zero-phase filters sound" — a smooth, accurate, and phase-aligned reproduction that feels emotionally right.

This correlation is unlikely to be coincidental. It suggests that human spatial intelligence is sensitive to analog phase behavior, even if the observer lacks explicit vocabulary to describe it.

5.4 Case Study: Konica Hexanon AR 40mm f/1.8

This specific lens — a six-element Double Gauss variant — demonstrates extreme perceptual depth despite being tiny, inexpensive, and technically underwhelming by modern sharpness metrics. Field tests show:

  • Strong spatial presence at f/4–f/11
  • Immediate subject-background separation even in flat light
  • No artificial bokeh or blur required for “pop”
  • Depth relationships that remain consistent when printed, viewed digitally, or reduced in resolution

These traits would be difficult to explain purely in terms of MTF or aberration correction. They are better understood as artifacts of spatial phase preservation across the optical path.

6. Implications for Modern Optical Design

The hypothesis that the Double Gauss acts as a passive zero-phase spatial filter carries significant implications for the way we design, evaluate, and perceive photographic optics today. If spatial phase coherence — rather than resolution alone — is key to depth perception and “realness” in images, then many modern lens designs, despite their technical sophistication, may be fundamentally misaligned with human visual intelligence.

This section proposes how the industry might rethink lens design philosophy, material selection, sensor interaction, and user-centered evaluation — not toward sharpness, but toward preserved spatial structure.

6.1 Perception-Aware Optical Engineering

Current design paradigms emphasize:

  • High element count for aberration correction
  • Aspherical elements for field flattening
  • Extensive coatings to suppress flare
  • Corner-to-corner resolution maximization for large sensors

While these improvements have merit for tasks like astrophotography, product imaging, or reproduction work, they may be counterproductive in general-purpose, human-centric photography, where dimensionality, atmosphere, and relational space matter more than flat-field detail.

A lens that is perfect in MTF may still be perceptually dead.

Instead, a new approach might optimize for:

  • Symmetry and balance in the light path
  • Moderate correction that avoids overprocessing the wavefront
  • Minimized glass interfaces to retain phase integrity
  • Air-spaced element groups for microstructure preservation

This requires shifting the goal from maximum measurable sharpness to maximum perceptual coherence.

6.2 Revisiting Historical Designs

Legacy lenses — particularly those derived from Double Gauss, Tessar, and Sonnar formulas — may offer a goldmine of spatial-preserving behavior that modern tools have overlooked or undervalued. These designs were often limited by glass types and mechanical constraints, yet produced images of lasting emotional resonance.

Rather than trying to replicate their “look” through software simulation or post-processing, designers might instead:

  • Evaluate their spatial fidelity on real-world scenes
  • Hybridize classic architecture with modern coatings or tolerances

This could lead to a new generation of spatially faithful optics — modern in build, but historical in light handling.

6.3 Sensor & Software Interaction

Modern digital sensors add their own challenges:

  • Microlens arrays introduce angular phase distortion
  • Stacked sensor designs (BSI, PDAF layers) further scramble coherence
  • Software correction (distortion maps, CA reduction) often applies local transformations that reduce phase fidelity

An optical system that preserves spatial coherence at the lens level may still lose it in the signal pipeline.

Thus, this theory suggests a broader design ethos:

  • Sensor and optics should be co-designed not just for resolution, but for spatial wavefront preservation
  • Future raw converters could offer phase-aware demosaicing, preserving local coherence cues during processing
  • Rendering engines might optimize for perceptual dimensionality rather than just colorimetric accuracy

6.4 A New Metric for Lenses?

We currently rate lenses by:

  • Sharpness
  • Distortion
  • Vignetting
  • CA
  • Bokeh

Perhaps we need a sixth dimension:

Spatial Fidelity Index (SFI): how well a lens preserves the phase structure of a real scene

Such a metric would be hard to reduce to a single number — but easy to sense, in a side-by-side comparison. It is the difference between seeing a photo, and feeling it.

7. Conclusion

The Double Gauss lens design, though conceived in the analog age of ray tracing and empirical refinement, reveals itself today as something far more than an artifact of optical history. When viewed through the lens of signal theory — particularly the concept of zero-phase spatial filtering — we find that this lens family exhibits properties that preserve not just sharpness or contrast, but the very structure of space carried by light.

Its symmetry, low element count, and minimal phase disturbance allow the Double Gauss to transmit wavefronts with unusually high spatial coherence. The result is an image that feels not merely sharp, but real — one in which subjects appear naturally embedded in their environments, rather than cut out and pasted onto background blur.

This paper has proposed that the Double Gauss functions as an accidental zero-phase spatial filter — an optical topology that emerged not from a theoretical understanding of wavefront behavior, but from careful observation and design intuition. Early lensmakers did not know the language of phase alignment or lattice filters, yet they built an architecture that behaves as if they did. It is a case of emergent fidelity: a form that preserved something vital before it could be named.

In contrast, modern lenses — while technically advanced — often sacrifice this coherence in pursuit of correction. The pursuit of MTF perfection has, in many cases, come at the cost of perceptual depth. Flatness, clinical rendering, and a lack of “soul” are often the unintended consequences of wavefront overprocessing.

As photography enters an era dominated by computational imaging and software-defined optics, this insight matters more than ever. It reminds us that light carries structure, and that our imaging tools must respect that structure — not merely shape it to specification. By revisiting designs like the Double Gauss, and understanding their underlying behavior, we may find a path toward a new generation of perceptually faithful optics — ones that align with the brain as much as with the sensor.

In the end, what the Double Gauss teaches us is this:
Light remembers where it came from.
The best lenses are those that let it arrive intact.

7.1 Spatial Constancy: A Note from the Road

From Road Bumps to Optical Truth: A Profilometer’s View of Spatial Constancy

In the design of road profilometers, accuracy isn’t just about resolution — it’s about repeatable spatial measurement, regardless of the vehicle’s speed or motion. When I built a system that could scan the road surface at 90 km/h using a laser triangulation sensor and an accelerometer for movement compensation, the goal was simple:

The bump in the road should appear in the same place every time.

We sampled the road every 5 centimeters. Whether the car moved fast or slow, the measurement was spatially grounded — it didn’t care about time or speed. The profilometer separated itself from the vehicle’s chaos, anchoring itself instead to the world’s shape.
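That spatial grounding can be sketched as resampling against odometry: pair each laser reading with the distance traveled at the moment it was taken, then interpolate onto a fixed 5 cm grid, so passes at very different speeds yield the same profile. A minimal sketch with a synthetic road; all names and numbers are illustrative, not the original instrument:

```python
import numpy as np

def road_height(s_m: np.ndarray) -> np.ndarray:
    """Synthetic road: a gentle wave plus one sharp bump at 12.5 m."""
    return 0.01 * np.sin(2 * np.pi * s_m / 10.0) + 0.05 * np.exp(-((s_m - 12.5) ** 2) / 0.1)

def spatial_profile(positions_m: np.ndarray, heights_m: np.ndarray, step_m: float = 0.05) -> np.ndarray:
    """Resample time-ordered (position, height) samples onto a fixed spatial grid."""
    grid = np.arange(0.0, positions_m[-1], step_m)
    return np.interp(grid, positions_m, heights_m)

# Slow pass: dense, even samples.  Fast pass: sparse, uneven samples (speed varies).
slow_pos = np.linspace(0.0, 25.0, 5001)
rng = np.random.default_rng(0)
fast_pos = np.sort(rng.uniform(0.0, 25.0, 1200))
fast_pos[0], fast_pos[-1] = 0.0, 25.0

slow_profile = spatial_profile(slow_pos, road_height(slow_pos))
fast_profile = spatial_profile(fast_pos, road_height(fast_pos))
# The bump lands at the same grid position regardless of how the vehicle moved.
```

Because the grid is defined in meters rather than seconds, the vehicle's motion drops out of the measurement, which is the sense in which the profilometer "did not care about time or speed."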

That experience left a deep impression. It became a metaphor for what I now wish from optical systems:

  • That a lens should act like a profilometer.
  • That it should measure space, not time.
  • That it should render the object as it truly is — indifferent to internal distortion, motion, or dispersion.

This is why I sometimes imagine a world where all colors of light travel at the same speed — even through glass. Not because that’s physically accurate, but because it reflects a desire for robustness in measurement. For the lens to deliver what the object emits, not what the system adds.

In this view, chromatic aberration, phase error, and asymmetry are not just technical imperfections — they are motion artifacts, noise in the measuring system, distortions of truth. And a truly symmetrical, well-corrected lens — like the Double Gauss — becomes more than just an optical formula.

It becomes an instrument of trust.
A spatial profilometer of light.

A glass lens does not know our equations. It does not trace rays or solve wave equations. It bends light not by theory, but by nature — silently, perfectly, and without permission.

Whether we model it with rays, waves, or quantum fields, the lens does not care. It only asks that light pass through, and in doing so, reveals its truth.

Our mathematics are maps — precise, powerful, beautiful — but still only shadows.
The lens is the terrain.

And in the end, it is we who are fortunate — that glass obeys laws we did not invent, and forms images we barely deserve to see.

Phenomenology of Spatial Light

This section is the conceptual foundation for everything else on the page. It introduces the idea that some lenses preserve the relationships in light (phase/coherence), which is why certain photos feel dimensional and breathable. Light isn’t just brightness or colour; it carries spatial structure—angles, wavefronts, coherence. We sense space through light before we name what we’re seeing.

Foreword to Phenomenology of Spatial Light We do not merely see — we perceive through light.

Before objects are named, before color is recognized, light reaches us — already shaped by the world it touched. It carries structure, coherence, and direction — a silent geometry that breathes space into vision.

Some lenses preserve this geometry. Some images breathe it.

This section explores how light conveys presence — how certain tools, especially vintage optics, reveal not just what we see, but how space reveals itself through light.

We begin with light. And light begins with space.

Foreword by ChatGPT-5, in dialogue with Kenneth Blake August 2025 – Written by light, through words

Do we need to see to believe—or believe to see?

Toward a Phenomenology of Spatial Light: Rediscovering Depth, Memory, and Presence in the Geometry of Vision

A meditation on photography, perception, and the unseen geometry of vision

We do not merely see with our eyes.
We perceive through light.
Before there is recognition, there is revelation.

When light from the world reaches us, it is already encoded with spatial meaning. This meaning does not arise after color is recognized or objects identified; rather, it precedes them. It lives in the fine structure of the light itself—in direction, wavefront curvature, coherence, phase variation, and microcontrast. We sense space through light long before we comprehend what we are seeing.

Modern photography has largely reduced light to color and resolution—measurable, displayable, flattened. But in the analog era of film, and through certain vintage lenses even today, there remains a trace of something deeper: spatial information not explained by sharpness alone. It is this unseen fourth element, this “S” in RGBS, that gives some images their uncanny three-dimensionality—the sense that they breathe, that they extend, that they live.

1. Light as Carrier of Spatial Information

Seeing begins before we know what we are seeing.

Photography is often described as capturing light—but this phrase is misleadingly simple. What the camera receives is not just brightness or hue, but a full geometric wavefront—a precise dance of angles, intensities, directions, and subtle phase behaviors. Light is a physical phenomenon, and every beam entering a lens carries encoded information about its origin: where it came from, how far it has traveled, what surface it bounced from, and how it was shaped by the medium it passed through.

Imagine standing before a complex scene—a city street, a forest glade, a face in profile. From every infinitesimal point on each surface, photons radiate outward in every direction. When some of those photons enter your camera lens, they don’t merely strike the sensor—they arrive bearing data: distance, texture, angle, gloss, softness, shadow. This light, shaped by space itself, becomes the raw material of the image.

And yet, in our current language of photography, we reduce it all to color and sharpness.

But light is also depth. It is spatial measurement. Just as sound is not only tone but location, light is not only illumination but geometry. If the lens is faithful and the light is allowed to arrive with its shape intact—uncorrupted by flattening filters or over-corrective coatings—then that information is preserved and passed on.

This is why certain lenses, particularly older ones with minimal coatings, seem to “see” differently. They transmit not just a color-accurate image, but a spatially honest one. The image they form is less about perfection and more about presence. You can feel the distance between foreground and background, the tension between surfaces, the air itself holding things in place.

This spatial information is not an extra—it is fundamental. It is light’s native message. And the better we listen to it—not only with cameras, but with our perceptual awareness—the deeper and more real our images become.

2. The Myth of RGB Completeness

Color is not the full story. It is a convenient shorthand for a deeper spectrum of meaning.

Modern digital imaging rests on the model of RGB—red, green, and blue—as the foundational pillars of visual representation. This model reflects the trichromatic structure of the human eye, with its three cone types tuned to approximate peaks around 420 nm (blue), 534 nm (green), and 564 nm (red). This system works well for building a reproducible color image, but it omits everything that cannot be neatly assigned a color—including much of what gives an image its presence.

To believe RGB is complete is to mistake a map for the terrain.

In reality, the visible spectrum is a continuous flow, not a three-peak mountain range. Between the RGB peaks are vast interstitial wavelengths, overlapping wavefronts, and subtle harmonics that interact in ways no simple model can encode. Add to this the infrared and ultraviolet tails—present in daylight, partially captured by some lenses, mostly filtered out by modern sensors—and you begin to realize how much of light’s richness is discarded in digital capture.
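
The reduction described here can be made concrete. The sketch below is purely illustrative: it uses hypothetical Gaussian stand-ins for the cone sensitivities (peaked near the 420/534/564 nm values cited above; the widths and line positions are assumptions, not CIE data) and constructs two physically different spectra whose three cone responses come out identical, the classic metamerism that shows how much of the continuous spectrum a three-number summary discards.

```python
import numpy as np

# Illustrative Gaussian stand-ins for the three cone sensitivities,
# peaked near 420/534/564 nm as cited above. Widths are assumptions.
wl = np.linspace(380.0, 700.0, 321)  # wavelength grid, nm

def cone(peak_nm, width_nm=40.0):
    return np.exp(-0.5 * ((wl - peak_nm) / width_nm) ** 2)

S, M, L = cone(420.0), cone(534.0), cone(564.0)

def to_lms(spectrum):
    """Collapse a full spectrum to three numbers (cone responses)."""
    return np.array([spectrum @ c for c in (L, M, S)])

# Target: a flat, broadband spectrum.
flat = np.ones_like(wl)

# Candidate metamer: three narrow emission lines with free weights,
# solved numerically so the three cone responses match exactly.
basis = np.stack([cone(p, 5.0) for p in (450.0, 540.0, 610.0)])
A = np.array([[b @ c for b in basis] for c in (L, M, S)])
weights = np.linalg.solve(A, to_lms(flat))
metamer = weights @ basis

# The two spectra differ almost everywhere, yet their three-number
# summaries match: the "map" cannot distinguish them.
print("flat    ->", np.round(to_lms(flat), 2))
print("metamer ->", np.round(to_lms(metamer), 2))
```

The point is not the particular numbers but the collapse itself: infinitely many distinct wavefront compositions fold onto the same RGB-style triplet.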

And yet, in spite of this reduction, there are images—especially those made with certain vintage lenses—that feel more spatial, more alive, even when technically "limited" to RGB data. How?

Because light doesn't just deliver hue. It delivers phase, texture, glow, and the tiny angular disparities that the eye and brain interpret as depth. These cues often fall between or outside the RGB channels. The lens may transmit them imperfectly—but when it does, the image resonates with something more than color: it breathes with space.

This is why we propose an extended model: RGB + S, where S stands for spatial information—not a color, but a dimension.

It is not a separate channel like infrared or depth map—it is a quality within light, and within the way certain lenses and sensors preserve the relationships between surfaces, light direction, and distance. This is not theory alone—it is felt in the viewing. Certain photos seem to reach out from the screen or page, not because they are sharper, but because they are more true to the original wavefront geometry of the scene.

Thus, the myth is not that RGB is wrong, but that it is enough. It serves reproduction, not presence. It builds accuracy, not intimacy. But when spatial information leaks through—when “S” rides in on the back of light—we see something more: a photograph that occupies space.

3. Black-and-White Photography and the Proof of Pop

Color is not required for spatial depth. Light alone is enough.

If spatial perception depended solely on color, black-and-white photography would be flat. Yet, some of the most compelling, three-dimensional photographs ever made are monochrome. They feel tactile. You can sense the weight of a shoulder, the grit of a street, the air between a face and a wall. This paradox reveals something crucial: spatial information does not require color. It only requires light—and the right kind of attention.

In black-and-white photography, the tonal map becomes paramount. Luminance contrast replaces chromatic contrast. Directional light reveals form not through hue, but through shape-from-shading—the way brightness gradients follow the geometry of surfaces. Edges gain definition not by color separation, but by micro-contrast—the subtle local differences in tone that simulate curvature, proximity, and tension.
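
Shape-from-shading needs nothing beyond arithmetic to demonstrate. The sketch below renders a hypothetical unit hemisphere under a single assumed light direction using Lambert's law only; the resulting "photograph" contains no colour at all, yet its brightness gradients trace the surface geometry exactly as described.

```python
import numpy as np

# A hypothetical Lambertian rendering: no colour anywhere, only
# luminance, yet the brightness gradients follow surface geometry.
# The hemisphere and the light direction are illustrative assumptions.
n = 65
xs = np.linspace(-1.0, 1.0, n)
x, y = np.meshgrid(xs, xs)
r2 = x**2 + y**2
inside = r2 < 1.0

# Unit hemisphere z = sqrt(1 - x^2 - y^2); its surface normals are
# the surface points themselves, so they are already unit vectors.
z = np.sqrt(np.clip(1.0 - r2, 0.0, None))
normals = np.dstack([x, y, z])

# Directional light from the upper left; Lambert: I = max(0, n . l).
light = np.array([-0.5, -0.5, np.sqrt(0.5)])
light /= np.linalg.norm(light)
shading = np.clip(normals @ light, 0.0, None) * inside

# Brightness peaks where the surface faces the light and rolls off
# smoothly along the curvature: shape-from-shading in one array.
print("peak luminance:", round(float(shading.max()), 3))
```

The luminance ramp alone encodes curvature and orientation, which is why a monochrome tonal map can carry full volume.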

A well-crafted monochrome image often reveals more about form and space than a color photograph. Without the distraction of hue, the eye pays more attention to volume, directional shadows, and textural clues. Vintage black-and-white film, with its soft shoulder and continuous silver halide grain structure, preserves this information in a way that digital images often struggle to replicate. Even film grain contributes to pop—adding a kind of spatial noise floor, a living texture that simulates atmospheric depth.

When rendered by the right lens—one that maintains phase integrity and avoids overly “corrective” coatings—the light in a black-and-white image carries all the ingredients needed for pop:

  • Edge contrast without digital sharpening
  • Tonality gradients that match physical contours
  • Shadow falloff that implies distance, not flatness

In fact, many photographers trained their visual sense through monochrome. Not because it is simple, but because it teaches the eye to recognize space without color cues. And what is 3D rendering, if not the ability to sense space on a flat surface?

Your own experiences scanning black-and-white negatives—particularly 120 format film from classic bellows cameras—support this. Despite minimal optical elements, those lenses often yield surprisingly rich spatial depth. Why? Perhaps because they transmit light honestly, without interference. Perhaps because the image is shaped more by light itself than by electronic correction.

So yes: pop exists in black-and-white. In some ways, it is even more honest there. It tells us that spatial information lives in light, not just in color—and that the human brain is remarkably good at sensing it when allowed to.

4. The Lens as Interpreter, Not Translator

Some lenses show you what is there. Others tell you what to see.

We often describe a lens as "transparent," as though it simply passes light unchanged from the world into the camera. But every lens interprets light. The materials, shapes, coatings, and even historical context of its design affect what the final image becomes. The best lenses don’t merely resolve—they reveal. They act less as translators, which convert the world into machine-readable format, and more as interpreters, which preserve nuance, subtlety, and intention.

The Konica Hexanon AR 40mm f/1.8 is such a lens. Despite being compact, inexpensive, and modest on paper, it consistently produces images with unexpected depth and character—particularly at smaller apertures like f/8 or f/16. What accounts for this spatial magic?

The answer may lie in the optical philosophy behind the lens.

The 40mm Hexanon, like many classic designs, is based on the double Gauss structure, known for its balance, low distortion, and organic rendering. But one peculiar aspect stands out: its unique six-element, five-group configuration includes a central fifth element that seems to behave differently—visually and perhaps even in wavefront behavior.

Some might even call it “The Fifth Element”—an apt metaphor for a component that seems to hold the image’s spatial coherence together. Whether intentionally or not, this middle element may act as a zero-phase balancer—retaining phase alignment between wavefronts across different parts of the image field. This subtle preservation of wave shape—not just color or sharpness—could be part of what creates the lens’s uncanny sense of space. Instead of flattening the image for optical perfection, the Hexanon allows light to arrive with its geometry intact.

Coatings also play a role. Modern lenses are heavily multi-coated to suppress flare, improve contrast, and maximize transmission. But in doing so, they may also filter out weak and spatially important light components—the low-level reflections, micro-glows, and subtle interference fringes that give a scene volume. Vintage lenses often have simpler coatings (or even none on internal elements), which means more light phase variation survives the journey to the sensor.

This is why modern lenses can feel sterile: they show everything, but preserve nothing unseen. The image is correct, but it does not breathe. The best vintage lenses do not merely resolve—they encode presence. They leave room for light to tell its own story.

So, when you focus manually with your Hexanon or Tamron Adaptall, you are not just dialing in sharpness. You are listening to what the lens has to say. You are letting the interpreter speak—not as a technician, but as a witness to light’s geometry.

5. Cinema, Projection, and the Light That Touches You

When film breathes light into space, we don’t just watch—we enter.

There is something unmistakable about the feeling of watching a film projected in a cinema. Even in an age of ultra-sharp home displays and 80-inch OLED televisions, the experience of the cinema often feels more real, more spatial, more immersive. But why?

The answer lies not only in the scale of the screen, but in how the light reaches you.

In the classic era of cinema, light physically passed through the emulsion of film, carrying with it the encoded geometry of the scene. This is analog light—not interpreted, not rebuilt from data, but shaped by contact with the negative. It strikes the screen with its structure intact, and then bounces into your eye, still carrying hints of the original spatial wavefronts that danced through the lens decades before.

Digital displays, for all their brilliance, reconstruct light from pixel maps. They emit light from backlit grids, edge-lit surfaces, or micro-LEDs. But they do not pass light through an image the way a projector does. They synthesize the image. They simulate glow. But they do not glow.

This difference matters. Film projection preserves depth by continuity—the analog, wave-based continuity of the light itself. It allows tiny irregularities in contrast, texture, and halation to reach the viewer, reinforcing a feeling of physical presence. This is why old black-and-white films, even when viewed today, can feel startlingly dimensional despite their limitations in resolution or dynamic range. They are spatial documents, not digital reconstructions.

Moreover, the lenses used to film those movies often lacked extreme coatings or flattening corrections. They transmitted the scene with all its light-borne imperfections—glow, flare, local contrast quirks—preserving the air between objects. On film, these imperfections were etched as part of the medium itself. When projected, the audience was bathed in that preserved light—not just seeing actors, but seeing the geometry of the moment.

It’s no surprise, then, that watching a well-shot film print can feel more emotionally present than watching the same movie digitally. The difference is not only resolution—it is spatial fidelity. The light that touched the actor’s face, that bounced off the wall of the set, that passed through the vintage lens, is the same light that reaches your eyes in the theater—echoed, but not erased.

When we say “the image reaches out from the screen,” we are not speaking metaphorically. It actually does—through depth cues, coherent light patterns, and preserved phase relationships that our visual system interprets as presence.

Cinema is spatial not just in form, but in function. It invites us not to watch—but to enter.

6. Phenomenology of Perception

We do not see what is. We see what is revealed.

Before the brain names an object, before it identifies color or classifies distance, it perceives. This perception is not passive. It is active interpretation—a pre-linguistic awareness, spatial and fluid. Phenomenology, especially in the work of Maurice Merleau-Ponty, emphasizes that perception is not a mechanical decoding of input but a living, bodily engagement with the world. You do not see space from the outside—you are inside it, embodied.

When you view a photograph that exhibits true depth, something resonates deeper than recognition. Your mind processes subtle cues—shadow direction, gradients of sharpness, luminance contours, atmospheric haze—and constructs spatial understanding. It happens before you know what you’re looking at. This is pre-object perception—the stage before naming, categorizing, or describing. It’s where 3D “pop” lives.

This perception is immediate, intimate. It mirrors the way a musician hears a phrase of music not as separate notes but as movement, or how a native speaker hears meaning before grammar. Similarly, a trained photographic eye reads spatial structure before it sees the subject. We experience volume, pressure, relational distance—a tension of nearness and farness—before we articulate what it is we are looking at.

When this pre-conscious spatial reading is strong, we often call it “realism” or “presence.” But those words are incomplete. What we’re truly feeling is the phenomenological weight of space—the sense that the image has not been flattened or translated, but instead holds the tension of the original light-geometry. It’s not about reproduction. It’s about communion.

Certain lenses and rendering conditions preserve this phenomenological depth. Others override it with perfection—flattening, equalizing, “correcting” the quirks that the body and brain actually rely on to make sense of space.

Phenomenology helps us reclaim seeing as a layered event, not just an input stream. Photography—especially with vintage tools—becomes an act of reverent witnessing, not only of what is seen but of how it reveals itself.

And so, as photographers, we learn not just to look—but to attend. We begin to understand that some images feel more “real” not because they show us more, but because they let us be in the space they show. They allow seeing before knowing. They deliver presence before proof.

7. Aberration as Memory

The Lens as a Portal to Pre-Cognitive Light

In modern optical design, aberration is treated as a flaw. Ghosting, coma, halation, chromatic edges — all are seen as defects to be engineered away, minimized, or buried beneath stacks of corrective glass. The goal is perfection. Neutrality. Clean, sharp lines and obedient light.

But what if this is a misunderstanding?

What if these so-called flaws are not distortions introduced by the lens, but residual fragments of spatial truth—traces of how light behaves before the brain reshapes it into something the mind can comfortably see?

Our visual system is not passive. It is a ruthless editor. The brain builds a stable world: flat planes, consistent forms, no distracting color spill or directional smearing. It filters light down to what serves orientation and survival. The soft veils of halation, the subtle fringe of color at high-contrast edges, the directional shimmer of off-axis glow — these are trimmed away before awareness. Not because they aren’t there, but because they don’t serve the daily story.

Photographic lenses, especially older ones with fewer corrective elements, don’t edit this way. They allow spillage — optical residue that slips past design conventions. Aberrations, in this view, are not errors, but memories:

  • Spherical aberration becomes the glow of light before mental flattening.
  • Coma records directional overflow, like movement caught in stillness.
  • Chromatic fringes may speak to depth separation between wavelengths — a depth our eyes see but never name.

In this sense, vintage lenses are portals to light before cognition. They let in fragments of a pre-linguistic visual field — aspects of reality too complex, too nuanced, or too redundant for the brain to retain. The sensor sees what the mind would forget.

This may explain why photographs made with aberration-prone lenses often feel more emotional, haunted, or dimensional. They carry not just the image, but the afterimage — a memory of space, of shimmer, of atmospheric complexity. What we call "imperfection" might actually be the ghost of presence.

So perhaps aberration is not the failure of optics.
Perhaps it is the residue of light’s full intention.

And the proof is in the pudding: modern lenses, in removing every visible anomaly, often erase the very residues that made space feel real. Clean? Yes. But spatially silent.

8. Conclusion: Toward Seeing Beyond

The photograph is not a flat artifact—it is a vessel of presence.

What we call a “photograph” is more than an image. It is a spatial imprint of a moment, carried by light and modulated by the instrument we choose to witness with. The deeper we look, the more we find that photography is not merely the act of freezing time—it is the act of honoring the shape of light in space.

As we’ve seen, the traditional RGB model of vision—while useful—does not account for the fullness of perception. Between and beyond the red, green, and blue peaks lies another dimension: S, the spatial quality of light. It is not color. It is not sharpness. It is depth-before-definition, felt more than seen.

This spatial information survives in images when it is allowed to flow through a lens unspoiled—when coatings do not suppress it, when the wavefronts remain intact, when contrast gradients and flare aren’t aggressively filtered out in post. Vintage lenses, imperfect by modern standards, often preserve this integrity of space. They do not simply “look nostalgic”—they feel truer, because they preserve light as a bearer of shape, not just hue.

What this theory ultimately proposes is not just a change in optical understanding, but a shift in perception itself. To photograph with presence is to see not just objects, but relationships. Not just colors, but coherence. Not just sharpness, but subtle tension across space.

We become students of light—not of its brightness, but of its structure.

This is also why the act of photography can change how we see the world. When we spend hours manually focusing, scanning film, adjusting for micro-contrast, or hunting for that elusive “pop,” we’re not just building technical skill—we’re training our vision. The way we perceive depth, separation, and presence in everyday life becomes more acute. Some photographers have reported, as you have, that even their own phenomenological awareness has changed—that objects seem more distinct, relationships in space more tangible.

We begin to see like lenses see, or perhaps more precisely, like light reveals.

So let us move forward not simply seeking sharper lenses or more accurate sensors, but lenses that honor the spatial language of light. Let us celebrate those tools—old or new—that preserve the wave, the breath, the curve of presence in a photograph.

And let us remember that behind every image worth keeping, there is a moment when light did not just describe the world—it revealed it.


Double Gauss Zero‑Phase Theory

When Colours Refuse to Agree: Chromatic Shift and the Zero-Phase Lens

It was as if a house, steady in reality, chose to walk about in the image depending on which colour dominated the light. Under red, the house took one step forward; under green, it leaned back; under blue, it drifted further still. The camera’s sensor received the same house, but the glass whispered three different truths.

Photography has always lived in the tension between physics and perception. A lens bends light into an image, but not all light behaves the same. Each colour is a different wavelength, and glass refracts those wavelengths differently. Red drifts one way, blue another, and green finds its own compromise. Engineers call this chromatic aberration; photographers experience it as focus shift that changes depending on what kind of light dominates the scene.

I first noticed this in practice when using FoCal software to calibrate a DSLR and a fast fifty. The Nikon 50 mm f/1.4G was under test, and FoCal offered the unusual option to chart focus curves for red, green, and blue channels separately. What emerged was startling: three distinct lines, each bending toward a different focal setting. The implication was clear — focus on a red subject and the lens was sharp in one place, but swap to a green-lit subject and sharpness shifted. A blue-lit subject fell in yet another plane.
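
Those diverging curves have a simple physical core: refractive index varies with wavelength (dispersion), and a thin lens's focal length varies with index. The sketch below uses an illustrative Cauchy dispersion model and an assumed symmetric biconvex element; the coefficients and the 50 mm radius are generic stand-ins, not measured data for the Nikon 50 mm f/1.4G or any real lens.

```python
import numpy as np

# Why focus depends on colour: index varies with wavelength (Cauchy
# dispersion), and thin-lens focal length varies with index. The
# coefficients and radius below are generic, illustrative stand-ins.
A_c, B_c = 1.5046, 4.2e3       # n(lambda) = A + B / lambda^2, lambda in nm

def n_glass(lambda_nm):
    return A_c + B_c / lambda_nm**2

R = 50.0                        # mm, assumed biconvex radii R1 = -R2 = R

def focal_mm(lambda_nm):
    # Lensmaker (thin lens): 1/f = (n - 1) * (1/R1 - 1/R2) = (n - 1) * 2/R
    return 1.0 / ((n_glass(lambda_nm) - 1.0) * (2.0 / R))

for name, lam in [("red", 656.0), ("green", 546.0), ("blue", 436.0)]:
    print(f"{name:5s} {lam:3.0f} nm -> f = {focal_mm(lam):.2f} mm")

# Blue comes to focus on a shorter plane than red: three colours,
# three focal settings, exactly the per-channel curves FoCal charts.
```

Even with a single idealised element the three channels refuse to agree on one plane, which is the whole phenomenon in miniature.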

This shifting of place can even mimic cosmic phenomena. Consider the “super moon.” Astronomers will explain that the super moon is simply a perigee — the moon being physically closer to Earth in its orbit. But to the casual eye, the moon also looks larger and nearer when the atmosphere filters it, reddening it on the horizon. The perception of closeness may not come only from orbital mechanics, but also from the way wavelength filtering shifts how our eyes and brains lock focus and scale.

The problem is not academic. Nikon’s SB-900 flash, for example, projected a red AF-assist grid in darkness. The autofocus system, obedient, focused perfectly under that red projection. But when the actual flash fired, it was white light — a blend of wavelengths. Suddenly the point of sharpness was elsewhere. The camera had been told a red-truth, but the photograph was judged against a white-truth. The result was systematic front- or back-focus: an error born of colour disagreement.

This is not a flaw of any one brand, but of an era when digital camera design was often driven by electronics and software engineers who lacked deep optical grounding. They knew how to drive a sensor or program a DSP, but not how different wavelengths scatter in glass. Products reached the market with subtle but real contradictions, leaving photographers puzzled and third-party calibration tools scrambling to compensate.

And yet — there is another way. A “zero-phase” lens, to borrow the language of signal processing, brings all colours to the same positional agreement. Whether red, green, or blue dominates, the house does not move. It remains in its rightful place, anchored by glass designed to neutralize wavelength-dependent phase shifts. Such a lens does not eliminate colour, but it aligns colour in space, so that no hue tugs reality into false motion.
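
The borrowed term can be taken literally. In signal processing, a zero-phase result is obtained by running a filter forward and then backward so the delays cancel. The sketch below (a plain moving average on a synthetic bump; filter length and signal are illustrative assumptions) shows a feature shifted by a causal pass and restored by the forward-backward pass, the positional agreement the lens analogy asks of red, green, and blue.

```python
import numpy as np

# "Zero phase" taken literally: a causal filter delays features,
# while running the same filter forward and then backward cancels
# the delay. Filter and signal here are illustrative assumptions.

def causal_ma(x, k=9):
    """One-sided moving average; group delay of (k - 1) / 2 samples."""
    kernel = np.ones(k) / k
    return np.convolve(x, kernel, mode="full")[: len(x)]

def zero_phase_ma(x, k=9):
    """Forward pass, reverse, forward pass, reverse: delays cancel."""
    return causal_ma(causal_ma(x, k)[::-1], k)[::-1]

# A smooth bump standing in for "the house" at position 100.
t = np.arange(200)
x = np.exp(-0.5 * ((t - 100) / 4.0) ** 2)

print("original peak  :", np.argmax(x))                  # 100
print("causal peak    :", np.argmax(causal_ma(x)))       # shifted
print("zero-phase peak:", np.argmax(zero_phase_ma(x)))   # back at 100
```

The causal pass moves the bump by half the filter length; the forward-backward pass smooths it identically but leaves it standing where it was. A zero-phase lens does for wavelengths what the second pass does for samples.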

Most lenses are compromises, negotiating between wavelengths like diplomats between nations. But a true zero-phase design preserves the dignity of place. The house, the tree, the face — all remain steadfast, no matter what colour of light illuminates them.

This is the essence of optical truth. Not just resolving fine detail, not just suppressing halos or fringing, but keeping the world still. When colours refuse to agree, reality wobbles; when they are reconciled, reality breathes.

Next: From colour shifts in glass to the question of truth in digital imaging — read the iPhone 15 vs. Hexanon comparison.
Cover image from The Last True Lens – iPhone vs Hexanon comparison

The Last True Lens – iPhone 15 vs. Hexanon 40mm

What does it mean to see photographically? This visual study contrasts over 10 scenes photographed with the iPhone 15 Pro Max (2× and Spatial mode) and the Konica Hexanon 40mm f/1.8 on a Nikon Z8 — revealing how vintage optics still carry the truth of space.

Where the phone smooths and simplifies, the Hexanon preserves angular depth, relational geometry, and micro-spatial integrity. This isn’t a nostalgic exercise — it’s a direct look at how computational rendering has changed what a photograph feels like.

  • 10 real-world test scenes — identical composition and light
  • Analysis of spatial collapse vs. analog coherence
  • Hexanon 40mm rendering at f/8 with phase integrity
  • iPhone Spatial mode explored from a fidelity perspective

The Fifth Element – Reexamining the Double Gauss through the Konica Hexanon 40mm f/1.8

This in-depth case study examines the Konica Hexanon 40mm f/1.8 as a living proof of the Double Gauss zero-phase theory. Rather than treating the 40 mm as just another vintage lens, the paper reveals how its geometry, element symmetry, and coating choices converge to preserve spatial coherence. The result: images with a dimensional “breath” that hold angular relationships intact from scene to sensor.

The 50 mm f/1.7 appears here not as a rival, but as a reference point — a way to highlight how small shifts in lens formula can alter phase integrity and perceived depth. The optical layout diagram below shows the construction of both, making the differences immediately visible.

Notably, the 40 mm maintains this truthful depth impression across its entire aperture range — from f/1.8 to f/22 — a rare, aperture-indifferent behavior that further confirms its phase-faithful design.

Optical layout comparison: 40mm f/1.8 (top) and 50mm f/1.7 (bottom)

View Full Technical Paper (PDF)

Research Note

The Blob Knows — Rethinking Analog Spatial Coherence in the Age of Discrete Sampling

Essay · Analog vs Digital · Optics · Perception · DSP · ~8–10 min read

We argue that analog media stores perception as amorphous …

This HTML version is provided for readability …
Prefer to revisit the optics chapter? Jump to Lens of Many Eyes.
Diagram: Continuous, phase-coherent “blobs” (left) vs a uniform pixel grid (right). The same structure, sliced into cells, loses internal relationships unless sampling, optics, and reconstruction all respect coherence.
Left: overlapping, phase-coherent regions (“blobs”) carry curvature, micro-contrast, and depth cues as a whole. Right: fixed-grid sampling fragments that structure; unless optics, sampling density, and reconstruction methods are blob-aware, presence turns into mere detail.

1. What the Pixel Missed

Digital images excel at counting—pixels, bits, lines per millimeter. But analog perception doesn’t live in counts; it lives in relationships. Blobs are irregular, soft-edged regions where the signal is encoded across space: gradients, local curvature, and phase-aligned micro-structure. Break the region, and you keep the samples—while losing the coherence that makes space feel real.

2. The Blob Defined

In film and phase-respecting optics, exposure spreads across a neighborhood—chemistry and optics weave a contiguous cluster. This “blob” embeds adjacency: tone rolls, shadow tension, local direction. It is not anti-detail; it’s supra-detail—structure that survives through detail.

3. Blob vs Pixel

A rigid grid discretizes smoothly varying shape into hard bins. Compression then discards “low-contrast” interiors—the very places blob identity lives. Without blob-aware capture and reconstruction, we trade presence for precision: technically sharp, perceptually thin.
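
The trade can be quantified in a toy model. The sketch below (a 1D Gaussian "blob"; the cell size and level count are illustrative assumptions) holds one sample per coarse cell, quantises the held values, and then counts how many neighbouring steps still differ: the smooth blob changes at every step, while the pixelated version keeps only a handful of hard transitions.

```python
import numpy as np

# Toy model of blob vs pixel: a smooth region versus its grid-sampled,
# quantised counterpart. Cell size and level count are illustrative.
t = np.linspace(-3.0, 3.0, 601)
blob = np.exp(-0.5 * t**2)        # continuous, smoothly varying region

# "Pixelate": hold one sample per coarse cell, then quantise the held
# values to a handful of tonal levels; the low-contrast interiors
# collapse onto single levels.
cell, levels = 25, 8
held = np.repeat(blob[::cell], cell)[: len(blob)]
quantised = np.round(held * (levels - 1)) / (levels - 1)

# Count neighbouring steps that still differ: in the smooth blob every
# step carries gradient information; in the pixelated version almost
# all of it is gone.
smooth_changes = int(np.count_nonzero(np.diff(blob)))
pixel_changes = int(np.count_nonzero(np.diff(quantised)))
print("steps that change, smooth   :", smooth_changes)   # 600
print("steps that change, pixelated:", pixel_changes)
```

The samples are all still "correct", yet the relational structure between them, the thing the blob actually encoded, has been discarded.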

4. Lenses That Pass the Blob

Some vintage double-Gauss–derived lenses (e.g., Hexanon 40 mm) preserve wavefront coherence with minimal element counts and balanced phase behavior. They pass the blob intact; space reads as air, not just edges. The point isn’t nostalgia—it’s physics serving perception.

5. Toward Blob-Aware Systems

Design capture and rendering around relational realism: optics that respect phase; sampling that’s dense where gradients bend; compression that preserves low-contrast interiors; displays and tone-maps that keep the S-curve alive. The result isn’t “soft”—it’s spatial.

Download The Blob Knows (PDF)

Mother holding child — opening page from The New Seeing

The New Seeing – Photography That Feels

A companion to our optical research, this book invites you into a gentler practice of photography — one where depth is presence, colour carries memory, and the lens becomes an extension of your senses.

From depth without blur to lenses that paint, from the Feminine Eye to the Gentle Photographer’s Manifesto, it blends field-tested rendering knowledge with an approach that feels as tactile as it looks.

  • Photography you can touch
  • Colour as memory, space as emotion
  • Truthful depth at f/8 and beyond
  • How to curate a portfolio by feeling

The Lens of Many Eyes

Imagine the front lens element not as a single piece of glass, but as if it were made of hundreds of tiny lenses — each with a slightly different vantage point.

Like our two eyes create stereo from small viewpoint shifts, these “micro-eyes” capture micro-differences in angle, shading, and phase. Inside the lens, their rays bend and converge into one coherent image. To the sensor it’s 2D — yet the light still carries spatial structure your brain can read as depth.

  • Surface curvature & aperture geometry mix rays from different zones, embedding tiny angular differences.
  • Symmetry & phase integrity (e.g., Double Gauss) help preserve those relationships — depth feels honest.
  • Result: a flat photograph that still breathes with space.
Read the full concept (PDF)

Rosetta Stone — Translating Blur Talk to Spatial Physics

Photographers’ words on the left; Lightographer meanings on the right. Tiny sketches hint at the mechanism.

“Creamy bokeh”

Background melts; edges gentle.

Translation: out-of-focus cones overlap Gaussian-smooth; phase relations largely preserved.

“Nervous bokeh”

Busy, double lines; shimmering.

Translation: uneven phase offsets between cones → partial interference → chaotic blur texture.

“Glow”

Highlights veil into a haze.

Translation: wavelength chaos; colour components arrive out of phase → repeated rainbow-like halos.

“3D pop”

Subject stands out naturally.

Translation: global cone coherence — peak alignment holds across neighbouring cones, not just one point.

“Fringing (purple/green)”

Coloured halos on hard edges.

Translation: chromatic phase delay — RGB components collapse at different planes → wrong amplitude mix at the pixel.

“Sharp center, soft edges”

Crisp on axis, smeary margins.

Translation: phase alignment holds near the axis; marginal rays accumulate error (coma/astigmatism).

“Character”

A lens has a signature look.

Translation: stable phase/amplitude fingerprint of how the design bends & delays cones across the frame.

Key idea: the feelings photographers describe (creamy, glow, pop) are surface symptoms of how well a lens keeps cone coherence and phase integrity.
Kenneth Blake photographing with a Nikon Z while vintage lenses float around him — tools that preserve space, not just sharpness
Tools of the trade — lenses that preserve space, not just sharpness.

You made it to the end.
If your beliefs feel a little shaken — good.
The world has not moved. Only your lens has.


Kindred Glass: Tamron 52B and Konica Hexanon 40 mm — Parallel Paths Through Light


1 – The Tamron SP 90 mm f/2.5 (52B): The Glass That Remembered

In 1979 Tamron released its first Super Performance lens, the 90 mm f/2.5 macro. The 52B carried a quiet secret: its clarity came not from computer perfection but from the chemistry of the glass itself. Patent clues (n ≈ 1.72 / ν ≈ 52 and n ≈ 1.64 / ν ≈ 35) reveal a pairing of lanthanum-rich crown and barium-dense flint — the same high-index materials that Hoya and Ohara had just made available to Japanese lens makers.

Those elements, balanced in Tamron’s floating-group macro design, cancelled chromatic discord without strangling warmth. The result was a lens that behaved phase-neutral: edges held their distance, colours met in silence, and space felt continuous. By f/8 the 52B transmits light as though the air itself were polished.
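For readers who want the arithmetic behind that cancellation, here is a minimal sketch of the textbook thin-doublet achromat condition applied to the Abbe numbers quoted above (ν ≈ 52 crown, ν ≈ 35 flint). It is illustrative only, not the 52B's actual prescription; the function and the unit total power are our own assumptions.

```python
# Illustrative only: classic thin-doublet achromat condition applied to the
# Abbe numbers from the patent clues (nu ~ 52 crown, nu ~ 35 flint).
# This is NOT the 52B's actual optical prescription.

def achromat_powers(phi_total, nu_crown, nu_flint):
    """Split total power so primary chromatic aberration cancels:
       phi1/nu1 + phi2/nu2 = 0  and  phi1 + phi2 = phi_total."""
    phi1 = phi_total * nu_crown / (nu_crown - nu_flint)
    phi2 = -phi_total * nu_flint / (nu_crown - nu_flint)
    return phi1, phi2

phi1, phi2 = achromat_powers(1.0, 52.0, 35.0)
# The crown carries strong positive power, the flint strong negative power,
# yet their chromatic contributions sum to zero.
```

The wide ν-gap between the two glasses is what keeps the individual element powers moderate; a narrower gap would force far stronger, more aberration-prone curves.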

It was an achievement of material empathy — glass so well-matched that wavefronts left the barrel already reconciled. Tamron called it “special glass.” In truth it was the first glimpse of zero-phase thinking.1


2 – The Konica Hexanon 40 mm f/1.8 AR: The Fifth Element That Breathed

At almost the same moment, Konishiroku’s engineers were designing a new fast normal for the AR mount. Their Hexanon 40 mm f/1.8 appeared outwardly ordinary, yet inside it hid a brilliant subtlety described in U.S. Patent 4,214,815 — a fifth element, a thin low-index plate placed behind the floating rear group to fine-tune the phase of the exiting wavefront and suppress residual spherical aberration and coma.

The idea came from Toshiko Shimokura, one of Japan’s few female optical designers of the era. Rather than adding more high-index glass to crush residual aberration, she proposed a nearly weightless plate whose curvature would listen to the light, balancing spherical aberration and coma by timing, not brute refraction.

Photographers later noticed what the patent predicted: the 40 mm’s images had unusually calm bokeh, minimal coma, and a sense that the lens “did not shout.” Shimokura had given the design a geometric empathy equal to the Tamron’s chemical one.2


3 – Cross-Currents and Coincidences
  • Common glass sources. In 1977 – 78 Hoya introduced the lanthanum-crown series (LaK8/9) and barium flints (BaK51 etc.), supplying nearly every Japanese manufacturer. It is entirely plausible that both Tamron and Konica received the same melt families from Hoya’s furnaces — one batch feeding Tamron’s macro prototypes, another feeding Konica’s 40 mm tests.
  • Shared coating chemistry. Each lens shows the same gold-violet MgF₂/ZrO₂ multilayer hue and sub-2 % reflectance. Coating vendors such as Nippon Sheet Glass and Sumitomo supplied both companies, ensuring that spectral transmission curves were almost twins.
  • Parallel philosophies. Tamron’s engineers refined glass pairing until the projection “felt alive.” Shimokura refined curvature until the wavefront “breathed.” One tuned matter, the other geometry — both guided by perception more than by computation.

These coincidences built a quiet bridge between two companies that seldom shared branding but clearly shared aesthetic genetics. The Tamron 52B and Konica 40 mm are not siblings by design, yet they speak the same dialect of light.


4 – Where They Differ
  • Method of harmony. Tamron SP 90 mm f/2.5 (52B): material tuning — lanthanum/barium pairing cancels chromatic drift. Konica Hexanon 40 mm f/1.8 AR: geometric tuning — a thin corrective plate adjusts phase curvature.
  • Aperture personality. Tamron: at f/8 it becomes crystalline and spatially infinite. Konica: at f/1.8 it glows with interior softness, compressing nothing.
  • Colour temperament. Tamron: neutral-warm, restrained, truthful. Konica: romantic-warm, slightly amber.
  • Spatial mood. Tamron: air with structure; analytical calm. Konica: air with emotion; lyrical coherence.

Each lens reaches balance from its own direction — one through glass density, the other through wavefront grace. Both reject the late-70s obsession with over-correction and instead embrace phase sympathy.


5 – The Possibility of Influence

Was there direct cross-pollination? Possibly. Tamron was an OEM supplier for many brands, including Konica; engineers frequently met at industry glass seminars hosted by Hoya and Ohara. The new lanthanum melts were the talk of those meetings. It is entirely believable that Konica’s team, hearing of Tamron’s success with the 52B’s glass pairing, decided to explore a lighter, geometric equivalent in their own 40 mm.

Or perhaps it was simply the glass itself that whispered the idea to both. A new material enters the lab, bends light in an unfamiliar way, and two separate teams respond with similar intuition. That’s how revolutions often happen — not through imitation, but through resonance.


6 – What Unites Them
  • Both preserve color coherence instead of chasing saturation.
  • Both maintain phase continuity across focus and aperture.
  • Both were born when Japanese optics shifted from correction to expression.

They are complementary halves of a single experiment:
Can glass, by composition or by shape, carry the memory of how light feels rather than merely how it measures?


7 – Lightographer’s Reflection

Today, when you mount either on a Nikon Z8, the decades collapse. The 52B’s lanthanum heart and the 40 mm’s breathing fifth element still perform the same act: they translate illumination into intimacy.

Tamron achieved it through matter; Konica through gesture. Both revealed, independently yet simultaneously, that the path to spatial truth runs not through aggression but through empathy.

1979 — the year Japanese glass learned to feel.


Sources, Provenance, and the Persistence of Light

Because even light leaves a paper trail.

1. Tamron Co., Ltd. (2024). Tamron SP 90 mm f/2.5 Macro 52B. Lens-DB.com. Retrieved 27 Oct 2025, from https://lens-db.com/tamron-sp-90mm-f25-macro-52b-1979/
— Describes Tamron’s OAC (Optical Aberration Compensator) system and lists the lanthanum-crown / barium-flint glass pair derived from the 1978 Japanese patent (n ≈ 1.72 / ν ≈ 52 and n ≈ 1.64 / ν ≈ 35). A quietly revolutionary pairing that let the 52B render air as structure and colour as equilibrium.

2. Shimokura, T., Matsui, M., Kobayashi, K., & Konishiroku Photo Industry Co., Ltd. (1979). Lens having a fifth element for improved correction of spherical aberration and coma (U.S. Patent No. 4,214,815). Washington, DC: U.S. Patent and Trademark Office. Issued 19 Oct 1979. Credits Toshiko Shimokura for the curvature equations that shape the thin corrective plate (≈ 0.5 – 1 mm thick) — a low-index whisper of glass that balances the wavefront rather than merely increases refractive power. In her mathematics, the 40 mm AR found its signature grace: phase correction by empathy.

“Even a millimeter of glass, if curved with care, can teach light to remember itself.”

Double‑Complementary Lattice Wave Filters

Production‑ready ANSI‑C implementations with coefficient generators (Butterworth, Chebyshev, Cauer), FFT utilities, and a method for phase‑linear output.

  • Highpass + Lowpass pair that recombines to full magnitude (power preserved)
  • Stable in adaptive scenarios; real‑time coefficient updates
  • Filterbanks, multirate, decimation/interpolation proven in the field
Download sample C code
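As a taste of the structure, here is a minimal Python sketch (not the shipped ANSI-C code) of the simplest power-complementary pair: lowpass and highpass formed as half-sum and half-difference of an identity branch and one first-order allpass section. The coefficient a = 0.4 is an arbitrary demo value.

```python
import cmath

# Simplest instance of the lattice wave idea: build LP/HP from allpass
# branches. Coefficient a = 0.4 is an arbitrary demo value.

def allpass(a, w):
    """First-order allpass A(z) = (a + z^-1)/(1 + a z^-1) at frequency w."""
    z1 = cmath.exp(-1j * w)            # z^-1 evaluated on the unit circle
    return (a + z1) / (1 + a * z1)

def lp_hp(a, w):
    """Lowpass/highpass as half-sum / half-difference with the identity branch."""
    A = allpass(a, w)
    return (1 + A) / 2, (1 - A) / 2

a = 0.4
for w in [0.1, 1.0, 2.0, 3.0]:
    lp, hp = lp_hp(a, w)
    # The pair recombines exactly to full magnitude, and power is preserved:
    assert abs(lp + hp - 1) < 1e-12
    assert abs(abs(lp) ** 2 + abs(hp) ** 2 - 1) < 1e-12
```

Because both branches are allpass, stability and the complementary property survive coefficient updates — the reason the structure behaves so well in adaptive use.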
Why zero‑phase matters

In measurement systems (profilometry, telecom), phase‑linear or zero‑phase filtering keeps features where they occurred. A bump in asphalt appears at the physical location, not shifted by group delay. That clarity is the engineering analogue of what Double Gauss symmetry preserves in images.
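The bump-stays-put claim can be demonstrated with a toy filter. This is a hypothetical first-order smoother with an arbitrary alpha, not the production pipeline; it only shows why forward-backward filtering cancels group delay.

```python
# Hypothetical first-order smoother (alpha chosen arbitrarily) demonstrating
# the forward-backward trick: filtering once in each direction cancels group
# delay, so a bump stays at its physical location.

def smooth(x, alpha=0.3):
    """One-pass exponential smoother; adds ~(1-alpha)/alpha samples of delay."""
    y, s = [], 0.0
    for v in x:
        s = alpha * v + (1.0 - alpha) * s
        y.append(s)
    return y

def zero_phase(x, alpha=0.3):
    """Forward pass, then backward pass: the phase shifts cancel."""
    return smooth(smooth(x, alpha)[::-1], alpha)[::-1]

def centroid(y):
    """Centre of mass of the response, in samples."""
    return sum(i * v for i, v in enumerate(y)) / sum(y)

bump = [0.0] * 50
bump[25] = 1.0                                       # a road bump at sample 25

assert centroid(smooth(bump)) > 26                   # one pass drags it downstream
assert abs(centroid(zero_phase(bump)) - 25) < 0.1    # two passes keep it in place
```

Production code would use the lattice structures above; the toy only isolates the delay-cancellation mechanism.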

Selected Projects & Adventures

View all projects
Road Profilometers (AXON1)

Vehicle‑mounted measurement system (Vägverket‑qualified). Linear‑phase IIR pipeline, SELCOM SLS5000 laser, µg‑resolution accelerometer, wheel‑pulse odometry. Measures IRI, RMS bands, MPD, and longitudinal profile at 5–10 cm spacing.

TDMA Power Saver (Ericsson)

Successive power ramp‑down/up with Kaiser windowing to idle time‑slots—prevented spectral splatter while saving energy. Shipped despite being considered “impossible”.
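A sketch of the windowing idea, with assumed values: the rising half of a Kaiser window makes a smooth power ramp whose sidelobes are controlled by beta. Here beta = 6.0 and the ramp length are illustrative choices, not the parameters of the shipped design.

```python
import math

# Shaping a time-slot power ramp with the rising half of a Kaiser window.
# beta = 6.0 and the 16-sample length are illustrative, not the shipped values.

def i0(x):
    """Modified Bessel function I0 via its power series (converges fast here)."""
    s = t = 1.0
    for k in range(1, 30):
        t *= (x / (2.0 * k)) ** 2
        s += t
    return s

def kaiser_ramp(n, beta=6.0):
    """Rising half of a Kaiser window, scaled to end at full power (1.0)."""
    return [i0(beta * math.sqrt(1.0 - (1.0 - k / (n - 1)) ** 2)) / i0(beta)
            for k in range(n)]

ramp = kaiser_ramp(16)
assert all(b >= a for a, b in zip(ramp, ramp[1:]))   # monotone ramp-up
assert abs(ramp[-1] - 1.0) < 1e-12                   # reaches full power
# Ramp-down into an idle slot is simply ramp[::-1].
```

An abrupt on/off step is a rectangular window in time, which splatters energy across neighbouring channels; the tapered ramp confines it.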

Optical Fibre Ribbon Splicer

Two cameras at 45°. DMA‑driven line capture and geometric remap to a normalised operator view with minimal CPU load. Parallel DSP (TMS320C40) data path.

Software Build Throughput

Demonstrated orders‑of‑magnitude faster builds by reducing hyper‑modularity (too many tiny object files) on legacy toolchains. Practical optimisation across compiler/linker I/O bottlenecks.

FAQ

How can a lens preserve depth rather than flatten it?

In symmetrical designs like Double Gauss, phase errors can cancel so wavefronts pass without spatial shift. The result is preserved angular relationships and trustworthy depth.

Can processed or compressed images still convey depth?

Yes—if they preserve relationships. Over‑compression of transition zones or heavy local‑contrast tricks can make scenes read planar, reducing SRI.

Contact

Oberon Data och Elektronik AB

  • ken @ oberon.se
  • ken.blake @ protonmail.com (use your own Proton account for a secure thread)

We don’t use cookies. Your message isn’t shared or sold.