In an article (AP 28 June 2022) I wrote about the fundamental differences in the way that autofocus works in mirrorless and SLR cameras. This applies even in cases where the cameras apparently use the same phase detection autofocus method. It arises from the fact that the specialist autofocus sensor unit in an SLR is capable of making a much more precise measurement of subject distance than the adapted imaging pixels that are used in a mirrorless camera.

The outcome of this is that mirrorless phase-detection autofocus (PDAF) is in fact a hybrid system, whereby phase detection is used for large adjustments in focus position and contrast detection is used to make the fine adjustments. It also affects the way that subject tracking works.
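For readers who like to see the idea spelled out, here is a minimal sketch of that two-stage approach. It is my illustration only, not any manufacturer's algorithm: the lens object, its move_to()/move_by() methods, the contrast_score() reading and the figure of 100 for the in-focus position are all invented for the example.

```python
class SimulatedLens:
    """Toy lens whose contrast measure peaks at position 100 (an invented figure)."""
    def __init__(self, position=0):
        self.position = position
    def move_to(self, position):
        self.position = position
    def move_by(self, step):
        self.position += step
    def contrast_score(self):
        return -abs(self.position - 100)  # sharper the closer we are to 100

def hybrid_autofocus(lens, phase_estimate, fine_step=1):
    """Coarse phase-detect jump, then contrast-detect hill climbing."""
    # Phase detection: one large move straight to the estimated in-focus
    # position (fast, but on-sensor PDAF is relatively imprecise).
    lens.move_to(phase_estimate)

    # Contrast detection: creep in small steps while sharpness keeps improving,
    # trying each direction in turn, and undo any step that makes things worse.
    for direction in (+1, -1):
        improved = True
        while improved:
            before = lens.contrast_score()
            lens.move_by(direction * fine_step)
            improved = lens.contrast_score() > before
            if not improved:
                lens.move_by(-direction * fine_step)  # step back to the best spot

lens = SimulatedLens()
hybrid_autofocus(lens, phase_estimate=97)  # PDAF gets close, contrast AF finishes the job
print(lens.position)                       # 100
```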

In an SLR, subject tracking works by calculating and extrapolating the subject's movement, based on its location relative to the array of focus points and on its distance, as precisely measured by the AF sensor unit. A mirrorless camera typically has more focus points, allowing a better estimate of subject position, but does not have access to precise subject distance. It compensates for this by using subject recognition to identify and track the subject.


While the S5II’s full-frame sensor has the same 24MP resolution as the S5’s, it now includes phase-detection elements for autofocus. Image credit: Andy Westlake

The upshot is that the best way of working with each type of camera can be quite different.

I had a chance to find this out recently when taking photos at a cricket match. In this sport, success depends on tracking the intended subject, and some shots are quite simple with respect to autofocus, for instance photographing a batter. Generally, they don’t move too far from the crease, and that small movement can be tracked quite easily.

But photographing the bowler at the point of release requires tracking them during the run-up and through the bowling action. I found that, mostly, my mirrorless camera (a Nikon Z9) could fulfil both these tasks with comparative ease.

There was a problem, however. When the bowler’s run-up went behind the umpire or the off-strike batter, the focus point would stick on that person instead of tracking the bowler. This was not a problem I ever had with a DSLR.

It’s worthwhile considering what was happening. A DSLR tracks focus on the basis of relatively little information: the location of the subject, rather roughly, in three dimensions. That is, its position within the frame, as given by which of a relatively small number of focus points is being used, and its distance, as measured by the AF sensor unit.

From successive readings, it uses the position to calculate the trajectory of the subject across the frame, and the distance to confirm that it still has the correct subject at the next reading. When tracking a bowler passing behind another person, the measured distance stays near enough to the predicted one to provide that confirmation, so the camera continues to make its next estimate and thus tracks successfully.
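To make that logic concrete, here is a toy numerical sketch, not anything taken from camera firmware. Each reading is an invented frame position (x, y) plus a distance in metres; two readings are extrapolated linearly to predict the next, and the distance check is what lets the camera tell the predicted subject from an interloper standing at a different distance. The tolerance figure is arbitrary.

```python
DISTANCE_TOLERANCE = 0.5  # metres; an arbitrary figure for this sketch

def predict_next(previous, current):
    """Linear extrapolation of (x, y, distance) from two successive readings."""
    return tuple(2 * c - p for p, c in zip(previous, current))

def same_subject(predicted, measured):
    """Confirm the subject by comparing the measured distance with the prediction."""
    return abs(predicted[2] - measured[2]) <= DISTANCE_TOLERANCE

# A bowler running in towards the camera: two readings, then a prediction.
previous, current = (0.30, 0.50, 22.0), (0.35, 0.50, 20.5)
predicted = predict_next(previous, current)   # (0.40, 0.50, 19.0)

bowler_reading = (0.41, 0.50, 19.2)   # distance matches the prediction: keep tracking
umpire_reading = (0.40, 0.50, 16.0)   # wrong distance: rejected as an interloper

print(same_subject(predicted, bowler_reading))   # True
print(same_subject(predicted, umpire_reading))   # False
```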

The mirrorless camera has more focus points and uses subject recognition to confirm that it has the right subject. But this fails when the bowler passes out of view. So the camera searches for the nearest subject that looks like the one it lost – in this case the umpire or batter.
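A crude sketch of that failure mode might look like the following. The detections, similarity scores and threshold are all invented, and real subject-recognition systems are far more sophisticated than a single threshold and a nearest-match rule; the point is only that, with the real subject hidden, the nearest plausible-looking detection wins.

```python
def reacquire(candidates, last_position, similarity_threshold=0.6):
    """Pick the plausible-looking detection nearest the last known position."""
    plausible = [c for c in candidates if c["similarity"] >= similarity_threshold]
    if not plausible:
        return None  # nothing looks like the lost subject: tracking fails
    # The nearest plausible candidate wins, even if it is the wrong person.
    return min(
        plausible,
        key=lambda c: (c["x"] - last_position[0]) ** 2 + (c["y"] - last_position[1]) ** 2,
    )

# The bowler has disappeared behind the umpire, so the only human-shaped
# detection near the last known position is the umpire; the focus point
# sticks to them.
detections = [{"label": "umpire", "x": 0.42, "y": 0.50, "similarity": 0.7}]
print(reacquire(detections, last_position=(0.40, 0.50)))
```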

The solution was to change my technique a little and start my tracking just as the bowler emerged. The subject recognition was able to lock instantaneously and give me enough time to photograph the bowling action.


Bob Newman is currently a Professor of Computer Science at the University of Wolverhampton. He has been working with the design and development of high-technology equipment for 35 years and two of his products have won innovation awards. Bob is also a camera nut and a keen amateur photographer.

