Gigapan Math, Field of View, Pixels per Degree, and More

[Embedded Gigapan image - also viewable on gigapan.org]

May 30, 2008
Shooting multiple-shot panoramas gives photographers control over aspect ratio (or shape) and resolution in a way that is otherwise impractical. With a normal camera you pretty much get a 4:3 image. You can shoot in portrait mode and get 3:4, or if you are daring go cinematic and shoot 3:2 or 2:3, but given the world of possibilities that isn't much range!

You can always crop the image to an arbitrary shape, but that is a metaphorical cookie-cutter approach. You end up with scraps of image on the sides, and a thin cookie. Multi-shot panoramas are more like making a double, triple, or centi-(?) batch of cookie dough, and then squishing it around your kitchen to fill every (rectangular) corner!

With the Gigapan you choose the upper left and lower right corners for your image. You can make a wide short image, or a tall skinny one, or anything in between!

You also get to think about the resolution of your image. You could simply shoot everything at your highest zoom, but that seems wasteful. Instead you can decide things like 'how many pixels do I want for every inch of the subject?'

Here are some notes on some of those calculations.

Angle or Field of View
http://en.wikipedia.org/wiki/Angle_of_view

angle = 2 * arctan(d / (2 * f))
where d = the size of the sensor (in the relevant dimension) and f = the focal length of the lens.
Doing everything in 35mm equivalents means the 'sensor' is 36mm x 24mm.
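
A minimal sketch of that formula in Python (the function name is my own; 36 is the horizontal dimension of the 35mm frame):

import math

def angle_of_view(sensor_mm, focal_length_mm):
    """Angle of view in degrees for one sensor dimension and a focal length."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_length_mm)))

print(angle_of_view(36, 432))  # ~4.77 degrees, the 4.8 used in the examples below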

You can determine how much will be included in a given angle of view at a given distance with this small-angle approximation:
 included = d * sin(a)
 where d = distance and a = angle of view. (The exact expression is 2 * d * tan(a/2), but for narrow angles the two agree closely.)
 Example: the 4.8 degree horizontal angle of view of a 432 mm lens at 100 feet
 included = 100 * sin(4.8) ≈ 8.4 feet (8.32 feet with the unrounded 4.77 degree angle)
(note: because of overlap this is not the same as the number of images you will need!)
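
Continuing the Python sketch (names are mine; the exact tan form is included for comparison):

import math

def included_width(distance, angle_deg):
    """Approximate subject width covered at a distance by a view angle in degrees."""
    return distance * math.sin(math.radians(angle_deg))

def included_width_exact(distance, angle_deg):
    """Exact width: two right triangles, each distance * tan(a/2) wide."""
    return 2 * distance * math.tan(math.radians(angle_deg / 2))

print(included_width(100, 4.77))        # ~8.32 feet
print(included_width_exact(100, 4.77))  # ~8.33 feet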
Pixels per Degree
 pixels per degree = horizontal (or vertical) pixels in the image / the corresponding angle of view
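
With the S5 IS numbers used below (a sketch; 3264 is that camera's horizontal pixel count):

def pixels_per_degree(pixels, angle_deg):
    """Image pixels spread across each degree of the view angle."""
    return pixels / angle_deg

print(pixels_per_degree(3264, 4.77))  # ~684 pixels per degree at full zoom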
Pixels per Inch
calculated at a given distance, say 100 ft:
 pixels per inch = p / (d * sin(a) * 12)
 where p = horizontal or vertical pixel resolution of the image
 d = distance to subject in feet (the factor of 12 converts feet to inches)
 a = horizontal or vertical angle of view
 Example: a 432 mm lens has a 4.8 degree angle of view. On the 8 MP Canon S5 IS (3264 pixels across) that is
 pixels per inch = 3264 / (100 * sin(4.8) * 12)
 pixels per inch ≈ 32.5 (32.7 with the unrounded 4.77 degree angle)
Inches per Pixel
how many inches each pixel represents at a given distance; simply the inverse of pixels per inch.
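
Both as a Python sketch (distance in feet; a continuation of the functions above):

import math

def pixels_per_inch(pixels, distance_ft, angle_deg):
    """Pixels landing on each inch of the subject at a given distance."""
    included_ft = distance_ft * math.sin(math.radians(angle_deg))
    return pixels / (included_ft * 12)

def inches_per_pixel(pixels, distance_ft, angle_deg):
    """How many inches of subject each pixel covers; the inverse of the above."""
    return 1 / pixels_per_inch(pixels, distance_ft, angle_deg)

print(pixels_per_inch(3264, 100, 4.77))   # ~32.7
print(inches_per_pixel(3264, 100, 4.77))  # ~0.031 inches per pixel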

Update Mar 8, 2012

Calculating the Angle or Field of View from a known subject width and distance

(in progress, I think this is right, but not guaranteed)
  • Set the camera on a solid mount - tripod, or GigaMacro rig
  • Take a picture of a ruler or target with known distances *
  • Measure/calculate the distance to the subject
  • Now make a right triangle (see the sketch after this list):
    a = distance to subject
    b = 1/2 the measured width of the ruler
    hypotenuse = sqrt(a^2 + b^2)
    sin(FOV/2) = b / hypotenuse
    FOV = 2 * asin(b / hypotenuse)
    * For the microscope work we used the USAF target: we selected which pairs of calibrated lines were 'clear enough,' used ImageJ to measure the number of pixels that known distance took, and then used ImageJ to create a calibration.
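
    Here is that triangle as a minimal Python sketch (the asin form and the equivalent atan form give the same angle):

    import math

    def fov_from_subject(distance, width):
        """Field of view in degrees from a measured subject width at a known distance."""
        a = distance   # adjacent side: distance to subject
        b = width / 2  # opposite side: half the measured width
        hypotenuse = math.sqrt(a**2 + b**2)
        return math.degrees(2 * math.asin(b / hypotenuse))
        # equivalently: math.degrees(2 * math.atan(b / a))

    # sanity check against the earlier numbers: 8.32 feet wide at 100 feet
    print(fov_from_subject(100, 8.32))  # ~4.77 degrees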

    Assume we have an approximately known distance, say red blood cells, which are 6-8 microns across. That is a lot of range, but we can work with it. Select a few RBCs which seem 'typical' and measure how many pixels wide they are. Say they are 12 pixels wide; then we have
    12 pixels / 6-8 microns
    The range then is 1.5-2 pixels per micron, or roughly 0.5-0.67 microns per pixel.
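
    The same arithmetic as a sketch (the 6-8 micron size and the 12-pixel measurement are the assumed values from above):

    def microns_per_pixel(measured_pixels, size_low, size_high):
        """Calibration range when the reference size is only approximately known."""
        return size_low / measured_pixels, size_high / measured_pixels

    low, high = microns_per_pixel(12, 6, 8)
    print(low, high)  # 0.5 to ~0.67 microns per pixel, i.e. 1.5-2 pixels per micron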