CT was introduced into clinical practice in 1973 and was the first diagnostic imaging modality to permit direct visualization of soft tissues in the brain. CT was also one of the first medical modalities to use computers to acquire and process imaging information. There is little doubt that the advent of computers and CT in the early 1970s heralded a major revolution in medical imaging. Medical imaging in the 21st century is both qualitatively and quantitatively different from medical imaging in the first 70 years following Roentgen's discovery of x-rays in 1895.

A single projection is the detected distribution of x-ray intensities transmitted through a patient at one angle of the x-ray tube, and typically contains approximately 700 discrete values. A single CT image can be obtained by acquiring up to approximately 1,000 projections over a full 360° rotation around the patient. Image reconstruction is normally performed using filtered back projection, whereby each projection is convolved with a "filter" before being back projected.
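The reconstruction described above can be illustrated with a toy parallel-beam simulation. The sketch below (a minimal illustration, not a clinical algorithm: real scanners use fan- or cone-beam geometry, interpolation rather than nearest-neighbour binning, and apodized filters; all function names and the disc phantom are invented for this example) forward-projects a simple phantom into a sinogram, convolves each projection with a ramp filter via the FFT, and back-projects:

```python
import numpy as np

def ramp_filter(projection):
    """Convolve one projection with the ramp (Ram-Lak) filter via the FFT."""
    n = len(projection)
    freqs = np.fft.fftfreq(n)
    return np.real(np.fft.ifft(np.fft.fft(projection) * np.abs(freqs)))

def detector_index(n, theta):
    """Detector bin hit by each image pixel for a parallel beam at angle theta."""
    xs = np.arange(n) - n / 2
    X, Y = np.meshgrid(xs, xs)
    t = X * np.cos(theta) + Y * np.sin(theta) + n / 2
    return np.clip(np.round(t).astype(int), 0, n - 1)

def forward_project(image, angles_rad):
    """Build a sinogram: one line-integral projection per tube angle."""
    n = image.shape[0]
    sino = np.empty((len(angles_rad), n))
    for i, theta in enumerate(angles_rad):
        idx = detector_index(n, theta)
        sino[i] = np.bincount(idx.ravel(), weights=image.ravel(), minlength=n)
    return sino

def filtered_back_projection(sinogram, angles_rad):
    """Filter each projection, then smear it back across the image grid."""
    num_angles, n = sinogram.shape
    recon = np.zeros((n, n))
    for proj, theta in zip(sinogram, angles_rad):
        recon += ramp_filter(proj)[detector_index(n, theta)]
    return recon * np.pi / (2 * num_angles)

# Reconstruct a simple disc phantom from 180 one-degree projections.
n = 64
xs = np.arange(n) - n / 2
X, Y = np.meshgrid(xs, xs)
phantom = (X**2 + Y**2 < (n // 4) ** 2).astype(float)
angles = np.deg2rad(np.arange(180))
sinogram = forward_project(phantom, angles)
recon = filtered_back_projection(sinogram, angles)
```

Without the ramp filter, plain back projection blurs each point into a 1/r halo; the filter's high-frequency boost is what restores sharp edges in the reconstructed slice.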

The characteristics of the resultant image depend on the choices the operator makes about how the projection data are acquired. Important choices pertain to the radiographic technique (e.g., mA, rotation time, and kV), the irradiation geometry (e.g., slice thickness and pitch), and image reconstruction (e.g., display field of view [FOV] and reconstruction filter). The effects of these parameters on important image characteristics (e.g., spatial resolution and image mottle/noise) are illustrated below.
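One of these relationships lends itself to a quick numerical check: quantum mottle falls as the inverse square root of the detected photon count, so it scales as 1/√(mAs). The simulation below (a hedged sketch: the photon yield per mAs is an arbitrary illustrative constant, not a calibrated detector value) draws Poisson-distributed counts at two tube current-time settings and compares the relative noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_noise(mAs, photons_per_mAs=1000, samples=100_000):
    """Simulate Poisson quantum noise in detected counts at a given mAs.

    photons_per_mAs is an illustrative constant, not a calibrated value.
    Returns sigma/mean, which for Poisson counts is ~1/sqrt(mean).
    """
    counts = rng.poisson(mAs * photons_per_mAs, samples)
    return counts.std() / counts.mean()

# Quadrupling mAs roughly halves the relative noise (mottle).
noise_100 = relative_noise(100)
noise_400 = relative_noise(400)
```

The same square-root trade-off is why halving image noise costs a fourfold increase in patient dose, all other technique factors held constant.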