18

Quality Control for Computed Tomography Scanners

Learning Objectives
Key Terms to Watch for and Remember

What is Quality Control?

What is quality control (QC), and how does it relate to computed tomography (CT) scanners? For CT scanners, QC may be defined as a program that periodically tests the performance of a CT scanner and compares its performance with some standard. If the scanner is performing suboptimally, then steps must be initiated to correct the problem. The goal of a QC program is to ensure that every image created by the CT scanner is a quality image, with minimum radiation dose to patients and personnel. High-quality images provide the radiologist maximum information, improve the chances for correct diagnosis, and ultimately contribute to quality patient care.
This chapter describes a generic QC program that can be adapted to almost any CT scanner system. As part of the purchase package, CT manufacturers often prescribe a daily QC program for use on their CT systems. Sometimes their QC program includes a specially furnished phantom or test object to be imaged with selected techniques. Descriptions of the testing instruments, an outline of necessary measurements, and hints on interpretation of the results are presented. In some cases, internal software is used to interpret the measurements and to notify the operator of unsatisfactory results.

Essential Steps in a QC Program

Bushong (2013) and others (IAEA, 2012; Papp, 2015) have noted that QC involves three fundamental steps:
1. Acceptance testing
2. Routine performance evaluation
3. Error correction
Acceptance testing verifies compliance, that is, whether the CT equipment meets the manufacturer's specifications (usually as outlined in the hospital's request for proposal to purchase a CT scanner) and operates efficiently in terms of various outputs, such as image-quality requirements and dose output. Acceptance testing is usually done by qualified medical physicists or others with a firm knowledge of how CT scanners work. It generally includes verifying that the features ordered were delivered, as well as checking the mechanical and electrical integrity, stability, and safety of various components (interlocks and power drives, for example) and the CT dose and imaging performance. Acceptance tests, for example, may include verification of slice thickness, CT number linearity, uniformity, spatial and contrast resolution, noise, and dose output.
Routine performance evaluation refers to monitoring the components of the CT scanner that affect dose and image quality. Monitoring includes QC tests that are performed on a daily, weekly, monthly, and yearly basis. This is part of an ongoing QC process. In general, technologists are quite active in routine performance testing of the CT scanner. Some tests, however, require the expertise of the medical physicist. Some authorities such as the American College of Radiology (ACR), the International Atomic Energy Agency (IAEA), and Health Canada often prescribe specific tests that should be done by the technologist and those that should be done by the medical physicist. For example, the ACR (2012) suggested that the following tests be done by the CT technologist:
1. CT number for water and standard deviation (noise)
2. Artifact evaluation
3. Display monitor QC
4. Visual inspection of certain components of the scanner
In addition, the IAEA recommended that the following tests be conducted by the technologist:
1. CT alignment lights
2. Scanned projection radiography (SPR) accuracy, also referred to as “scout view,” “topogram,” “scanogram,” or “pilot” (CT vendor terminology)
3. CT number, image noise, image uniformity, and image artifacts
4. Image display and printing
5. External CT positioning lasers
6. Couch top alignment and positional accuracy
7. CT number of multiple materials
The IAEA (2012) also suggested that the medical physicist should conduct specific QC tests and provide advice to CT departments on the following:
1. CT image quality and radiation protection considerations not only for the patient, but for personnel and members of the public
2. Acquisition and installations of CT scanners, including shielding considerations
3. Performance of specific QC tests (recommended for the medical physicist) on various components of the CT scanner, including acceptance testing and routine performance QC tests
4. Dose optimization
5. The specific responsibilities of the technologist in conducting the tests as required by the QC program

Equipment and Phantoms for QC Testing

Image performance phantoms are used to verify the performance of the CT scanner on image-quality characteristics such as high- and low-contrast spatial resolution and noise. Geometric accuracy phantoms include tools to measure SPR accuracy and CT laser accuracy. On the other hand, dosimetry phantoms are used to assess the CT dose output such as the computed tomography dose index (CTDI). Finally, quantitative CT phantoms are used to measure the accuracy of CT numbers.
While some phantoms are included with the CT scanner when it is delivered to the facility, others are available commercially. Two examples are the ACR CT accreditation phantom (the Gammex 464, available from Gammex, Middleton, Wisconsin) and the Catphan 700, a commercially available phantom from The Phantom Laboratory in New York.

ACR CT Accreditation Phantom

The ACR accreditation phantom is described in detail by the ACR (2015) for use in its voluntary CT accreditation program, which is intended to demonstrate to patients and to the personnel operating the CT facility the safety and effectiveness of the facility's CT imaging services.
The phantom is based on Solid Water construction to ensure stability and reproducibility of QC tests conducted over time. An important feature of this phantom is that it consists of the four modules illustrated in Figure 18-1, which are designed to measure positioning accuracy, CT number accuracy, slice thickness, light accuracy alignment, low-contrast resolution, CT number uniformity, and high-contrast resolution. Furthermore, the phantom is marked with the words “HEAD,” “FOOT,” and “TOP” (Fig. 18-1, A) to assist with proper alignment during use. An interesting and comprehensive article on the ACR CT accreditation phantom is that by McCollough et al. (2004), which outlines practical tips, artifact examples, and pitfalls to avoid; the essential criteria (tolerance or acceptance limits) remain the same. Finally, the essential specifications and characteristics of the ACR CT accreditation phantom are listed in Table 18-1.

The Catphan 700

There are several CT QC phantoms available for CT QC testing, including those provided by CT manufacturers and included in the purchase of the scanner. It is not within the scope of this chapter to describe details of all of these phantoms; however, essential features of one such commercially available phantom, the Catphan 700, will be highlighted.
The Catphan 700 Phantom (Fig. 18-2, A) was designed for The Phantom Laboratory (2015) with the assistance of Dr. David Goodenough for QC of CT volume scanners. As noted by The Phantom Laboratory, “. . . the phantom retains many of the tests and features offered in the other Catphan® models. . . .” This CT QC phantom includes three solid-cast test modules (Fig. 18-2, B) positioned inside the housing of the phantom: a sensitometry and slice geometry module (Fig. 18-3, A), a 30 line pair per centimeter (lp/cm) resolution module (Fig. 18-3, B), and a low-contrast sphere array (Fig. 18-3, C). In addition, the phantom contains the new wave insert for measuring slice geometry and resolution across the scan area (Fig. 18-3, D). For more complete details of the Catphan 700 Phantom, the interested reader should contact The Phantom Laboratory (2015).

TABLE 18-1

Specifications of the ACR CT Accreditation Phantom

Phantom Construction
Matrix material: Solid Water, 0 ± 5 HU
Length: 16 cm (6.30 inches)
Diameter: 20 cm (7.88 inches)
Weight: 5.3 kg (11.75 pounds)
Imbedded Test Objects
Water equivalent linearity rod: Solid Water, 0 HU
Bone equivalent linearity rod: 955 HU bone tissue-equivalent material
Acrylic linearity rod: Cast acrylic
Polyethylene linearity rod: Low-density polyethylene
Low-contrast module matrix: Ciba Geigy CB4 epoxy or equivalent
Low-contrast rods: Ciba Geigy CB4 epoxy (density adjusted to yield 6 ± 0.5 HU difference) or equivalent
Tungsten carbide beads: 0.011 inch diameter, grade 25 tungsten carbide beads
Line pair material: 6061 aluminum and polystyrene
Steel beads: 1.00 mm grade 25 chrome steel balls
Intramodule homogeneity: The mean ROI values within any module, test objects excluded, can differ by no more than 2 HU
Intraphantom homogeneity, modules 1, 3, and 4: The average CT number of a module must meet the requirement of 0 ± 5 HU


(Courtesy Gammex Inc., Middleton, Wis.)

Why Quality Control?

The answer to the question, “Is a QC program needed for the CT scanner?” is usually “yes.” Modern hospitals and clinics that operate CT scanners and other complex instruments have long recognized the value of QC programs to maintain high performance standards for their patients. In addition, regulatory agencies increasingly demand that some standard of quality be maintained on x-ray units and other equipment that can potentially harm patients if the units are not performing optimally. These agencies frequently require that institutions using CT scanners verify the scanner’s performance periodically (e.g., daily, weekly, monthly, annually) and often prescribe alternative measures if the performance standards are not met. To meet these regulatory requirements, a QC program must be operational.

Three Tenets of Quality Control

Three basic tenets of quality control are as follows:

Quality Control Tests for CT Scanners

Historically, current QC tests have evolved from those published in the literature (Burkhart et al., 1987; Cacak, 1985; Cacak & Hendee, 1979; McCollough et al., 2004) and described in Report 99 of the National Council on Radiation Protection and Measurements (1988). More complex physics tests are outlined in Report 39 of the American Association of Physicists in Medicine (AAPM, 1993). Additionally, Health Canada (2008) outlined CT QC tests for CT facilities in Canada, for the purposes of ensuring that CT scanners meet national guidelines to protect patients, personnel, and members of the public from radiation dose from CT scanners. Additionally, AAPM Report No. 111: Comprehensive Methodology for the Evaluation of Radiation Dose in X-Ray Computed Tomography (2010) outlined “a new measurement paradigm based on a unified theory for axial, helical, fan-beam, and cone-beam scanning with or without longitudinal translation of the patient table.” This report, as noted by the AAPM, is to “make available an accurate approach for specifying CT radiation doses based on a theoretically coherent measurement methodology that can be readily implemented by medical physicists and that can be supported by manufacturers, standards developers, and regulators.”
More recently, Nute et al. (2013) published a study that examined the “evaluation of over 100 scanner-years of computed tomography daily quality control data.”

Choosing a Technique for Quality Control Measurements

The selection of technique for the QC tests depends on the type of CT scanner and the test being performed. Many variables can be selected for each test, including peak kilovoltage (kVp), milliamperage (mA), scan time, slice width, type of algorithm, x-ray filter type, and focal spot size. The number of possible combinations of techniques is usually overwhelming, and the best that can be done is to select one or two representative techniques. In general, the technique should remain the same for a specific test from day to day. However, the technique for one test does not have to be the same as the technique for other tests. A good rule of thumb is to use a technique that matches a frequently used clinical technique. One way to select a QC technique is to choose the most frequently used head or body technique and use it for the tests. As many tests as possible should be performed with this technique, with the understanding that deviations may be required for some tests.

Test Priority and Test Frequency

It is usually necessary to limit the more complex tests to annual surveys, those occasions when the CT scanner is initially tested for acceptance, and subsequent occasions when the deterioration of image quality is suspected. It is good practice to repeat appropriate tests after replacement of a major component such as an x-ray tube or after the performance of extensive service or adjustments. If data from CT scanner images are used quantitatively or if the precision of an image is used for accurately localizing tissue (e.g., to perform biopsies or plan radiation therapy treatment), the frequency of appropriate tests should be increased.

Limits of a “Passing” Test

What are acceptable limits? How big should the window of acceptable values be for each test described here before the CT scanner is considered “out of tolerance”? The answers to these complex questions depend on the technology of the unit tested, the type of test instrument used, and the imaging technique.
Perhaps more important than the actual value of the measured variable is a change in the variable between measurements. A CT scanner that is operating the same today as it did yesterday should produce nearly the same results when the test is repeated. After acceptable limits are established, a quick inspection of the measurements can identify deviant values. Past history can provide good insight into what the values have been. A range that includes most values when the unit was operating optimally can be easily determined from an inspection of past values. Of course, it is never absolutely certain that the CT scanner was operating optimally in the past when these supposedly “good” readings were taken. But if the readings were taken when the unit was new or believed to be functioning well, then they can be presumed to be “good” readings and used as a standard.
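The baseline approach described above can be sketched in code. The following is a minimal, hypothetical illustration (the readings, the three-standard-deviation window, and the function name are assumptions for illustration, not part of any standard):

```python
# Hypothetical illustration: derive a baseline "passing" range for a daily QC
# reading (e.g., the mean CT number of water) from past measurements taken
# while the scanner was believed to be operating optimally.
from statistics import mean, stdev

def baseline_limits(past_readings, n_sd=3.0):
    """Return (low, high) control limits as mean +/- n_sd standard deviations."""
    m = mean(past_readings)
    s = stdev(past_readings)
    return m - n_sd * s, m + n_sd * s

# Daily mean CT numbers of water (HU) recorded when the unit was new:
history = [0.2, -0.4, 0.1, 0.3, -0.2, 0.0, 0.4, -0.1]
low, high = baseline_limits(history)
today = 0.1
print("within limits" if low <= today <= high else "out of tolerance")
```

A reading outside the derived window flags a change from the scanner's own history, which is often more informative than the absolute value itself.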
The IAEA (2012) provided two “performance standards” for QC test results. These include acceptable and achievable. While acceptable “indicates that performance must be within certain tolerances, and if it is not, the equipment should not be used,” achievable “indicates the level of performance that should be attained under favorable circumstances, which is the level at which a facility should work if feasible.”
It is important to realize that various authorities such as the ACR, the IAEA, and the Radiation Protection Bureau of Health Canada (RPB-HC) provide acceptance or tolerance limits that may not be exactly the same as the generic ones mentioned earlier. For example, the acceptable limits for three QC tests (CT number accuracy, CT number uniformity, and radiation dose) from these three organizations are given in Table 18-2.

Quality Control Tests

TABLE 18-2

Acceptance Criteria or Tolerance Limits for Three CT QC Tests from one International Organization (IAEA) and Two National Organizations (ACR and RPB-HC)

Acceptance criteria or tolerance limits by organization (ACR, IAEA, RPB-HC):

QC Test: CT number accuracy
• ACR: 0 ± 7 HU (±5 HU preferred)
• IAEA: 0 ± 5 HU
• RPB-HC: 0 ± 4 HU

QC Test: CT number uniformity
• ACR: The center CT number must be within ±7 HU (±5 HU preferred); the adult abdomen protocol must be used
• IAEA: ±10 HU
• RPB-HC: The CT number for water must not differ by more than ±5 HU from the center of the phantom to the periphery

QC Test: Radiation dose
• ACR: The CTDIw should not exceed 35 mGy for the adult abdomen protocol
• IAEA: CTDI within ±20% of the manufacturer's specification
• RPB-HC: CTDIw must be within ±20% of the established baseline values and the manufacturer's specifications; it is highly recommended to strive for agreement with the manufacturer's specification of within ±10%


ACR CT Accreditation QC Tests

These tests are described in detail by the ACR (2012) and by McCollough et al. (2004). As noted earlier in the chapter, the ACR CT Accreditation program requires the use of the ACR phantom and the following QC tests using the four modules that make up the phantom. These tests include CT number accuracy, slice thickness, light accuracy alignment, low-contrast resolution, CT number uniformity, and high-contrast resolution. The essential acceptance criteria for these tests are provided in Table 18-3.
Furthermore, CT number linearity and radiation dose must also be included in these tests. As described in Chapter 9, linearity refers to the relationship of CT numbers to the linear attenuation coefficients of the object to be imaged. The average CT numbers can also be plotted as a function of the attenuation coefficients of the phantom materials. The relationship should be a straight line (see Fig. 9-26) if the scanner is in good working order (Bushong, 2013). For radiation dose the CTDIvol should be measured and should be below the pass/fail criterion of 80 milligray (mGy) for the adult head protocol, 30 mGy for the adult abdomen protocol, and 25 mGy for the pediatric abdomen protocol (ACR, 2012; Papp, 2015; Wolbarst et al., 2013).
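The linearity check described above can be illustrated with a short least-squares fit. The attenuation coefficients and CT numbers below are hypothetical example values, not measured data; a coefficient of determination near 1.0 suggests the expected straight-line relationship:

```python
# Sketch (hypothetical values): check CT number linearity by fitting a straight
# line to measured CT numbers versus the linear attenuation coefficients of
# the phantom materials.
mu = [0.0003, 0.180, 0.196, 0.215, 0.380]  # cm^-1: air, polyethylene, water, acrylic, bone-like
hu = [-1000.0, -92.0, 0.0, 120.0, 955.0]   # measured CT numbers (HU)

n = len(mu)
mean_mu = sum(mu) / n
mean_hu = sum(hu) / n
sxx = sum((x - mean_mu) ** 2 for x in mu)
sxy = sum((x - mean_mu) * (y - mean_hu) for x, y in zip(mu, hu))
slope = sxy / sxx
intercept = mean_hu - slope * mean_mu

# Coefficient of determination: near 1.0 indicates a straight-line relationship.
ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(mu, hu))
ss_tot = sum((y - mean_hu) ** 2 for y in hu)
r_squared = 1 - ss_res / ss_tot
print(f"slope = {slope:.1f} HU per cm^-1, r^2 = {r_squared:.4f}")
```

A marked departure of r² from 1.0 would suggest that the scanner's CT numbers are not linear with attenuation coefficient and that service is needed.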

TABLE 18-3

Acceptance Criteria for the ACR CT Accreditation QC Tests Using the ACR Phantom

ACR QC Tests and Essential Acceptance Criteria

Light accuracy alignment: The light field and x-ray field should coincide to within 2 mm
High-contrast resolution (spatial resolution):
1. For the adult abdomen protocol: 5 lp/cm must be clearly seen
2. For the adult chest protocol: 6 lp/cm must be clearly seen
Low-contrast resolution: All four 6 mm cylinders must be clearly seen when using the adult abdomen and head CT protocols
Image uniformity:
1. ±7 HU (±5 HU preferred) for the adult protocol
2. The edge-to-center mean CT number difference must be less than 5 HU for the four edge locations
Noise: Results should be compared with the specifications of the manufacturer
CT number accuracy:
• Bone = +850 to +970 HU
• Acrylic = +110 to +130 HU
• Water = ±7 HU (±5 HU preferred)
• Polyethylene = −107 to −87 HU
• Air = −1005 to −970 HU
Slice thickness: Must be within 1.5 mm of the prescribed slice width


Data from ACR. (2012). 2012 computed tomography manual: radiologic technologists’ section. <http://www.acr.org/Quality-Safety/accreditation/CT> Accessed February 2015.

Visual Inspection

A visual inspection component of a CT QC program ensures that the integrity of radiation protection, including patient safety (ACR, 2012; IAEA, 2012), is maintained. In general, a checklist of items should be used because it provides a convenient tool for recording the status of the scanner components. While the ACR assigns this task to the technologist, the IAEA assigns it to the medical physicist.
The ACR (2012) recommended that visual inspection be performed on a monthly basis and should include items such as the following:
1. Check the functioning of the table height indicator, the table position and angulation indicators, laser localization light, x-ray on indicator, exposure switch, door interlocks, control panel switches, lights and meters, and the intercom system.
2. Check high voltage and other cables for fraying.
3. Check the window width and window level display.
4. Check for the presence of warning labels and availability of all service records.
The IAEA (2012), on the other hand, suggested that the following items be evaluated:
1. Scan protocol book
2. Items in the quality control log book (such as QC tests performed and corrective action taken) are well documented and current
3. Cleanliness of the scanner room
4. Door of the scanner room operates properly
5. Radiation protection lead shields are available
6. Emergency equipment is available
7. Window between operator’s room and scanner room provides clear view of patient (no clutter on the window)
8. Image headers are correct, including patient identification, date, and time
9. Written radiation safety procedures are up to date
10. Staff (including radiologists) understands and follows radiation safety procedures (IAEA, 2012)

Artifact Evaluation

Proposed QC Tests

Eighteen QC tests are described below: CT number calibration; standard deviation of CT number in water; high-contrast resolution; low-contrast resolution; accuracy of the distance-measuring device; uniformity or flatness of CT number; hard copy output; accuracy of the localization device; CT bed indexing; CT bed backlash; light field accuracy; pitch and slice width (spiral/helical scanners); CT number versus patient position; CT number versus patient size; CT number versus algorithm; CT number versus slice width; noise characteristics; and radiation scatter and leakage. Each of these is described next. It is important to note that because the use of film has become obsolete, computed radiography (CR) cassettes can be used in the tests that mention the use of film.
• TEST 1: Average CT number of the water (CT number calibration)
Phantom or equipment: A simple water-filled cylindrical plastic container approximately 20 cm in diameter. Commercial phantoms are available for this test and are often provided by the CT manufacturer, but some institutions have used 1-gallon plastic containers from liquid laundry bleach. The bleach, of course, is replaced with water.
Two media that serve as calibration points for CT numbers are water and air. Occasionally (e.g., once a month), move the ROI outside the phantom into the region of the image that is known to contain air. Check the average CT number of air. It should be −1000 if the CT scanner is calibrated properly.
Expected results: The average CT number of water should be very close to zero.
Acceptance limits: If the average CT number of water is more than three CT numbers away from 0 (i.e., outside the range −3 to +3 Hounsfield units [HU]), the CT scanner fails the test. The CT number of air should be −1000 ± 5 HU.
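These acceptance limits can be expressed as a simple pass/fail check. A minimal sketch, with hypothetical helper names and example values:

```python
# Sketch of the Test 1 acceptance check: the mean CT number of water must lie
# within 0 +/- 3 HU, and the mean CT number of air within -1000 +/- 5 HU.
def water_passes(mean_hu):
    return -3.0 <= mean_hu <= 3.0

def air_passes(mean_hu):
    return -1005.0 <= mean_hu <= -995.0

print(water_passes(1.2), air_passes(-998.0))   # True True
print(water_passes(4.0), air_passes(-1010.0))  # False False
```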
• TEST 2: Standard deviation of CT number in water
Measurement: Use the same image as in Test 1 (Fig. 18-4). Position the ROI near the center of the phantom image and measure the standard deviation of the CT number.
Expected results: Typical values are in the range of two to seven CT numbers. The actual value will depend on the dose at the location of the ROI, which depends on the kVp, mA, scan duration, slice width, and phantom size. The standard deviation of the CT number also depends on the type of reconstruction algorithm (can be higher with sharp algorithm compared with smooth algorithm) and the position of the ROI (slightly smaller at the edge of the phantom compared with the center). Ensure that the technique is the same each day and that the standard deviation is measured at the same place each time (e.g., the center of the phantom).
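The computation behind this measurement can be sketched as follows. The ROI pixel values here are synthetic (simulated Gaussian noise); the point is only to show how the mean and standard deviation would be computed and compared with the expected 2 to 7 HU range:

```python
# Sketch: compute the mean and standard deviation of CT numbers inside a
# central ROI of a water-phantom image (the pixel values are synthetic).
import random
import statistics

random.seed(1)
# Simulated ROI pixel values for a water scan: mean ~0 HU, noise sigma ~4 HU.
roi = [random.gauss(0.0, 4.0) for _ in range(300)]

mean_hu = statistics.mean(roi)
noise_sd = statistics.stdev(roi)
print(f"mean = {mean_hu:.2f} HU, noise (SD) = {noise_sd:.2f} HU")
# Daily QC: record noise_sd and compare it with the 2-7 HU expected range and
# with the value measured at the same ROI position on previous days.
```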
FIGURE 18-6 CT image of the high-contrast test pattern shown in Figure 18-5. The CT scanner is judged by the smallest row in which all five holes can be seen (arrow).
Measurement: On the axial CT image (Fig. 18-6), determine the smallest row of holes in which all holes can be clearly seen. The smaller the holes that can be clearly seen, the better the performance of the CT scanner. Be certain that all holes in a row can be seen in the image. Sometimes it appears that one fewer hole is seen in the row than is actually in the phantom; this is usually a phase-reversal phenomenon, and the row should not be counted as a complete set of holes. For a bar pattern, the smallest group in which the bars and spaces are distinctly visualized indicates the level of high-contrast resolution.
Expected results: Most modern CT scanners have a high-contrast resolving power slightly smaller than 1 millimeter (mm) when a typical head technique is used. Therefore, they will be able to visualize a complete set of holes in some of the rows in the range of 0.75 to 1.0 mm. With the highest resolution technique available to a particular scanner, some CT scanner manufacturers claim to be able to visualize holes as small as 0.25 mm.
Acceptance limits: This baseline number should be established at the time of the acceptance test when the CT scanner is working well by scanning the phantom and noting the smallest set of holes that can be seen. This initial measurement becomes the baseline for future tests. Subsequent tests can be compared with this baseline. Alternatively, the manufacturer’s specifications for this test can be used to verify that the performance of the CT scanner is at least as good as the specifications.
Possible causes of failure: Enlarged focal spot in the x-ray tube, excessive mechanical wear in the motion of the gantry, mechanical misalignments or poor registration of electromechanical components, vibrations, or detector failures. If the resolution has degraded from the baseline, inform the service person.
Frequency: This should be performed at the time of installation as part of the acceptance test and biannually thereafter.
• TEST 4: Low-contrast resolution
Phantom or equipment: Low-contrast resolution pattern in imaging phantom. A quick and easy test pattern of low-contrast test objects consists of a series of holes (2 to 8 mm diameter) drilled in polystyrene. The holes are filled with liquid (often water) to which has been added a small amount of some other material (usually methanol or sucrose) to bring the liquid’s CT number close (about 0.5% different) to that of the plastic itself. One such pattern (Fig. 18-7) consists of a series of rows of holes drilled in relatively thick plastic. Each row contains holes of a constant diameter. The holes decrease in size from one row to the next. In a CT image, the holes appear to have a density similar to their surround (i.e., the holes have low contrast).
Another technique is to use partial volume averaging by making the plastic very thin (e.g., a plastic membrane). Low contrast in the image is achieved by a different principle than the solid plastic type of low-contrast phantom. The membrane type of phantom consists of a thin membrane containing a pattern of holes (the same pattern shown in Fig. 18-7). The membrane is stretched across a plane of the phantom and is then immersed in water. The CT x-ray beam, as visualized from its edge (Fig. 18-8), strikes mostly water. But a small fraction of the beam is absorbed by the plastic, forming a faint (low-contrast) image of the hole pattern. By varying the thickness of the plastic relative to the width of the x-ray beam, the contrast can be varied.
Expected results: The smallest holes that can be imaged by modern CT scanners should be 4 to 5 mm in diameter or smaller for 0.5% contrast objects. Perhaps more important, the minimum size of holes visualized should not increase over the life of the scanner.
Some institutions have used an image of a regular square grid that covers most of the field of view. One source of square grids is a type of fluorescent light fixture that uses square grid light diffusers made of plastic with a square spacing of about 0.5 inch (12 mm). With a moderate amount of effort, the grids can be cut to fit inside a phantom or scanned in air (no phantom).
• TEST 5: Accuracy of distance-measuring device
Measurement: With use of the distance-measuring feature available on the video monitors of most scanners, measure the distance between two well-visualized holes near the periphery of the phantom, one near the top and one near the bottom (Fig. 18-11). Repeat the measurement between two holes, moving right to left. If required, a diagonal measurement between two holes can be made and the true distance can be calculated by the Pythagorean theorem.
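The diagonal case can be worked as a short example; the hole separations used below are hypothetical:

```python
# Sketch of the diagonal-distance calculation: the true distance between two
# holes follows from the Pythagorean theorem applied to their horizontal and
# vertical separations.
import math

def true_distance(dx_mm, dy_mm):
    return math.hypot(dx_mm, dy_mm)

# Hypothetical example: holes separated by 120 mm horizontally and 90 mm
# vertically should measure 150 mm diagonally on a correctly calibrated scanner.
print(true_distance(120.0, 90.0))  # 150.0
```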
• TEST 6: Uniformity or flatness of CT number
Measurement: Using the ROI feature available on most CT scanners, measure the CT number of water near the top, bottom, right, and left of the phantom (Fig. 18-12). Use an ROI large enough to cover an area of 200 to 300 pixels. Compare with the measurement in Test 1.
Expected results: Ideally, the CT number of water will be zero at all points in the phantom.
Acceptance limits: If the CT number anywhere in the phantom differs by more than five CT numbers from the average CT number collected from all measurements, then the CT image is not uniform or flat. If the CT number is high in the center and low near the perimeter of the phantom, the image exhibits capping; a low value in the center relative to the edges indicates cupping.
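The capping/cupping logic above can be sketched in Python. The ±5 HU tolerance follows the acceptance limit just stated; the classification function and the ROI values are hypothetical illustrations:

```python
# Sketch: classify a water-phantom image as uniform, capped, or cupped from
# the mean CT numbers of a center ROI and four edge ROIs.
def uniformity_report(center_hu, edge_hus, tolerance_hu=5.0):
    all_values = [center_hu] + list(edge_hus)
    average = sum(all_values) / len(all_values)
    if all(abs(v - average) <= tolerance_hu for v in all_values):
        return "uniform"
    edge_mean = sum(edge_hus) / len(edge_hus)
    return "capping" if center_hu > edge_mean else "cupping"

print(uniformity_report(0.5, [-0.2, 0.3, -0.5, 0.1]))    # uniform
print(uniformity_report(9.0, [-4.0, -3.5, -4.5, -3.0]))  # capping
```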
• TEST 7: Hard copy output
Expected results: The same image should be reproduced with the hard copy device each time the image is recorded.
Acceptance limits: If the hard copy image is unable to display both 5% and 95% patches, examine the display setting and also the printer setting. If the condition still exists, then investigate further with a service person to reset the hard copy printer settings.
Possible causes of failure: Most frequently, drifts in the optical density of films from the camera can be traced to problems in the film processing. However, if the processor has been eliminated as a source of the problem, then the camera must be assumed to be the errant instrument. Sometimes the video monitor, laser, or other light device used to expose the film has changed its output. In this case a service person should be called for repairs.
Frequency: This should be performed at the time of installation as part of the acceptance test and annually thereafter.
• TEST 8: Accuracy of localization device
Phantom or equipment: A test object with a target that can be aimed for in the localization image and a gauge that indicates how far the resulting CT images fall from the target. One example of this phantom is a set of two small holes drilled in plastic that are perpendicular to each other but at 45 degrees to the plane of the image. A cross-sectional drawing of the device is shown in Figure 18-14. The two holes are offset slightly and do not intersect. The localization target is centered on the point where the two holes appear to intersect in the localization image, and a scan is performed. After the CT image is reconstructed, the holes should appear directly opposite each other with perfect alignment between the holes. If there is offset in the holes, the scan is not being performed where the localization image shows it to be.
Measurement: Image the phantom by using the localization device (sometimes called a scout or targeting image). With this localization image, set up the scanner to make a single scan at a certain thickness so that the center of the scan is directly on the intersection of the holes. Make a scan and reconstruct the image. At the very least, both holes should appear in the CT image. If they do not, then the localization device is so poorly adjusted that the width of the x-ray beam does not intersect the plastic section in which the holes are drilled. If the localization device is working properly, the image of the two holes should appear exactly side by side (Fig. 18-15). If the holes are not aligned, then the center of the slice is off target. The distance that the center of the CT image is located from its targeted position (the intersection of the holes) can be quantified by measuring the amount of offset of the two holes in the image. By using the distance-measuring device on the video monitor (the measurement can be made with a ruler on the video monitor or on the hard copy image if there is no distortion in these devices and if appropriate compensation for the magnification of the image is made), measure the distance from the tip of one hole to the tip of the other hole (Fig. 18-16). The distance that the center of the CT slice is from the targeted location is equal to the length L.
• TEST 11: Light field accuracy
Measurement: Tape a sheet of Ready-pak film to the patient bed. Raise the patient bed so that the film is approximately centered (vertically) in the gantry opening. Turn on the external or internal light field (some CT scanners use a laser beam) that indicates the location of the first scan. With a needle or other sharp object (e.g., a penknife), poke two very small holes through the paper wrapper of the film and into the film (Fig. 18-19). The two holes should be exactly on top of the light field, with one hole near the left edge of the film and the other on the right edge. These holes, which will be visible after the film is processed, will indicate the location of the light field.
If an external light field was used, move the bed into position for the first scan. Make a medium-technique scan with the slice width set to the minimal width. The radiation should produce a narrow dark band on the film that indicates where the radiation struck the film. Process the film and examine the location of the dark band relative to the two pinholes.
Expected results: If the light field is correctly centered on the radiation field, which is also the position of the image, the dark exposed band caused by the radiation should be centered on both pinholes.
Acceptance limits: The light field should be coincident with (i.e., on top of) the radiation field to within 2 mm.
Possible causes of failure: Often the optical field light system is out of alignment. Less frequently, the x-ray tube may have been installed off center. Notify your service person.
Frequency: This should be performed at the time of installation as part of the acceptance test and annually thereafter.
• TEST 12: Pitch and slice width (spiral/helical scanner)
Note: A single test may be used to determine both the slice width and pitch of spiral/helical CT scanners. For a CT scanner with a single array of detectors, the pitch is defined as the ratio of the bed movement (mm) that occurs during one complete tube revolution to the slice width (mm), and the slice width itself is determined by the collimator spacing.
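The pitch definition above can be expressed as a simple calculation; the numeric values below are hypothetical examples, not figures from the text.

```python
def pitch(bed_travel_per_rotation_mm: float, slice_width_mm: float) -> float:
    """Pitch for a single-detector-row scanner: bed travel (mm) per
    complete tube revolution divided by the slice width (mm)."""
    return bed_travel_per_rotation_mm / slice_width_mm

# 10 mm of bed travel per rotation with a 10 mm slice width gives a
# pitch of 1.0 (contiguous slices); 15 mm of travel gives 1.5.
print(pitch(10.0, 10.0))  # 1.0
print(pitch(15.0, 10.0))  # 1.5
```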
From this same set of images, the slice overlap or gap may be determined. To do this, overlay any two adjacent images, electronically if possible. If the images cannot be added electronically (some scanners do not have this feature), then make a film copy of the two images. Cut the adjacent images from the hard copy film and manually overlay them on a viewbox.
Expected results: First, the beam width measured from the image should agree with the set (nominal) beam width, using the slice-width measurement technique described earlier. Next, examine the overlaid images for correct pitch. The images of the wires (inclined at 45 degrees) should appear in different positions in the two images. If the bed indexing is exactly the same as the slice width, the images of the wire segments should just touch at the two ends of the wire that are closest to each other. If the ends of the wire appear to overlap, as shown in Figure 18-20, B, the adjacent slices also overlap. If the two images of the wires do not touch at the ends, as shown at the bottom of Figure 18-20, C, the adjacent slices have a gap between them. Ideally, the ends of the wires will just touch; either overlap or a gap indicates that the bed indexing is not the same as the slice width. If the bed-indexing test (Test 9) verifies the bed-indexing accuracy, then the slice width is usually at fault.
Acceptance limits: For slice widths of 7 mm and greater, the measured slice width should agree with the nominal slice width to within 2 mm. The gap or overlap between adjacent slices should be less than 3 mm. Unfortunately, at narrower slice widths and bed-index settings, the discrepancy between nominal and measured values often becomes greater, and these limits may be relaxed somewhat.
Possible causes of failure: Errors in beam width are usually caused by miscalibration of the mechanism (e.g., shutters or collimators) that collimates the portion of the x-ray beam reaching the detectors. Overlap or gaps in adjacent images, or improper pitch settings, may be caused by inaccuracies in the bed indexing (see Test 9) or, more frequently, by inaccuracy in the slice width setting. In either case, notify a service person.
Frequency: This should be performed at the time of installation as part of the acceptance test and annually thereafter.
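The overlap-or-gap logic of Test 12 reduces to the difference between the bed indexing and the slice width; the sketch below uses hypothetical values for illustration.

```python
def slice_gap_mm(bed_index_mm: float, slice_width_mm: float) -> float:
    """Difference between bed indexing and slice width:
    positive -> gap between adjacent slices,
    negative -> overlap,
    zero     -> slices just touch (the ideal case)."""
    return bed_index_mm - slice_width_mm

# Hypothetical examples: 10 mm indexing with an 8 mm slice leaves a
# 2 mm gap; 7 mm indexing with a 10 mm slice overlaps by 3 mm.
print(slice_gap_mm(10.0, 8.0))   # 2.0
print(slice_gap_mm(7.0, 10.0))   # -3.0
```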
• TEST 13: CT number versus patient position
Phantom or equipment: A simple cylindrical plastic container about 20 cm in diameter (the same phantom used for Test 1).
Measurement: At least five scans of the same phantom at the same technique are performed. However, the position of the phantom in the gantry should be changed for each scan: place the phantom near the center of the gantry (use this image as the "standard") and then near the top, bottom, left, and right sides of the gantry opening. Set the ROI feature available on the video monitor to about 200 to 300 mm2 (or 200 to 300 pixels) and measure the average CT number of water at the center of the phantom (not the center of the image) in each image.
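In practice the ROI average is read from the scanner console, but the measurement can be sketched with NumPy. The function name, pixel size, ROI area, and image values here are all hypothetical.

```python
import numpy as np

def roi_mean_ct(image, center_rc, area_mm2=250.0, pixel_mm=0.5):
    """Mean CT number inside a circular ROI of the given area (mm^2),
    centered at (row, col) -- the phantom center, not the image center."""
    radius_px = np.sqrt(area_mm2 / np.pi) / pixel_mm
    rows, cols = np.ogrid[:image.shape[0], :image.shape[1]]
    mask = (rows - center_rc[0]) ** 2 + (cols - center_rc[1]) ** 2 <= radius_px ** 2
    return float(image[mask].mean())

# A uniform "water" image (all pixels 0 HU) should average to 0.
water = np.zeros((256, 256))
print(roi_mean_ct(water, (128, 128)))  # 0.0
```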
• TEST 14: CT number versus phantom size
Phantom or equipment: Three or four water-filled phantoms, each of a different diameter (Fig. 18-21). Typical diameters are 30 cm (body), 20 cm (adult head), and 15 cm (pediatric head). Figure 18-21 also shows a very small phantom (8 cm in diameter) that models extremities.
Measurement: A scan of each phantom size at the same technique is performed. The size of the phantoms should cover the sizes of the anatomy used clinically. For each CT scan, set the CT scanner field of view just large enough to view the entire phantom. Set the ROI feature available on the video monitor to about 200 to 300 mm2 (or 200 to 300 pixels) and measure the average CT number of water at the center of each phantom image.
Expected results: The average CT number of water should always be zero, independent of the size of the phantom.
Acceptance limits: The average CT number of water should vary no more than 20 CT numbers from the smallest to the largest phantom.
Possible causes of failure: Some CT scanners have electronic circuitry that compensates for the wide range of x-ray intensities that activate the detectors. The intensity of the x-ray signal depends on the amount of tissue that the x rays penetrate before they strike the detector. Improper compensation for the number of x rays that reach the detector may cause the calibration of CT numbers for water and other materials to shift from the ideal value. A service person is usually required to trace the problem.
Frequency: This should be performed at the time of installation as part of the acceptance test and annually thereafter.
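A simple helper can apply the acceptance limits once the mean CT numbers have been measured. The readings below are hypothetical; the 20-HU limit is the one stated for this test, and the 3-HU limit appears in Tests 15 and 16.

```python
def within_tolerance(mean_ct_values, tolerance_hu):
    """True if the spread (max - min) of the mean CT numbers, in
    Hounsfield units, does not exceed the stated tolerance."""
    return max(mean_ct_values) - min(mean_ct_values) <= tolerance_hu

# Hypothetical readings from the 8, 15, 20, and 30 cm phantoms,
# checked against the 20-HU limit of this test:
print(within_tolerance([-4.0, 1.5, 3.0, 9.0], 20))  # True (spread = 13 HU)
# The same helper applies the 3-HU limit of Tests 15 and 16:
print(within_tolerance([0.0, 2.0, 5.0], 3))         # False (spread = 5 HU)
```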
• TEST 15: CT number versus algorithm
Phantom or equipment: A simple cylindrical plastic container about 20 cm in diameter (the same phantom used for Test 1).
Measurement: Perform a single scan of the phantom. If possible, use the same raw data to construct the image several times, each time using a different reconstruction algorithm or filter. If it is not possible to use the same data for several reconstructions, rescan the phantom using a different algorithm for each image.
Expected results: The average CT number of water should always be zero, independent of the type of algorithm used to reconstruct the image.
Acceptance limits: The average CT number should vary no more than three CT numbers from one algorithm to the next.
Possible causes of failure: Miscalibration of the algorithm. If a recalibration of the CT scanner does not remedy the problem, a service person should be notified.
Frequency: This should be performed at the time of installation as part of the acceptance test and annually thereafter.
• TEST 16: CT number versus slice width
Phantom or equipment: A simple cylindrical plastic container about 20 cm in diameter (the same phantom used for Test 1).
Measurement: A few scans of the water phantom are performed at the same technique; however, the nominal slice width is changed between each scan. The slice widths used should cover the sizes of slice widths used clinically. Set the ROI feature available on the video monitor to about 200 to 300 mm2 (or 200 to 300 pixels) and measure the average CT number of water at the center of each phantom image.
Expected results: The average CT number of water should always be zero, independent of the slice width.
Acceptance limits: The average CT number should vary no more than three CT numbers from one slice width to the next.
• TEST 17: Noise versus mAs and slice width
Measurement: A few scans of the water phantom are performed at different mAs values and different slice widths, with all other parameters held constant. Start at the smallest available mA value with fast scans (low mAs) and increase to the highest mA value with slow scans (high mAs). Set the ROI feature available on the video monitor to about 200 to 300 mm2 (or 200 to 300 pixels) and measure the standard deviation (not the average) of the CT number of water at the center of each phantom image.
Expected results: The noise in the image is proportional to the standard deviation of the CT number measured in a homogeneous medium (water). Generally, the standard deviation of the CT numbers in the ROI (σ) should decrease as the mAs and slice width are increased, keeping all other parameters constant (Brooks & Di Chiro, 1976). At lower mAs values, the dependence is σ ∝ 1/√(mAs × Slice width).
The low-mAs region is called the photon noise region and is statistical in nature. On a sheet of graph paper, plot the standard deviation versus 1/√(mAs × Slice width) (Fig. 18-22). As the mAs value is increased, the standard deviation will decrease; eventually the image noise will no longer be limited by the number of photons. At that point, the noise becomes more or less constant and characteristic of the inherent electronic noise of the CT scanner.
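The two noise regimes can be illustrated with a simple model: quantum noise falling as 1/√(mAs × slice width), added in quadrature to a constant electronic floor. The constants k and electronic_sigma below are hypothetical, not values from the text.

```python
import math

def predicted_noise(mas, slice_width_mm, k=60.0, electronic_sigma=2.0):
    """Model of image noise: quantum (photon) noise falls as
    1/sqrt(mAs * slice width); a constant electronic term, added in
    quadrature, dominates at high mAs. k and electronic_sigma are
    hypothetical scanner constants used only for illustration."""
    quantum = k / math.sqrt(mas * slice_width_mm)
    return math.sqrt(quantum ** 2 + electronic_sigma ** 2)

# Noise falls with increasing mAs, then levels off near the electronic
# floor (about 2 HU in this hypothetical model):
for mas in (50, 100, 200, 400, 1600):
    print(mas, round(predicted_noise(mas, 10.0), 2))
```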
Acceptance limits: The noise curve obtained when the CT scanner was new should not change appreciably with age. As the scanner ages, be especially sensitive to increased standard deviation in the high-mAs portion of the curve, in which the noise is dominated by electronic components.
Possible causes of failure: Anything that can cause the noise of the system to change, such as changed sensitivity of the detectors, increased noise in the detector amplification circuits, or reduced photon output per mA. Notify the service person.
Acceptance limits: None.
Possible causes of failure: If the exposure rate is exceedingly high (>25 milliroentgens per scan), there may be a problem with the collimation system or the x-ray tube shielding. In that case, notify the service person.
Frequency: This should be performed at the time of installation as part of the acceptance test and annually thereafter.