HTML version prepared with the assistance of D. J. Wagenaar of Joint Program in Nuclear Medicine
Copyright, 1995, Robert E. Zimmerman, Boston MA.
The meeting this year was scheduled one day earlier in the week. It began on a Monday instead of the usual Tuesday. This presumably permitted more attendees to get better airfares by arriving on Saturday for the Sunday Categorical Seminars. Being my usual masochistic self, I arrived on Friday evening to be fully alert for Saturday's committee meetings.
The convention facilities were fairly good: plenty of space for technical and scientific exhibits, but some of the meeting rooms were extremely crowded for popular sessions. The hotel facilities were stretched to the limit in Minneapolis. Many, many complaints were heard. In fact my own room reservation, which had been booked and confirmed in April, was CANCELED on the afternoon of my arrival. The hotel GRACIOUSLY allowed me to stay the first night and on a day-by-day basis through the rest of the meeting. VERY STRANGE.
The weather started off cool and rainy on Saturday and Sunday but cleared for Monday's opening and warmed up throughout the week until by the end of the meeting the temperature was in the 90s. By week's end (Sunday 18 June) the temperature hit the 100s. Nevertheless the city is a very pleasant place to visit this time of year. Wonderful lakes, beaches and bicycle trails are right in town.
I recently became a member of the executive committee to advance ideas our own program has concerning resident training. See the discussion concerning our "passport" at the end of this report. It looks as if a major project of the council will be related to training guidelines, task lists for residents, etc. Devices of this nature seem more necessary as the traditional nuclear medicine resident has changed so much in recent years.
There is much discussion in this Council about the changing nature of resident applicants, lack of nuclear medicine jobs, radiology residents, etc.
Next mid-winter meeting will be in Puerto Rico in early Jan. We were warned that room costs will be about $200/night. Better get out the AAA guide book to find cheaper rooms. SNM will be meeting with ACNP. The scientific program will probably be about FDG and SPECT.
It looks as if the next categorical seminar to be sponsored (sort of) by the C&I will probably be on "filmless nuclear medicine". Details to be determined.
A lengthy discussion took place about the possibility of an electronic peer reviewed journal. The JNM has not been receptive to this idea so far. What seems to be needed is for an enterprising author to post separate cine files related to his paper on the Web and include the http address in the paper. Force the editor's hand, so to speak. Any takers?
NEMA handed out a flyer announcing Supplement 7: The Nuclear Medicine Component of the DICOM Standard.
A new nuclear medicine Image Object Definition has been defined. It appears to be in the approval process at this time. Maybe it will be completed and accepted soon.
ADAC, however, sponsored a "Physicists' breakfast" early Tues. AM. This more than made up for missing their users' meeting. They described the attenuation correction system they have developed and are selling on their analog cameras. They are performing clinical trials on the version for the digital camera now and anticipate offering it on the new cameras soon. They are using the technique of Hutton, Bailey, et al of Sydney originally described for a single head camera. A moving, collimated line source opposite one of the heads scans across the patient. The camera head dedicates a corresponding sliding region of the crystal (about 10% of the crystal area) to receiving the transmission counts. The line sources slide at each projection angle. This method seems to have low cross-scatter contamination and no truncation artifacts while achieving the very desirable goal of SIMULTANEOUS emission-transmission imaging. It does require two line sources, however. It was suggested to me a few weeks before the meeting that a very natural extension of this business with two expensive, expiring line sources is to mount an x-ray tube on the gantry and get a quick transmission scan that way. You will recall that is how all bone mineral scans are now performed. The group at UCSF under Bruce Hasegawa (see No. 62 below) has been studying this very method for several years. Perhaps the instrument manufacturers should seriously study this for routine application in nuclear medicine.
ADAC also had Gerd Muehllehner present at the breakfast to describe results with digital SPECT cameras operating in coincidence mode. The key idea here is to create clusters in the opposing detectors that can be in coincidence. Because the clusters are independent of each other the count rate can go up markedly when compared to an analog Anger camera. Impressive resolution under 4 mm with true count rate of 5 kcps, I think. This may prove to be worth a hard look.
The idea of a physicist's breakfast is nice, and it worked well at the RSNA, but I was upset that I missed a good session on SPECT instrumentation, as this breakfast meeting ran from 7:30 am to 9:30 am. Start earlier? Find a better time? It is a very busy week.
Dr. Frederick J. Bonte presented the Radiology Centennial Hartman Lecture. Dr. Bonte is currently Director of Nuclear Medicine at the University of Texas Southwestern Medical Center. He briefly reviewed the discovery of X-rays by Roentgen and the beginning of nuclear medicine with the discoveries of Rutherford and the Curies, the transit time measurements of Blumgart, the cyclotrons of Lawrence and the flowering of the tracer principle in the late 1940s. He ended with two predictions and a caution. The understanding of the genome will lead to greater use of nuclear medicine in genetic disease and the mysterious functions of the brain will be mapped. The caution was that misperceptions of the dangers of radiation will lead to misplaced restrictions on the advances that can be realized.
The authors investigated the quantification of the wall thickening effect using methods based on maximum value, FWHM measure, square wave model and a hybrid (counts and FWHM). They found that by far the most reliable method was their technique. Dr. Garcia pointed out in the question period that a 3D method, rather than the 2D method used, would be more appropriate.
Using the Siemens 3 head system with an offset fan collimator and a line source for the transmission scan and Tc-99m MIBI it was found that the counts in the inferior wall were increased 25-30%, more than could be expected by attenuation correction alone. The authors postulate scattered counts from the liver were also increased by the attenuation correction procedure. Considerable re-education of the reader is required. Scatter correction may be necessary before this technique can be fully accepted in routine practice.
Picker's system was used in this study of attenuation correction. The authors used Tl-201 in the heart. They also call attention to the unexpected increase in the inferior wall counts, presumably from the scatter from the liver activity. The effect was worse in rest studies. They also noticed that the apex had fewer counts than expected. Others have noticed this, also.
Barber described further advances in the semiconductor readout technology for the ultra-high spatial resolution brain imager being built at the U of Arizona by Harry Barrett's group. The successful test with a readily available Ge array and multiplexer is one step along the way of the development of a CdZnTe detector array and multiplexer. The immediately preceding paper No. 60 described high speed dynamic imaging using the current clinical version of the brain imager. This project is one of the most exciting in nuclear medicine imaging technology today and should be followed carefully by all of us. See Nos. 508 and 1208.
Kalki described further results that this group has obtained using their animal system to perform quantitative imaging with their Ge semiconductor camera. Simultaneous transmission scans were used to correct pig myocardial perfusion scans for attenuation. This long-term project now seems to be in the mainstream of nuclear medicine.
The authors remind us that F-18 imaging with high-energy collimators on standard SPECT systems is severely compromised when compared to PET imaging. Septal penetration will be a dominant influence on quantitative performance. By inference we would expect that lesion detectability will also be severely compromised when compared to true coincidence PET systems. It will be important to define the clinical task very precisely to ensure that the proper PET system is employed. More studies are definitely in order.
Using four point sources, a Hoffman brain phantom and a modified 3D filtered backprojection reconstruction algorithm, the authors were able to successfully correct for motion during acquisition. The method is more successful with multiple-head cameras.
This could be called the low tech approach to attenuation correction. Originally described last year, this paper presents results on 20 patients. This technique is used with rest Tl/stress MIBI. Briefly, Tl-201 is in the heart from rest injection, Tc-99m MAA is in the lungs, and a Tc-99m impregnated elastic body wrap is used in a dual isotope study to get the Tl-201 data, the lung positions and the body contour. Iterative reconstruction is performed to get the attenuation corrected images. Authors claim good accuracy. It works and is inexpensive but how do you market it?
Asymmetric-fan (AsF) collimators allow one to collect all the transmission data (not truncated). These authors described their sequential scan method to get the transmission and emission data. Phantom and patient studies were successful. Of note, the authors described depressed apex counts in both the phantom and patient studies when attenuation correction was performed. This was stated in several places at this meeting. In fact there was a sub-theme of sorts in many quarters that apex and inferior wall counts were so different in patient images that substantial care and further investigation are required before attenuation correction in perfusion studies can be thought to be routine. See also Nos. 442, 768, 769.
Five females over 200 lb. were the subjects. All had Rb-82 PET myocardial perfusion scans with a transmission scan. This data was used to synthesize a Tl-201 myocardial perfusion scan and attenuation map. A one-iteration Chang correction was compared to a 30-iteration ML-EM reconstruction. ML-EM consistently gave lower error and fewer artifacts than Chang with one iteration. Their conclusion was that Chang correction with one iteration is not good enough, even when the attenuation map is known.
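For readers who have not looked at these algorithms lately: Chang applies a single multiplicative correction derived from the attenuation map (with at most one refinement pass), while ML-EM keeps updating the whole estimate. A minimal sketch of the ML-EM update on a toy linear system follows; the matrix, sizes and data are invented for illustration and this is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a SPECT system: 40 projection bins, 16 voxels.
A = rng.uniform(0.0, 1.0, size=(40, 16))
x_true = rng.uniform(1.0, 5.0, size=16)
y = A @ x_true                 # noiseless projection data for this sketch

# ML-EM multiplicative update: x <- x * A^T(y / (A x)) / (A^T 1)
x = np.ones(16)                # flat initial estimate
sens = A.sum(axis=0)           # sensitivity term, A^T 1
for _ in range(30):            # 30 iterations, as in the summary above
    ratio = y / np.maximum(A @ x, 1e-12)
    x *= (A.T @ ratio) / sens
```

The multiplicative form keeps the estimate nonnegative by construction, something a single Chang pass does not guarantee.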
In phantom experiments with Tl-201 and Tc-99m and varying amounts of background, cardiac lesion size estimates were better with attenuation correction.
In a Monte-Carlo phantom experiment the authors determined that use of opposing projections in situations where truncation occurs can improve reconstruction quality when ML-EM reconstruction is performed compared to FBP reconstruction. This was true even though the opposing views may be inconsistent because of the truncation problem. Taking account of depth-dependent blur and linearly weighting the overlap of the conjugate views seemed to produce fewer artifacts.
In yet another phantom experiment involving breasts and no breasts on a torso phantom with single head camera and planar transmission source the authors convinced themselves that quantitative accuracy was superior when attenuation was corrected.
This seems to be a description of GE's technique for the Optima (L-shaped head). Sequential scanning was performed.
The authors showed that it was possible to have degradation of spatial resolution while performing attenuation measurements but their simulation was not able to predict the pile up errors well enough to characterize the errors quantitatively.
I really did not understand this paper.
The authors studied (with Monte Carlo tools) the effect of Tc-99m contamination on the acquisition of a transmission scan using Gd-153. Sequential, rather than simultaneous, scanning was their focus. Line source strength of 100-150 mCi is required to keep error in attenuation below 10%.
This was an interesting attempt to obtain an attenuation corrected heart image using emission data only. ML-EM algorithm was used. Encouraging results were claimed. My naive brain would say there is not enough information in the data to sort it all out. I am often wrong.
A Monte Carlo study. Tl-201 is the isotope. One-, two- and three-head camera systems were simulated. The conclusion seemed to be: for one head use 180°; for two heads, 180° and 360° perform about the same; and for a three-head system use 360°.
The authors point out that when attenuation AND scatter corrections are performed on 180 and 360 degree acquisitions there is little difference in contrast performance. This is another of the papers that show the importance of scatter correction as well as attenuation correction.
Nine real volunteers with Tl-201 myocardial perfusion scans, and a Tc-99m point source were subjected to axial motion. Using cross-correlation of the point source good motion correction was obtained. There was trouble with rotations, however.
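The cross-correlation trick is simple enough to show in a few lines: slide the axial profile of the point source in one projection past the reference profile and take the lag that maximizes the correlation. The toy one-dimensional data here is my own construction, not the authors' method in detail.

```python
import numpy as np

# Toy axial profiles of a point source in two projections; the second is
# shifted by 3 pixels to simulate axial patient motion.
n = 64
ref = np.exp(-0.5 * ((np.arange(n) - 30) / 2.0) ** 2)
moved = np.roll(ref, 3)

# Cross-correlate and take the lag of the peak as the estimated shift.
lags = np.arange(-n + 1, n)
xcorr = np.correlate(moved, ref, mode="full")
shift = lags[np.argmax(xcorr)]
print(shift)  # 3
```

As the authors found, a translation-only estimate like this cannot capture rotations; that would need additional markers or a richer motion model.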
The authors looked at the beating motion of the heart and how it affected the perfusion images. A Monte Carlo model of the heart emissions was constructed. No heart rotation was built into the model, just size changes with shortening effects were included. Perfusion defects were noted especially at the apex. Should we correct for attenuation, scatter AND heart and patient motion?
Should you vary the speed of rotation while acquiring a perfusion study? This paper suggests there is some small gain to be made if you dwell longer at the good parts and speed up over the bad parts. Sort of a modified 180 degree collection?
Three points can be made concerning this excellent paper. One, do not forget to correct for the lead x-rays from Tc-99m on the collimator that contribute 10-20% of the Tl-201 counts. Two, characterize the shape of the blur function for Tc-99m scatter and for lead x-rays in the Tl-201 window and apply the filter before subtracting. Three, limit the amount of Tc-99m to no more than about 8 mCi or noise will get you. Simultaneous Tl-201 and Tc-99m perfusion imaging should work very well if these recommendations are followed.
This was the strongest and most direct evidence that scatter compensation is required to get the activity estimates correct in myocardial perfusion imaging. Contribution from the liver and stomach can really mess up the relative perfusion in the walls and apex. This was a simulation study of Tl-201 perfusion in a heart with torso.
List mode collection of energy and position is performed. By using the differential absorption of the two In-111 photopeaks attenuation data is obtained. Scatter information is obtained by spectral fitting methods. Performance is good enough to determine source depth to 1-2.5 cm accuracy. SPECT performance is limited because of noise.
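The depth estimate follows from the different attenuation of the two photopeaks: the measured 171/245 keV count ratio falls off exponentially with depth, so inverting the exponential gives depth directly. A sketch with illustrative coefficients; the µ values and the function name are mine, not the authors'.

```python
import math

# Illustrative narrow-beam attenuation coefficients in water (1/cm) for
# In-111's 171 and 245 keV photopeaks; approximate values only.
MU_171 = 0.15
MU_245 = 0.13

def depth_from_ratio(r_measured, r_unattenuated):
    """Source depth (cm) from the measured 171/245 keV count ratio.

    The 171 keV photons are attenuated more strongly, so the ratio
    falls as exp(-(mu1 - mu2) * d) with depth d.
    """
    return math.log(r_unattenuated / r_measured) / (MU_171 - MU_245)

# Sanity check: a source at 10 cm scales the ratio by exp(-0.02 * 10).
r0 = 1.2  # hypothetical unattenuated emission ratio
r = r0 * math.exp(-(MU_171 - MU_245) * 10.0)
print(round(depth_from_ratio(r, r0), 3))  # 10.0
```

Because the two coefficients differ by only about 0.02/cm, small count errors in the ratio translate into centimeter-scale depth errors, which is consistent with the 1-2.5 cm accuracy quoted.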
The room was overflowing for this session and I was not able to get in. Gerd probably described what he did at the ADAC Physicists' breakfast Tuesday morning (see above). It really appears to be a good idea and clever use of a digital camera. Local cluster in coincidence with opposing local cluster. There can be simultaneous coincidence between local clusters. This has the potential to be dramatically better than the Anger NaI PET cameras of the 1960s and 1970s. But not much better, in principle, than the PCII from Gordon Brownell and Charley Burnham. The new camera may have better spatial resolution but will have worse count rate characteristics, I would expect.
The authors describe a new method of acceptance testing of fan-beam collimators. Multiple line sources are placed in known positions along the axial dimension of the fan. A static image is acquired and the projection of each line source onto the crystal is determined. An ensemble of sampling rays from the source position to the projected position is determined. A measure of the quality of the focus line is the points of intersection of the rays. This sounds confusing but if it works well it would solve a need that is increasingly important.
This is a technique that performs longitudinal tomography or multiple pinhole tomography in one dimension. It seemed to have poor depth resolution. It did seem to be quite versatile and flexible, but I doubt it is applicable to real problems. See No. 770 in poster session.
This is a high count rate probe for counting of annihilation radiation in a non-coincidence mode. The scintillator LSO meets these needs. A count rate of 3 Mcps with 8% loss was achieved.
A thin plastic scintillator is coupled to a bundle of 19 hexagonally packed optical fibers. A multianode PMT with Anger logic gives position information. System resolution is about 1 mm. The FOV is about 1 cm.
Two cameras were described. Both used plastic scintillators. One used a multianode PMT for position readout while the other used a flexible fiber optic imaging bundle. Good sensitivities were reported.
If depth of interaction can be calculated then thicker crystals may achieve spatial resolutions as good as thin crystals. There are several reasons why it may not work in a practical way. At least some theory and optimism were shown here. We await concrete results.
Theoretical bounds contribute to an objective assessment of imaging systems without the need for simulations. The Cramer-Rao bound and the Barankin bound are two such bounds. The Barankin bound has been shown to be a better predictor in high noise situations. In this paper the validity of both bounds is tested in a medical imaging task: prediction of parameter variances in the ML-estimation of activity concentration and size of a Gaussian sphere in the presence of Gaussian white noise. The authors show that the Cramer-Rao bound fails as noise increases; the Barankin bound remains valid in an intermediate range of noise and predicts performance where the Cramer-Rao bound fails, though it too fails to predict parameter variance at very high noise.
Estimation of activity within an ROI in SPECT is a common problem in nuclear medicine. Resolution and noise limit the accuracy and precision of the estimate, however. Huesman showed in 1984 that the activity in an ROI could be computed entirely with the projections. These authors extend this work and show that resolution recovery performed on the projections can significantly improve the accuracy and precision of the activity estimate.
The ML estimator is useful for unbiased, efficient parameter estimation for quantitation tasks in nuclear medicine. However in high noise situations the ML estimator tends to have problems, i.e., parameter variance increases beyond the expected theoretical limit. It is important to know where this regime begins. This regime was investigated and characterized by the authors.
A rather general method of quantitation was described to perform compartmental analysis on renal dynamic scans. The total injected dose is assumed to be in blood, kidney and bladder in each frame of the study. Solving 90 simultaneous equations allows the determination of the scale factors for each organ's contribution at any given time. The corrected time activity curves are then used by compartmental modeling software to solve for physiological parameters. All data is from the image. This general formulation can be applied to other organ systems.
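The heart of the method is an overdetermined linear system: each frame's total counts are modeled as a weighted sum of the organ contributions, and stacking the frames gives the authors' 90 simultaneous equations. A least-squares sketch with invented numbers (a toy 30-frames-by-3-organs version, not the authors' software):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical organ contribution curves (blood, kidney, bladder) for
# 30 frames, and the unknown per-organ scale factors to recover.
basis = rng.uniform(0.5, 2.0, size=(30, 3))
true_scale = np.array([1.4, 0.8, 2.1])
totals = basis @ true_scale        # measured frame totals (noiseless)

# Solve the stacked equations in the least-squares sense.
scale, *_ = np.linalg.lstsq(basis, totals, rcond=None)
print(np.round(scale, 3))          # recovers [1.4, 0.8, 2.1]
```

The recovered scale factors would then feed the corrected time activity curves into the compartmental modeling step.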
In conjugate view counting the thickness of the source organ is often ignored, leading to some error in activity quantitation. The authors investigated the magnitude of this error for I-131 and In-111 quantitation. They found that scatter subtraction is very important when applying source thickness corrections. For scatter correction they used the method of Ogawa with triple energy window corrections for I-131 and quadruple energy window corrections for In-111.
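For reference, the standard geometric-mean (conjugate-view) estimate with a source-thickness factor looks like this; the function and parameter names are mine, and the thickness factor f = (µt/2)/sinh(µt/2) is the usual correction for self-attenuation across the source organ.

```python
import math

def conjugate_view_activity(i_ant, i_post, mu, body_t, src_t, cal):
    """Conjugate-view activity estimate with source-thickness correction.

    i_ant, i_post: scatter-corrected anterior/posterior counts (cps)
    mu:            attenuation coefficient (1/cm)
    body_t:        total body thickness along the view (cm)
    src_t:         source-organ thickness (cm)
    cal:           system calibration (cps per unit activity)
    """
    x = mu * src_t / 2.0
    f = x / math.sinh(x) if x > 0 else 1.0  # thickness factor, <= 1
    return f * math.sqrt(i_ant * i_post) * math.exp(mu * body_t / 2.0) / cal
```

Since f is less than 1, ignoring the source thickness overestimates the activity; and, as the authors stress, the counts must be scatter-corrected first or the thickness correction itself goes wrong.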
The author pointed out the convenience of programming and processing on user-friendly PCs for such user-unfriendly systems as GE's Starcam computers. These clinical computers are often unavailable at convenient times, lacking good debuggers, etc. However object oriented programming techniques with computer networks can bring programming and processing to the desktop.
The goal was to optimize spectral resolution by varying the size of detector elements, knowing the parameters of charge production and transport within the detector material (CdZnTe). The optimum size seems to be about 400 µm for 1.5 mm thick detector elements.
They described an image manipulation package based on IDL and home grown modules. The goal seems to be dose calculations for internal emitters, given input from CT/MRI/PET/SPECT images.
A fuzzy match of sequential images of monoclonal antibody images is performed to aid quantitative comparison. The fuzzy match uses image row maxima in one image compared to regional sum of square differences in comparison image. Displacement in x, y and rotation amount is determined. Visually accurate registration was achieved in 97% of 2920 limb image pairs from 292 patients.
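I did not catch the details of the fuzzy weighting, but the core of any such registration scheme is a search over displacements minimizing a dissimilarity measure. Here is a plain sum-of-squared-differences version, translation only; the rotation search the authors also perform is omitted, and the function is my own sketch.

```python
import numpy as np

def best_shift(ref, img, max_shift=5):
    """Exhaustive search for the integer (dy, dx) that minimizes the sum
    of squared differences between the shifted image and the reference."""
    best, best_ssd = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            ssd = float(np.sum((shifted - ref) ** 2))
            if ssd < best_ssd:
                best_ssd, best = ssd, (dy, dx)
    return best
```

An exhaustive search like this is only practical for small images and small shifts, which is presumably why the authors reduce the problem with row maxima and regional sums first.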
The authors successfully extracted compartmental model parameters on a new presynaptic dopamine transporter radiotracer in 4 subjects. SPECT scans were obtained every 5 minutes for 150 minutes and blood samples were obtained over this period. I would hope that these types of measurements could be made routinely in nuclear medicine in the future.
This team of hospital, university and industrial scientists found that principal component analysis provided better cine images and compressed the data significantly in a study of 50 consecutive patients.
I found the posters a bit more disorganized than usual this year. For some reason many were pulled out of numerical order for "walking poster sessions", which made my own private roaming sessions more disorganized than usual. There was ample space (too spread out?) for the posters. The few I managed to see are outlined below.
The authors were interested in crossover corrections for simultaneous Tl-201/Tc-99m myocardial perfusion imaging. They used 3 energy windows and found that substantial variation in correction factors was required for the various projection angles. This was in contrast to the results presented in paper No. 241 by Moore et al where correction factors were found to be nearly independent of projection angle.
An uncollimated flood source used for transmission scanning leads to substantial problems from scattered radiation in the patient. The authors found that by correcting the images for scatter using the TEW method transmission measurements equivalent to fan-beam collimators and line sources could be obtained.
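For readers unfamiliar with TEW: the scatter under the main window is approximated by a trapezoid whose side heights come from two narrow sub-windows flanking the photopeak. A minimal per-pixel sketch (function and parameter names are mine):

```python
def tew_primary(c_main, c_left, c_right, w_main, w_sub):
    """Triple-energy-window (TEW) scatter subtraction for one pixel.

    c_main:  counts in the main photopeak window (width w_main, keV)
    c_left:  counts in the narrow sub-window below the peak (width w_sub)
    c_right: counts in the narrow sub-window above the peak (width w_sub)
    """
    # Trapezoidal estimate of the scatter counts under the main window.
    scatter = (c_left / w_sub + c_right / w_sub) * w_main / 2.0
    return c_main - scatter
```

For example, with a 28 keV wide main window, 3 keV sub-windows, and 30 and 12 counts in the sub-windows, the scatter estimate is (10 + 4) x 14 = 196 counts.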
It is finished and it works. Count rates to 20 kcps, energy resolution of 12%, spatial resolution of 6.2 mm and volume sensitivity of 0.6 kcps/µCi/cm. Congratulations!
Cortical regions cleared before subcortical regions or the cerebellum. This has important clinical ramifications.
Dual isotope, single isotope (DSIA) with 140 keV and 511 keV photons can be done with special parallel hole and fan beam collimators. With the parallel hole collimator 30% of counts are from penetration and with fans it is 20%. Resolution at 511 keV is 14.5 mm for parallel hole and 7.9 mm for fans. They anticipate useful dual isotope brain and cardiac imaging will be possible.
In a very humorous but educational poster Larry Zeng made some useful points about F-18 imaging on conventional SPECT systems. He found that even with a special 511 keV collimator corrections were needed for poor collimator performance. ART reconstructions were nearly equivalent to the more time consuming EM reconstructions.
SPRINT and ASP are stationary ring systems that use one dimensional collimators. This poster showed how one dimensional collimators could be constructed using lead foil with 2% antimony bonded to rigid polymethacrylimide foam. Good uniformity of response could be achieved.
TEW was shown to help clean up crossover of Tc-99m into Tl-201. No noise analysis or limits to the amount of Tc-99m that could be used were presented. The effect of the lead x-rays was not discussed. See No. 241 and No. 752 for further information on Tl/Tc simultaneous imaging.
Some fan-beam systems truncate the transmission projections. This will probably reduce the accuracy of the attenuation map. It has been previously shown that this is a small effect if one is interested only in the effect on a central region (like the heart). These authors investigated inclusion of another piece of information into the reconstruction and attenuation correction process, namely the exact body contour. As I understand it, this should be known because in any proper set-up the detectors will closely hug the patient, thus providing the exact contour information. They found that when this information is included, the ROC curves of observer performance for defect detection were better. The images were more consistent. Since I have noticed that the orbits do not always hug the body tightly, I wonder how relevant this information is in practice.
The authors of this poster compared the Picker STEP system to parallel hole perfusion imaging to N-13-NH3 PET. They found that in 11 ROI segments the standard scans correlated poorly with the PET (r=-0.0014) while the STEP scans correlated better (r=0.44). The apex was of reduced intensity compared to PET. Still room for improvement, eh?
This group has been advocating a sequential transmission scan after the emission scan is completed. Here they were investigating just how short the transmission scan could be and still have adequate statistics. With a 3 GBq (81 mCi) Tc-99m line source they found that a 2 min. scan was sufficient, giving about 10% average error in the heart region. A 10 min. scan gave 5% uncertainty in the heart region.
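Their numbers are roughly consistent with simple Poisson statistics: relative error goes as 1/sqrt(counts), so five times the scan time buys only a factor of about sqrt(5) ≈ 2.2 in error, close to the 10% to 5% improvement they report. A back-of-the-envelope sketch; the count rate and function are illustrative, not from the paper.

```python
def scan_time_for_error(rate_cps, target_rel_error):
    """Seconds of counting at rate_cps needed so that the Poisson
    relative error 1/sqrt(N) reaches target_rel_error."""
    counts_needed = 1.0 / target_rel_error ** 2
    return counts_needed / rate_cps

# Toy transmission rate of 100 cps through the heart region:
# halving the error from 10% to 5% quadruples the required time.
t10 = scan_time_for_error(100.0, 0.10)
t05 = scan_time_for_error(100.0, 0.05)
print(t10, t05)
```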
This is an elaboration of the poster above. Or is it the other way around? Anyway, the authors assert that all detectors are fully available for the emission scan, there is never any crosstalk into the emission scan, noise in the transmission data contributes minimally to the final image noise and transmission maps are not optimized anyway. What they do not say is that if there is patient movement the whole procedure may be wiped out.
See No. 443, I think they are related. This is seven pinhole tomography updated. Dedicated computer, better correction maps, full energy spectrum collection but the basic limitation of seven pinhole collimation remains: poor depth resolution.
The author advocates the use of non-uniform transmission sources -- less activity at the edges of the patient. This would reduce the high count rates at the edges where there is high transmission. Lower noise transmission images would be the result.
The proposal is to use two detectors adjacent to each other to avoid truncation artifacts in the transmission data. Field of view at the patient could be up to 23.5 cm radius.
If you assume that the crosstalk effect can be represented as an object-dependent addition to the direct image and therefore be represented as an image-degrading factor, a "crosstalk deblurring" filter can be derived. The authors demonstrated lower %RMS error, lower noise and better contrast in the Wiener-filtered images of synthesized heart images based on segmented MRI images.
BTA McKee, MJ Chamberlain, RB Jammal, Ottawa
The authors computed the scatter for 511 keV SPECT, 140 keV SPECT and PET. They conclude that the three are very different, that there is information in the scattered background, and that deconvolution methods may be able to recover some of it. Well known, but this serves as a refresher.
There was also an "Internet Tutorial" organized by Jim Halama for SNM attendees which seemed to be well attended.
LUNIS will soon be on the WWW. Visitors to this booth were able to get a preview of how it works. You will be impressed.
Michael Tobin again had an impressive multimedia computer exhibit showing how home computers can be used as very effective teaching tools. This year the Amiga demonstrated WWW browser capabilities.
The usefulness of a local and regional network as a web server was graphically demonstrated to all who spent a few minutes at the booth.
The exhibit won an honorable mention. There was some neat software here to view and process NM images. An example of a low cost viewing station.
This exhibit won a third prize. I found this very educational even if it was not on the Web. The program STELLA was being exhibited. It is an interesting program for compartmental modeling. I wished I had more time to learn from it.
This was a very informative exhibit describing the machine under development by Harry Barrett's group. FASTSPECT is a prototype brain imager that uses multiple pinholes that are stationary. Nothing moves, in fact. Images were available over the Web. It was one of those exhibits you wanted to explore for an extended period of time. I suppose it is still available on the Web. Have to go surfin'.
Ever wonder what the residents actually do besides read films while they are in the clinic? We did, and we invented a "task list" in the form of a small booklet we called a passport. Each of our hospitals has a "visa" that contains the task list appropriate for that nuclear medicine unit. As the resident completed tasks the clinic director or his designee would sign the visa.
To make administration easier we invented an electronic form of the passport and put it on the Countway Library of Medicine web server. It is accessible from all the hospitals, has security features built in and cannot be lost, folded, spindled or mutilated.
This exhibit featured "live" access to the passport demonstrating security features and content.
I was informed by Siemens that they are getting crystals from Russia now. It will be good to break the French Connection in crystals.