Week 1
I can’t believe it’s only been a week since I first came here. I’ve done so much that getting my ID card feels like a month ago. On Tuesday I met my clinical mentor, Dr. Chiang from Radiology, in the neuro reading room and got the opportunity to observe radiologists analyzing scans. In the neuro reading room, radiologists primarily work with CT, MRI, and sometimes PET scans of the brain and spine.

The first thing I noticed was that every doctor has at least three screens: typically one for displaying scans, one for dictating findings, and one for files and communication. The monitors in the reading room are medical grade; they display more colors and maintain a stable luminance, so images appear with greater accuracy and consistency. I learned that all images are now digitized and displayed on these monitors, and that film is already outdated in the U.S. After images are acquired, they are uploaded into the system and linked to individual patients, so radiologists can pull up a study along with everything they need, such as the time of the scan and all the imaging parameters, without leaving the room. If a patient has scans from another hospital, they can bring a disk with the previous studies. It’s amazing how technology makes the whole process simpler and better.

Another advancement that greatly helps radiologists and other doctors is Fluency for Imaging, an AI-powered clinical documentation and workflow management system. When reading a scan, a radiologist can select a built-in template, or create their own for a given sequence; a template usually contains all the common unremarkable findings that suggest nothing is wrong. If the radiologist finds anything suspicious, they only need to add the finding or replace a few sentences. The speech-recognition feature works really well too; it actually seems to recognize medical terms more accurately than some everyday words.
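Out of curiosity, here is a rough sketch of how that kind of template-driven reporting could work: the template holds default "unremarkable" sentences keyed by section, and only the sections with positive findings get replaced. This is purely my own illustration; the section names and wording are invented, not Fluency's actual templates.

```python
# Hypothetical report template: each section maps to a default
# "unremarkable" sentence. All names and wording are invented.
NORMAL_BRAIN_MRI = {
    "ventricles": "The ventricles are normal in size and configuration.",
    "midline": "There is no midline shift.",
    "hemorrhage": "No acute intracranial hemorrhage is identified.",
    "mass": "No mass or mass effect is seen.",
}

def build_report(template, overrides=None):
    """Start from the normal template, replacing only the sections
    where the radiologist dictated a positive finding."""
    sections = dict(template)          # copy so the template stays intact
    sections.update(overrides or {})   # swap in the dictated findings
    return " ".join(sections.values())

# Unremarkable study: the template is used as-is.
print(build_report(NORMAL_BRAIN_MRI))

# One suspicious finding: only a single sentence is replaced.
print(build_report(
    NORMAL_BRAIN_MRI,
    {"hemorrhage": "A small subdural hematoma is seen along the left convexity."},
))
```

Since Python dicts preserve insertion order, the report sections always come out in the same sequence, which matches how the radiologists' reports read section by section.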
I wonder if there’s a preloaded database of medical terms that is given more weight during recognition. The system does freeze or crash from time to time, though, so maybe it needs a hardware upgrade. Overall, I’m very impressed by these technologies and hope they become available elsewhere in the world. A family member of mine in China got sick and had scans done last year; the local hospital is still using film, and the radiologists type their reports by hand. These technologies would greatly improve efficiency, and probably outcomes, if they were made available there.

With limited prior knowledge of anatomy and disease, I didn’t understand the diagnostic process at first. The radiologists kindly introduced basic anatomy and showed me several common abnormalities such as fractures, edema, and hemorrhage. I understood more and more of the diagnostic reasoning as I saw more cases. It’s very exciting to see effects and artifacts I’ve only learned about in classes and textbooks, and to really understand how much they affect diagnosis.

As an engineering student, when I first see a medical image I immediately think about signal-to-noise ratio, matrix and file size, artifacts, and resolution. On my first day in the reading room I realized that I often forget to see beyond the image and try to understand what the patient is going through. The diagnostic process almost always starts with the medical history, and the final diagnosis can depend on many factors, including age, gender, and symptoms. One doctor mentioned that the common sites of intracranial disease differ between Western and Eastern populations, and that the acquired images often do not include the site common in Western populations, so maybe image acquisition should be personalized as well.

There were a lot of other interesting cases, and I would love to tell you about all of them, but I don’t want to make this post too lengthy. I’m looking forward to seeing more next week and sharing the most interesting ones!
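P.S. Since I speculated earlier about a preloaded medical vocabulary, here is a toy sketch of how that idea could work: rescore a recognizer's candidate transcripts, adding a bonus for each word found in a medical lexicon. Everything here (the vocabulary, the scores, and the boost value) is invented for illustration and is not how Fluency actually works.

```python
# Toy lexicon-boost idea: among candidate transcripts, pick the one
# with the best acoustic score after adding a bonus for each word
# found in a preloaded medical vocabulary. All values are invented.
MEDICAL_VOCAB = {"hemorrhage", "edema", "ventricles", "subdural", "hematoma"}
BOOST = 0.5  # extra score per in-vocabulary word (made-up number)

def rescore(candidates):
    """candidates: list of (transcript, acoustic_score) pairs.
    Returns the transcript with the highest boosted score."""
    def boosted(item):
        text, score = item
        bonus = sum(BOOST for w in text.lower().split() if w in MEDICAL_VOCAB)
        return score + bonus
    return max(candidates, key=boosted)[0]

# "a demon" is acoustically likelier (-1.0 > -1.2), but "edema"
# wins after the medical-term boost (-1.2 + 0.5 = -0.7).
print(rescore([("a demon", -1.0), ("edema", -1.2)]))  # prints "edema"
```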