Homeless Scholar Blog ~ DIAGNOSTIC RADIATION REVISITED

“X-rays…can produce profound effects upon the vital processes. We know the fluids of the body are passing in and out through the walls of the blood vessels, therefore, some of the ions produced by the actions of X-light and radioactive substances will be constantly carried by the currents through the cells of the walls of the vessels, and their destructive action will there be marked.” – William Herbert Rollins, “Notes on X-Light,” 1905

Ionizing radiation can cause cancer, but what is the safety limit? At what point does it become something to be concerned about? Do we even know with any exactitude how safe X-rays are?

The Fukushima disaster reminded the world of the hazards of nuclear power and radiation in general. Other news stories in recent years have served to make the public more aware of the risks of medical radiation as well. A glaring example is the massive radiation overdoses given to over 200 neurology patients at the Cedars-Sinai Medical Center in Los Angeles in 2009. This was followed by a federal class-action lawsuit against GE Healthcare, maker of the scanners, which allegedly lacked adequate safety features. In another California hospital, a toddler received 151 consecutive CT head scans. In general, Americans have been receiving increasing amounts of diagnostic radiation, much of it considered unnecessary by some medical professionals.

Ionizing radiation is classified as a carcinogen by the FDA, and children are the most vulnerable, due to their size and lengthy remaining life span. (However, it has been shown that the risk in middle age may be twice as high as previously estimated (Shuryak & Brenner, 2010).) A later study estimated that the four million annual pediatric CT scans performed nationally will cause 4,870 cancers in the future (Miglioretti et al, 2013). The researchers claim that over 40% of these cancers could be prevented by reducing the highest 25% of doses to the median.
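
To make that last claim concrete, here is a minimal sketch, on purely synthetic numbers, of what “reducing the highest 25% of doses to the median” does to a skewed dose distribution (the lognormal parameters are hypothetical, not the study’s data):

```python
# Illustrative only: simulates the "reduce the top 25% of doses to the median"
# intervention on a synthetic, right-skewed dose distribution.
import numpy as np

rng = np.random.default_rng(0)
doses = rng.lognormal(mean=2.0, sigma=0.6, size=10_000)  # synthetic doses, mSv

median = np.median(doses)
p75 = np.percentile(doses, 75)

# Cap every dose above the 75th percentile at the median
reduced = np.where(doses > p75, median, doses)

print(f"mean dose before: {doses.mean():.2f} mSv")
print(f"mean dose after:  {reduced.mean():.2f} mSv")
```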

There is a surprising amount of ignorance among medical personnel concerning CT doses. Yale radiologist Howard Forman and colleagues assessed knowledge of radiation risk among patients getting CTs as well as physicians who ordered and read them. Questions about informed consent were also asked. In Forman’s words, “Patients are not given information about the risks, benefits, and radiation dose for a CT scan. Patients, ED physicians, and radiologists alike are unable to provide accurate estimates of CT doses regardless of their experience level.” This research was done back in 2004. In May of 2017, A. Gupta and N.S. Bajaj, writing in the Journal of Nuclear Cardiology, stated that a typical patient undergoing cardiac perfusion (nuclear medicine) imaging in the U.S. receives a 20% higher radiation dose compared to other countries studied by the IAEA (International Atomic Energy Agency). A partial update of Forman’s work in 2016 showed that ~ 45% of emergency department physicians and nurse practitioners surveyed “could not correctly identify which of six common imaging modalities used ionizing radiation.” Also, half of the attending physicians and three-quarters of the residents were uncomfortable explaining the amount of radiation in imaging tests to patients. While this newer, albeit limited, information is good to know, a more comprehensive survey with a higher response rate is clearly needed to determine whether progress has been made in educating health care professionals in this area.

To make matters worse, a patient’s dose will double if a contrast injection is given for the examination. Often, additional (“multiphase”) CT scans are done that are completely unnecessary. This overscanning can involve, for example, imaging before and after contrast, or delayed imaging. A 2011 study found that many patients were receiving unindicated extra phases that at least doubled the standard radiation dose but were associated with no clinical benefit.

Units of measurement for radiation can be confusing because the older, pre-SI units were never fully retired. Sieverts and grays are large units relative to diagnostic doses, so in radiology journals you will generally see the unit “mSv” for millisievert or “mGy” for milligray. The latter refers to the dose absorbed by a given tissue; the former, to the biological impact of that absorption. Sometimes the older units “rem” and “rad” (expressing the same information: 1 Sv = 100 rem, 1 Gy = 100 rad) are still used. A typical chest X-ray is only 0.02 to 0.1 mSv, whereas a CT scan of the abdomen is usually 8-10 mSv. (Yet, even today, you will hear some medical personnel claim that a CT scan is “like getting a few chest X-rays”. I have heard this myself.)
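
For readers who like the arithmetic spelled out, here is a small sketch using the mid-range figures above (exact doses vary by machine and protocol, so treat these as illustrative):

```python
# Back-of-the-envelope converter for the units discussed above.
# 1 Sv = 100 rem and 1 Gy = 100 rad; dose figures are from the text.
CHEST_XRAY_MSV = 0.06   # mid-range of the 0.02-0.1 mSv figure above
ABDOMINAL_CT_MSV = 9.0  # mid-range of the 8-10 mSv figure above

def msv_to_rem(msv: float) -> float:
    """Convert millisieverts to rem (1 mSv = 0.1 rem)."""
    return msv * 0.1

equivalents = ABDOMINAL_CT_MSV / CHEST_XRAY_MSV
print(f"One abdominal CT ~ {equivalents:.0f} chest X-rays")  # ~150, hardly "a few"
print(f"9 mSv = {msv_to_rem(9.0):.1f} rem")                  # 0.9 rem
```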

Unlike in Europe, in the U.S., typically little is communicated to patients about the radiation doses they are about to receive. The emphasis here is on the possible (but rare) adverse effects of the contrast injection. (When I expressed concern about the radiation dose of a cardiac perfusion test I was about to receive, I was told by the physician, “It just goes right through you.” Not really: the injected tracer carried 28 mCi (millicuries) of activity, corresponding to an effective dose of approximately 11 mSv.) Brochures could be made available to patients in addition to spoken comments by the doctor (but of course, this assumes that the latter knows the real risk to begin with!). “Informed consent for radiological examination,” writes cardiologist Eugenio Picano, “is often not sought, and even when it is, patients are often not fully informed, even for considerable levels of radiation exposure and long-term risk.”
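
As a hedged illustration of how administered activity translates into effective dose: the dose coefficient below is back-solved from the ~11 mSv figure above and varies considerably by radiotracer and protocol, so treat it as a placeholder, not a published constant.

```python
# Sketch: converting administered activity (mCi) to effective dose (mSv).
MBQ_PER_MCI = 37.0               # 1 millicurie = 37 megabecquerels (exact)
DOSE_COEFF_MSV_PER_MBQ = 0.0106  # illustrative, back-solved; tracer-dependent

activity_mbq = 28 * MBQ_PER_MCI                       # 1,036 MBq
effective_dose = activity_mbq * DOSE_COEFF_MSV_PER_MBQ
print(f"{activity_mbq:.0f} MBq -> ~{effective_dose:.0f} mSv")  # ~11 mSv
```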

One hundred millisieverts (10 rem) is generally considered the safe limit of low-level radiation. The dominant theory in radiology is that of Linear No Threshold (LNT), which posits that risk rises proportionally with dose and that there is no safe dose. This theory is controversial. While it is accepted, for example, by the U.S. National Research Council and the Environmental Protection Agency, it is rejected by the Health Physics Society and the French Academy of Sciences. In addition to a lifetime limit of 100 mSv, the HPS recommends against more than 50 mSv in one year (above what would normally be received from the environment, so-called “background radiation”). German researchers who have studied DNA repair in response to radiation recently concluded that “extrapolating risk linearly from high dose as done with the LNT could lead to overestimation of cancer risk at low doses.” Proponents of LNT counter that the evidence against it is inconsistent and that it is a matter of good patient care for the model to be retained. In practice, the general principle is that doses should be “as low as reasonably achievable” (ALARA); however, as pointed out by radiologist Rebecca Smith-Bindman of UC San Francisco, “there are no guidelines to indicate what doses are reasonable or achievable for most types of CT.”

In addition, there is the problem of excessive physical scanning per se. In a study of multiphase scanning and unnecessary exposure, Guite et al (2011) write that “a large proportion of patients undergoing abdominal and pelvic CT scanning receive unindicated additional phases that add substantial excess radiation dose with no associated clinical benefits.” [emphasis added]

An effort is being made to track the history of patient radiation exposure by means of an electronic card. Usually, this information is unknown, and some patients have already received an unacceptably high amount of radiation from previous diagnostic scans. With this piece of plastic, which resembles an ATM card, relevant dosage information can be routinely recorded and accessed in order to avoid pushing patients into an unsafe zone with additional exposure. If dosage is seen from the card to be high, alternative tests can be done. This so-called Smart Card project was developed several years ago by the IAEA.
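
The record-keeping involved is simple enough to sketch. The class below is hypothetical (the actual IAEA system’s data format is not described here); it merely shows the bookkeeping such a card implies, using the 100 mSv lifetime figure discussed above as the flag threshold:

```python
# Hypothetical sketch of the dose-tracking a "Smart Card" implies:
# log each exposure, keep a running total, flag patients near a threshold.
from dataclasses import dataclass, field

@dataclass
class DoseCard:
    patient_id: str
    exposures: list = field(default_factory=list)  # (procedure, dose_msv) pairs

    def record(self, procedure: str, dose_msv: float) -> None:
        self.exposures.append((procedure, dose_msv))

    def cumulative_msv(self) -> float:
        return sum(dose for _, dose in self.exposures)

    def over_limit(self, limit_msv: float = 100.0) -> bool:
        return self.cumulative_msv() >= limit_msv

card = DoseCard("patient-001")
card.record("abdominal CT", 9.0)
card.record("cardiac perfusion scan", 11.0)
print(card.cumulative_msv(), card.over_limit())  # 20.0 False
```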

So what, then, is the safety limit of human X-ray exposure? Well, we know, as already stated, that ionizing radiation is a carcinogen, and that there is a general professional consensus that ~ 100 mSv puts one in a higher risk category, but for amounts below that, “low-level” radiation, the picture is unclear. And whether sporadic media reports about this over the past few years have had any effect is a matter of speculation. Some companies have provided dose-reduction software, and there are the Image Gently and Image Wisely campaigns to reduce exposure in children and adults, respectively. But there is still too much ignorance about the risk, which is more serious for children (as well as for small, relatively young adults). And there is insufficient attention being given to multiphase exams, many of which are unnecessary. Finally, there is the matter of communicating the risk. In Europe, this is done clearly on complete consent forms. In the U.S., patients are generally given no idea of the large amount of radiation they will receive from CT scans or nuclear medicine procedures.

Patients should ASK if the test is really necessary. Sometimes it can be replaced by a safer alternative, such as MRI or even ultrasound (Martin & Semelka, 2006).

Until the theoretical controversy about lower doses has been resolved, it makes sense to be cautious about exposure. The public, and health care personnel as well, need to be made more aware of the risks to the patient, with completely informed consent, including specific dosage, and with clear explanations. (Again, brochures can be made available to supplement this.) Dose reduction does occur, but it needs to be more widespread. Also, more research needs to go into individual susceptibility to radiation and how best to test for it. This last point was made back in 1905 by the Boston dentist and radiographer quoted at the beginning of the article, William Rollins; and, with the exception of children (and if they’re lucky, small adults), it is still one largely ignored by the American health care system.

~ Wm. Doe, PhD – September, 2017

SELECTED REFERENCES

William H. Rollins (1905). Notes on X-Light. Boston Medical and Surgical Journal, 153 (Aug 10): 155-7.

I. Shuryak & D.J. Brenner (2010). Cancer risk after radiation exposure in middle age. J Natl Cancer Inst; Nov.; 102(21): 1628-35.

D. L. Miglioretti et al (2013). The use of CT in pediatrics and associated radiation exposure. JAMA Pediatrics. Aug. 1. 167(8): 700-7.

D.J. Shah et al (2012). Radiation-induced cancer: A modern view. Br J Radiol, Dec; 85(1020): e1166-73.

Martin, D.R. & R.C. Semelka (2006, May 27). Health effects of ionising radiation from diagnostic CT. Lancet, 367(9524): 1712-14.

K.M. Guite, J.L. Hinshaw, et al (2011). Ionizing radiation in abdominal CT: Unindicated multiphase scans are an important source of medically unnecessary exposure. J Am Coll Radiol, Nov; 8(11).


Homeless Scholar Blog: FORENSIC PATHOLOGY

“About 80% of murdered persons are killed by members of their own family or by close acquaintances. Husbands kill wives; wives kill husbands; both kill their children; and children murder their parents. Consequently, the deliberately planned extermination is not too common, but an investigation can never be certain unless every possible factor has been scrutinized meticulously.” – LeMoyne Snyder, Homicide Investigation


The term “forensics” has traditionally denoted matters “of, or relating to, courts,” but more recently it has come to be synonymous with “forensic science”. (How scientific it always is has been questioned. A 2009 New York Post article observes that it has “become an umbrella term that encompasses disciplines of skill rather than real science.”) As a (putative) science, it encompasses many sub-fields, from dentistry to psychology to the study of plants and insects. This brief article will only address forensic pathology, the application of medicine to crime; that is, “the problems related to unnatural death and various types of trauma to the living” (Eckert, 1997).

A classic work in this field is pathologist LeMoyne Snyder’s Homicide Investigation. Long out of print, it has been “superseded” by a more comprehensive and even gorier handbook by former Bronx, NY police official Vernon Geberth, Practical Homicide Investigation, of which one waggish reviewer wrote, “Don’t leave your copy lying around: a cop might steal it.” Both books cover topics basic to the field of forensics, such as estimating time of death, examination of blood stains, identification of dead bodies, homicide due to stabbings and gunshot wounds, the presentation and transportation of firearms evidence, deaths due to asphyxiation and drowning, examination of buried bodies, deaths due to poisoning, examination for suspected sexual assault, and so on.

Popular views of forensic science are reflected in the so-called “CSI (Crime Scene Investigation) Effect,” which refers to what is thought by some to be an increased demand by jurors for courtroom evidence due to their exposure to crime investigation shows on TV, the end result being an increase in the number of acquittals. The CSI show is full of unrealistic material. In fact, one forensic scientist, Thomas Mauriello, claims that ~ 40% of the techniques depicted on the show don’t even exist. Moreover, elements of uncertainty are ignored in the drama, creating the illusion of lab results being absolutely valid and reliable. Whether the CSI effect actually exists is a matter of controversy.

An article from Rasmussen College lists seven crime-show myths: detectives analyze the evidence; crime scenes are processed quickly; crime scene working conditions are consistent; crime scene professionals turn off emotions when they’re on the job; all crime scenes are processed for evidence; every day is exciting when you’re a crime scene technician or police officer; and a criminal justice degree will automatically qualify you to be a police officer, CST, or forensic scientist.

Was it homicide or suicide? The two can be confused with each other, as well as with an accident. For example, Snyder notes that instant death from gunshot wounds is rare. “Even after bullet wounds penetrate the heart, a person may do extraordinary things, [so] it is always hazardous to conclude that a person could not have done some rational act after receiving gunshot wounds in vital organs.” Another error can involve hesitation shots. An investigator may think of suicide when only one bullet is recovered, and of a murderous gunfight when there are several. However, sometimes the victim pulls the gun away reflexively, and more than once, thus causing bullets to miss him entirely and creating the erroneous impression of a homicide. Geberth (p. 853) presents four different death scenarios as case histories of investigational ambiguity. “In two of the events, the perpetrators involved staged the scene to make the death appear to be a suicide. In both instances, due to a faulty police investigation, the offenders were not charged with murder. The other cases I present are classic equivocal death situations in which different medical and investigative opinions are rendered as to the manner of death of the victim. This type of equivocal death case should have been ruled ‘undetermined’ because neither party could demonstrate with any degree of medical certainty that the death was in fact a homicide or a suicide.”

For those into microbiology, one of the most interesting aspects of a homicide crime scene is the protocol for protection from airborne pathogens and other biohazards. Antiputrefaction masks and protective clothing may need to be worn by investigators, and proper infection control techniques employed. Regarding estimation of time of death, the subfields of forensic entomology and forensic botany can be illuminating. Identifying the deceased may prove challenging, especially in relation to mass-fatality events.

PCR (polymerase chain reaction) is a laboratory technique that can make vast amounts of DNA from a small starter sample much faster and more efficiently than previous techniques, and it has found extensive application in forensics. It was created by American biochemist Kary Mullis, who received the 1993 Nobel Prize in Chemistry for it. Forensic DNA analysis also builds on the work of British geneticist Alec Jeffreys, who came up with DNA fingerprinting/profiling. The apparatus used to “amplify” DNA segments via PCR is referred to as a thermal cycler.
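
The power of the technique comes from simple exponential arithmetic: each thermal cycle (denature, anneal, extend) roughly doubles the number of target copies, so thirty cycles can turn a single molecule into about a billion. A minimal, idealized sketch (real-world efficiency falls short of perfect doubling):

```python
# Idealized PCR amplification: growth is (1 + efficiency) per cycle.
def pcr_copies(initial_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Copies after `cycles` rounds; efficiency=1.0 means perfect doubling."""
    return initial_copies * (1 + efficiency) ** cycles

print(f"{pcr_copies(1, 30):.2e}")       # ~1.07e9 from a single molecule
print(f"{pcr_copies(1, 30, 0.9):.2e}")  # ~2.3e8 at 90% per-cycle efficiency
```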

But forensic science, as noted above, has been subjected to considerable criticism and controversy in recent years. In February 2009, the National Academy of Sciences released a statement called “‘Badly Fragmented’ Forensic Science System Needs Overhaul: Evidence to Support Reliability of Many Techniques Is Lacking,” referring to a congressionally mandated report from the National Research Council. With the exception of nuclear DNA analysis, according to the NRC, “no forensic method has been rigorously shown to be able to consistently, and with a high degree of certainty, demonstrate a connection between evidence and a specific individual or source.” Questionable techniques include fingerprint and tire-track analysis; comparative bullet-lead analysis (used by the FBI for over four decades); bite-mark evidence from forensic dentistry; and even DNA evidence, which scientists have been able to show can be fabricated. So much for the gold standard!

In April of this year, reflecting these controversies, U.S. Attorney General Jeff Sessions announced that he would end a Justice Department partnership with independent scientists to raise forensic standards. Specifically, he said he would not renew the National Commission on Forensic Science, a 30-member panel of scientists, judges, crime lab leaders, and lawyers. Criticizing the government’s decision, six leading research scientists on the panel wrote that “limiting the relevant scientific community to forensic practitioners is a disservice to that field and to the criminal justice system.”

~ Wm. Doe, Ph.D. – August, 2017

SELECTED REFERENCES

W.G. Eckert (1997). Introduction to Forensic Sciences. Boca Raton, FL: CRC Press.

Nobel Prize website. 1993 NP for Chemistry: Kary Mullis. Summary of PCR technology.   http://www.nobelprize.org/nobel_prizes/chemistry/laureates/1993

LeMoyne Snyder (1977). Homicide Investigation. 3rd Ed. Springfield, Ill.: Charles Thomas Pub.

Vernon J. Geberth (2006). Practical Homicide Investigation: Tactics, Procedures, and Forensic Techniques. Boca Raton, FL: CRC/Taylor & Francis.

“DNA Evidence Can Be Fabricated, Scientists Show,” New York Times, August 17, 2009.


Homeless Scholar Blog ~ TRANSIENT PSYCHOSIS

“Creative people who can’t help but explore other mental territories are at greater risk, just as someone who climbs a mountain is more at risk than someone who walks along a village lane.” – R. D. Laing

“No single definition of psychosis will satisfy all clinicians,” says a textbook of psychiatry for medical students. “For practical purposes, psychosis may be defined as an inability to distinguish what is real from what is not, even when evidence of reality is clearly available.” Thus, it is considered an impairment of reality testing, rather than a general loss of mental functioning. At least one of the following symptoms is present: delusions, hallucinations, disorganized speech and/or behavior, or catatonia. A person whose misperception of reality can be corrected by evidence is not considered psychotic. Regarding symptom duration, transient psychosis, labeled “brief psychotic disorder” in the DSM-5, is defined as the presence of one or more psychotic symptoms with a sudden onset and full remission within one month. (From one to six months: schizophreniform disorder; six months or longer: schizophrenia. However, transient psychosis has a substantial rate of recurrence.)
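
Purely as an illustration, the duration boundaries just listed reduce to a simple lookup; actual diagnosis of course weighs far more than duration:

```python
# Illustrative only: the DSM-5 duration boundaries described above.
def classify_by_duration(months: float) -> str:
    if months < 1:
        return "brief psychotic disorder"
    elif months < 6:
        return "schizophreniform disorder"
    return "schizophrenia"

print(classify_by_duration(0.5))  # brief psychotic disorder
print(classify_by_duration(7))    # schizophrenia
```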

Stress plays a major role in most theories of psychosis etiology. A recent study of 39 low- and middle-income countries used logistic regression analysis to test the hypothesis that stress sensitivity would be associated with psychotic experiences. Not surprisingly, poorer people reported more stress sensitivity as well as more psychotic experiences compared to the financially better off. “Greater stress sensitivity,” write the authors, “was associated with increased odds for psychotic experiences, even when adjusted for co-occurring anxiety and depressive symptoms” (DeVylder et al, 2016). A cross-sectional study of 200 members of the UK general public with self-report questionnaires showed that a history of trauma was significantly associated with both persecutory ideation and hallucinations. “The study,” say the authors, “indicates that trauma may impact non-specifically on delusions via affect but that adverse events may work via a different route in the occurrence of hallucinatory experience” (Freeman & Fowler, 2009). In a study of psychotic-like experiences in nonpsychotic help-seekers, Yung et al (2006) used the 3-factor CAPE instrument (Community Assessment of Psychotic Experiences), the factors being Bizarre Experiences, Persecutory Ideas, and Magical Thinking. As an example of the range, the Persecutory Ideas items go from “Do you ever feel as if some people are not what they seem to be?” to “Do you ever feel as if there is a conspiracy against you?”. Bizarre Experiences and Persecutory Ideas were “associated with distress, depression, and poor functioning…and may confer increased risk of development of full-blown psychotic disorder.” Zverova (2012) concluded from a study of caregiver burden that “psychotic symptoms in overloaded individuals may be more common than was previously thought.” Finally, certain stressors may turn standard depression into a psychotic form.

Psychosis is to be distinguished from dissociative disorders, which also involve a distancing from reality. A sense that one is unreal: this is depersonalization. (If one feels that the self is real, but that the world is not, that is derealization. The two often occur together.) Depersonalization is categorized as a type of dissociation because of this connection. It often happens transiently, as a result of sleep deprivation, intoxication, extreme fatigue, or various types of psychological stress. When depersonalization is persistent or recurrent and causes significant distress, it is considered to be pathological. Even then, though, reality testing remains intact. As for etiology, there is no consensus among experts and very little systematic research. Some treatment successes have been reported with cognitive and behavioral therapy, hypnosis, and psychodynamic psychotherapy, as well as distraction techniques. The evidence for the efficacy of medication is weak, but strongest for those whose depersonalization has an anxiety component.

Mental disorders can be classified categorically (present or absent) or dimensionally (on a continuum of severity). Psychotic phenomena could be defined dimensionally, from most transient in a basically non-psychotic individual to a permanent state of affairs in someone who is apparently incurably divorced from reality without medication. “Approximately half of patients diagnosed with brief psychotic disorder retain this diagnosis; the other half will evolve into either schizophrenia or a major affective disorder. There are no apparent distinguishing features between brief psychotic disorder, acute-onset schizophrenia, and mood disorders with psychotic features on initial presentation” (Kaplan & Sadock, 2000). Using the CAPE questionnaire mentioned earlier, Hanssen et al (2003) found that patients with anxiety and mood disorders “had elevated scores on positive psychosis items, indicating that expression of psychosis in non-psychotic disorders is common.” Verdoux & van Os (2002), in a research review, conclude that the dimensional approach, by examining the spectrum of psychosis-like signs in non-clinical populations, may elucidate the risk factors for actual psychosis better than the categorical approach.

Environmental adversity (e.g., familial disruption or combat-related trauma) combines with cultural expectations and support systems to shape the manifestation of symptoms. By definition, the madness is short-lived, although often recurrent. But this brings up the issue of social pathology; that is, should we accept as inevitable a certain level of societal stress for the masses, or is it within our power to mitigate it? Most people don’t go crazy from systemic stress, but those who do highlight the pernicious social forces that are at play. Psychopathology should always prompt a certain questioning of the rationality of the social system within which it is manifested.

(Of course, this is nothing new, and was actually something of an in-vogue theme in the 1960s. Radical psychiatrists like R.D. Laing and Thomas Szasz were key figures in the development of an anti-psychiatry movement (though Laing and Szasz held opposing political views: left and right, respectively).) But the moral, social, and psychological question remains: to what extent should one adapt to a society or world perceived as itself being mad? Each person must answer this for themselves, but those who choose a radical path in this regard must be cognizant of the risk of possible transient psychosis (or at least anxiety and/or depression) should they overestimate their psychic resources when initiating such psycho-cultural resistance, especially if it leads to social isolation. That risk would be greatly reduced if the resistance were to be carried out as part of a collective effort at radical social change.

~ Wm. Doe, Ph.D. – July, 2017

SELECTED REFERENCES

American Psychiatric Association (2013). Diagnostic and Statistical Manual of Mental Disorders, 5th Ed. DSM-5. (Washington, DC: American Psychiatric Publishing).

R.J. Waldinger (1997). Psychiatry for Medical Students, 3rd Ed. Washington, DC: American Psychiatric Press.

J.E. DeVylder et al (2016). Stress sensitivity and psychotic experiences in 39 low- and middle-income countries. Schizophrenia Bulletin, 42 (6), 1355-1362.

D. Freeman & D. Fowler (2009) Sep 30. Routes to psychotic symptoms: Trauma, anxiety, and psychosis-like experiences. Psychiatry Research 169 (2): 107-112.

A.R. Yung et al. (2006). Psychotic-like experiences in non-psychotic help-seekers: Associations with distress, depression, and disability. Schizophrenia Bulletin, 32 (2).

Kaplan & Sadock (2000). Comprehensive Textbook of Psychiatry, 7th Ed., p. 1242.

M. Hanssen et al. (2003). How psychotic are individuals with non-psychotic disorders? Soc Psychiatry Psychiatr Epidemiol, 38: 149-154.

H. Verdoux & J. van Os (2002). Psychiatric symptoms in non-clinical populations and the continuum of psychosis. Schizophrenia Research, 54: 59-65.

M. Zverova (2012). Transient psychosis due to caregiver burden. Neuro Endocrinol Lett., 33 (4).


Homeless Scholar Blog ~ MOVEMENT DISORDERS

“A right-handed, Hispanic man in his late sixties was admitted with the acute onset of slurred speech, confusion, and sudden flailing movements of the right arm…The initial examination revealed a confused, agitated man with sudden involuntary movements of his right hemibody. These flailing movements increased with attempts to perform functional tasks, such as feeding and grooming…The patient’s agitation was difficult to control…He required a safety belt in the wheelchair, as well as safety pads….”

Fortunately, most movement disorders are not as extreme as the one described here, which is known as hemiballismus, usually caused by destructive lesions of the contralateral subthalamic nucleus. The wild, forceful movements are typically absent during sleep, but increased by stress. Although disabling, the condition is usually self-limited, lasting 6 to 8 weeks. If severe, it can be treated with an antipsychotic, preferably olanzapine, for one to two months. Hemiballismus is a severe form of the general type, chorea: nonrhythmic, jerky, rapid, nonsuppressible movements, mostly of the distal muscles and face. A slower form of chorea is athetosis: writhing, sinuous movements resembling a dance. Both chorea and athetosis result from impaired inhibition of thalamocortical neurons by the basal ganglia of the forebrain, which may be caused by excessive dopamine.

For voluntary movement to occur, there must be a complex interaction of three areas of the central nervous system: the pyramidal tracts (bundles of motor fibers going from the cerebral hemispheres through the “pyramid” structures of the medulla in the brainstem, then down the spinal cord, and causing voluntary movement); the extrapyramidal system (the basal ganglia, i.e., collections of nerve cells located deep in the forebrain, not involving the medullary pyramids, and modulating involuntary movement); and the cerebellum, which controls coordination. Since most lesions causing movement disorders occur in the second system, the disorders are sometimes called extrapyramidal. These disorders can be unilateral, bilateral, focal (affecting a single part of the body), or segmental (affecting two or more adjacent parts of the body).

Symptoms of movement disorders include excessively slow or difficult walking, stiffness, tremor, abnormal posture, loss of balance, incoordination, involuntary or abnormal movement, twitches, muscle spasms, and frequent falls. There are many causes (e.g., Parkinson’s disease, stroke, Huntington’s disease), and sometimes the cause is unknown. Potentially influential factors include infections, inflammation, stroke, toxins, trauma, metabolic disorders, autoimmune and hereditary diseases, and reactions to certain medications.

There are also functional (psychogenic/stress-induced) MDs, characterized by spasms, shaking, or jerks, speech disorder, and bizarre gait. There may be fixed posture and/or active resistance against passive movement. They can mimic organic syndromes, such as tremor, dystonia, chorea, tics, myoclonus, stereotypies, parkinsonism, and paroxysmal dyskinesias. Characteristics include abrupt/sudden onset; triggering by emotional or physical trauma; fatigue; psychiatric co-morbidity; lack of emotional concern about the disorder (“la belle indifférence”); exposure to neurological disorders in one’s occupation or while caring for someone with similar problems; and the disappearance of the movements with distraction, with maneuvers such as pressing on a particular spot, or with application of a tuning fork (along with suggestion). Some patients may be unable to accept a “psychiatric” diagnosis such as this.

One example of a movement disorder deserves special mention here, as it is iatrogenic: namely, tardive dyskinesia, a side effect of antipsychotic medication. This refers to involuntary, repetitive movements of the tongue, lips, face, trunk, and extremities that occur in patients treated with long-term dopaminergic antagonist medications. It has been associated with polymorphisms of dopamine receptor genes, but the exact mechanism of the disorder is unknown. The overall risk of developing it among those on antipsychotic meds is between 30% and 50%. Various other medical/neurological conditions need to be ruled out first, such as Parkinson’s disease, strokes, and Huntington’s disease. The most effective treatment is prevention. Most psychiatrists use a standardized rating scale called the Abnormal Involuntary Movement Scale (AIMS) to screen for tardive dyskinesia at least once each year.

A more common example of a movement disorder is essential tremor. Sometimes confused with Parkinson’s disease, it causes involuntary and rhythmic shaking, most often in the hands, especially when one does simple tasks such as drinking from a glass or tying shoelaces. Usually not dangerous, it can be severe and worsen over time. It is most often seen in people over 40. Treatment is far from satisfactory and usually takes the form of beta blockers such as propranolol and anti-seizure medications like primidone; disabling grogginess is a chief complaint with the latter. MR-guided focused ultrasound, approved by the FDA, is said to show some promise, although it does destroy a small part of the brain, which can lead to abnormal sensations and affect gait.

In general, apart from specific drug therapies for specific syndromes causing the given movement disorders (e.g., L-dopa for Parkinson’s Disease), physical therapy may help MD sufferers. (Sometimes, post-stroke MDs are self-limiting.) The proper PT can increase strength, flexibility, and balance with specific attention to gait training. The aforementioned psychogenic movement disorders in particular have proven very difficult to treat, but short-term and long-term successful outcomes have been documented with intense programs of combined physical and occupational therapy based on a behavioral motor reprogramming approach.

~ Wm. Doe, Ph.D. – June, 2017


Homeless Scholar Blog ~ MEMORY: NORMAL and ABNORMAL

“The growth and maintenance of new synaptic terminals make memory persist…The ability to grow new synaptic connections as a result of experience appears to have been conserved throughout evolution…The same [molecular] switch is important for many forms of implicit memory in a variety of other species…from bees to mice to people.” – Eric Kandel

There are some who worry about their memory as they get older, viewing slips of function as a possible early sign of dementia. Not to deny a degree of cognitive decline as part of normal aging, but there are forgetful young people, too, and memory slips in those who are older are often actually trivial lapses of attention. So, before getting into the neuropathology, a few words about normal memory.

The hippocampus, located deep in the medial temporal lobes of the brain (within the limbic system), is generally cited as the area most associated with the formation of new memories, as well as spatial cognition. Its shape has been likened to that of a seahorse, and there is one on each side of the brain. Short-term memory (STM) lasts only about 18 seconds (barring rehearsal), and it is sometimes a challenge to get such information into relatively permanent storage. It is to be distinguished from working memory, which is a concept referring to structures and processes involved in a sort of active attention (as opposed to the passive capacity of STM). (Long-term potentiation refers to a relatively long-lived increase in synaptic strength, the biological correlate of long-term memory (LTM).) STM involves the alteration of pre-existing proteins, whereas LTM involves actual protein synthesis. Memories are not always straightforward, and there is a considerable amount of “creative restructuring” at work, a process called “reconsolidation”. (Those who are impressed by their “flashbulb” memories sometimes doubt the force of reconsolidation, but the accuracy of such memories has been questioned by some researchers. Vividness does not necessarily equal accuracy. In connection with this, it is worth mentioning psychologist Elizabeth Loftus, whose research questioning the reliability of “recovered” memories of some individuals charging childhood sexual abuse is an ongoing source of controversy.)

In the early 1950s, Canadian neurosurgeon Wilder Penfield published the findings of his operations on patients with intractable epilepsy, which had a major impact on this field. In attempting to distinguish normal from abnormal brain tissue in conscious patients under local anesthesia, Penfield stimulated various areas experimentally, and in doing so sometimes elicited startling responses, with patients stating that they were experiencing vivid, long-forgotten memories from their distant pasts. Brenda Milner, a neuropsychologist working with him, was contacted by another surgeon, William Scoville, concerning an epilepsy patient whose capacity for forming new memories had been destroyed (as happened with some of Penfield’s patients). This was Henry Molaison, better known as H.M. When Milner first met Molaison, she encountered a man who could not maintain memory of anything for more than 30 seconds. Each day, he had to be re-introduced to her as if for the first time. His recollection of most of his past was intact. She wondered what he was still capable of and gave him various tests. In one test, tracing star shapes over several tries using a mirror, he improved his skill at the task, doing as well as normal subjects. This led to her realization that a different type of memory, with neural substrates located away from the hippocampus, was at work: implicit, or procedural, memory, an unconscious form which appears to be controlled by the basal ganglia and cerebellum. The psychiatrist turned neuroscientist Eric Kandel said that Milner’s study of H.M. “stands as one of the great milestones in the history of modern neuroscience. It opened the way for the study of the two memory systems in the brain, explicit and implicit, and provided the basis for everything that came later–the study of human memory and its disorders.”

Dr. Kandel identified the molecular machinery of memory formation using the giant marine snail Aplysia as a model and focusing on some of its reflexes. This interesting creature has extra-large, actually visible, neurons, which are easily manipulated. Despite the vast difference between Homo sapiens and invertebrates, the basic synaptic function of memory has been conserved throughout evolution, as later research indicated. Kandel showed that while short-term memories involve certain changes in existing proteins, long-term memories require the creation of new proteins that alter the shape and function of the synapse, the effect of which is to allow more neurotransmitter to be released. The specific protein that switches STM to LTM is known as CREB, which is short for “cAMP response element-binding” (protein). (Some may remember cAMP from BIO 101 (or the equivalent); it stands for cyclic adenosine monophosphate, a chemical messenger important in many biological processes.) For this work, he shared the 2000 Nobel Prize in Physiology or Medicine.

A note on post-traumatic stress disorder (PTSD): The mechanisms proposed to explain how PTSD can follow trauma (especially from brain injury) involve fear conditioning, memory reconstruction, and post-amnesia resolution. The fear conditioning model posits extreme autonomic nervous arousal at the time of a traumatic event, resulting in the release of stress neurochemicals that cause an overconsolidation of trauma memories. In the memory reconstruction model, the patient synthesizes traumatic memories from available information, and these images may change over time. Finally, the post-amnesia resolution model states that the psychological impact of the trauma is delayed until after the amnesia directly following the event. Misattribution of PTSD symptoms to strictly neurological effects can be counter-therapeutic in that it unnecessarily reduces people’s expectations for recovery.

Memory loss can be the result of stress, psychological problems, depression, or various medical conditions, including thyroid disease, diabetes, hypertension, vitamin deficiencies, arteriosclerosis, and stroke. Then, of course, there are the neurological disorders: dementia (vascular, frontotemporal, Lewy body, Alzheimer’s disease), progressive supranuclear palsy, Parkinson’s disease, Huntington’s disease, and corticobasal degeneration.

Dementia in general is characterized by memory deterioration and an increasing inability to manage the personal affairs of daily life. Personality changes are the norm, with psychotic symptoms often developing later. Early in Alzheimer’s, the memory impairment and other cognitive deficits tend to be mild compared to the changes in behavior. Those afflicted may do things completely out of character and yet have little understanding that they have acted in a socially inappropriate manner. After years of decline, the outcomes are weight loss, incoherence, and inability to walk or dress without assistance. Vacuoles (microscopic holes), amyloid plaques, and neurofibrillary tangles are evident in the brain tissue upon autopsy.

Drug therapy for Alzheimer’s falls into two basic categories: cholinesterase inhibitors (e.g., Aricept) and a glutamate regulator (memantine). These medications are sometimes used together, but they do not treat the underlying disease; they merely help mask the symptoms. Cognitive rehabilitation for memory deficits has met with only limited success. According to a 2016 Cochrane review (which searched 16 databases), stroke patients reported benefits on subjective short-term measures, but these did not persist. No benefits were reported on objective memory measures, mood, or daily functioning. (However, at least some of these findings may be due to the poor methodological quality of the included studies.)

To end on a philosophical note, it is worth mentioning the relation of memory to identity. The memory criterion for personal identity (i.e., continuity of self through time) has been criticized as uninformative, since one can by definition remember only one’s own experiences and no one else’s. To meet this objection, some scholars have introduced the notion of “quasi-memory” (with the implication of personal identity removed), but this has not been established empirically. There are a number of other related philosophical problems, but on the phenomenological level of a person suffering from dementia, brain injury or other pathology involving loss of memory, there is often a reported sense of some loss of identity, and those patients may also need help coping with the emotional stress of such neurological self-alienation.

~ Wm. Doe, Ph.D. – May, 2017


The Homeless Scholar Blog ~ MEDICAL IMAGING: A NOTE on the PHYSICS

“I should like here to outline the method for electrons and protons, showing how one deduces the spin properties of the electron and then how one can infer the existence of positrons with similar spin properties and with the possibility of being annihilated in collisions with electrons.” – Paul Dirac, Nobel lecture, 1933

It may seem a bit absurd for me to discuss the physics of even one of these technologies, since whole tomes have been devoted to that aspect of each of them, but this is only meant to be a sketched appreciation of the historical background, and there will be no math beyond a reference to the conceptual. The focus is on MRI (magnetic resonance imaging), CT/CAT (computed tomography), and PET (positron emission tomography). The term “tomography” refers to imaging by sections, cyber-slicing, so to speak, through the use of a penetrating wave of some kind. When clinical assessment and X-rays are insufficient to diagnose an illness or injury, physicians will resort to more high-powered methods, such as the ones just listed, although often just an ultrasound will suffice (for example, when pancreatitis or even pancreatic cancer is suspected). Overuse of the radiation-based scans, often driven by so-called “defensive medicine”, is a problem to which I will return at the end. The indications for CT and MRI overlap (especially for cancer detection), but on the whole, the former is used to visualize bone (there being virtually no water in bone), and the latter, for soft-tissue evaluation. If a visualization of metabolic processes seems necessary as a follow-up, PET is sometimes subsequently ordered.

Nikola Tesla discovered the rotating magnetic field back in 1882, but the specific phenomenon on which MRI is based, nuclear magnetic resonance (NMR), was not discovered until 1937, when Isidor I. Rabi, a physicist at the Pupin Physics Laboratory in NYC, did experiments demonstrating the resonant interaction of magnetic fields and radiofrequency pulses. (“Resonance” in physics refers to a matching of frequencies; in this case, that of the radio waves and the precession of the protons of water molecules.) The idea for NMR actually came from an obscure Dutch physicist named Cornelis Gorter in the previous year, but he could not demonstrate it experimentally due to a limited setup. Rabi, with his superior technological resources, was able to detect magnetic resonance in a “molecular beam” of lithium chloride. In practical terms, this meant that the structure of chemical compounds could now be identified spectroscopically. Several years later, Bloch and Purcell simultaneously demonstrated NMR in condensed matter (water and paraffin, respectively).

When a patient is placed in an MRI machine, a powerful magnet pulls the positively charged protons of the body’s water molecules into alignment, after which a radio-wave pulse of the same frequency as the particles’ oscillation knocks them askew. When the radiofrequency pulse is turned off, the protons relax and return to alignment, sending back “captured” information about the structure of which they are a part. This raw signal is recorded as a diffuse, amorphous frequency-domain image called k-space. To get a coherent picture, the data must be converted out of the frequency domain by a computer algorithm called the Fast Fourier Transform (Liney, 2010). The result is the remarkably detailed pictures of the brain and other bodily organs we are used to seeing in reproductions. (The Fourier transform decomposes TIME signals into sinusoidal components of varying FREQUENCY. Thus, it uses mathematics to simplify physical phenomena for technological applications.) A chemist and a physicist (Paul Lauterbur and Peter Mansfield) were given the Nobel Prize for the invention of MRI, but it was a physician, Raymond Damadian, who actually built the first NMR body scanner, a machine that can be viewed in the Smithsonian.
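
A minimal numpy sketch of the k-space idea, using a synthetic square in place of real anatomy: the acquired data live in the frequency domain, and an inverse (fast) Fourier transform recovers the picture.

```python
# Toy demonstration: frequency-domain "k-space" data -> spatial image via FFT.
import numpy as np

image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0                        # stand-in for anatomy

k_space = np.fft.fftshift(np.fft.fft2(image))    # what the scanner acquires
recon = np.fft.ifft2(np.fft.ifftshift(k_space))  # the reconstruction step

print(np.allclose(recon.real, image))  # True: the image is recovered exactly
```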

The origins of the CT scan go back to 1917, when Austrian mathematician Johann Radon, working on a gravitational theory, developed the mathematics showing that an object can be reconstructed from the infinite set of all its projections. This became known as the Radon transform, which was first applied in radioastronomy. Later, it was applied to image reconstruction from radiological information, based on the concept of the “line integral”, which refers to the sum of absorptions or attenuations along the path of the X-ray beam as it loses intensity. In a CT scan, a rotating tube sends X-rays through the patient to a detector on the other side. The exit beam is integrated along a line between the X-ray source and detector. Measurement of the intensity involves a linear attenuation coefficient as a function of a given location and the effective energy. The basic measurement of CT is, then, the line integral of the linear attenuation coefficient. Without knowledge of Radon’s work, Allan Cormack, a South African physicist, developed the equations necessary for image reconstruction, the process in which X-ray projections of a sample, taken at a variety of different angles, are combined by computer to reconstruct the image being viewed. Godfrey Hounsfield, a British engineer, designed the apparatus that would permit image construction. Cormack and Hounsfield jointly received the Nobel Prize for their work.
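
For the curious, scikit-image ships ready-made Radon transform and filtered back projection routines, so the whole projection-and-reconstruction cycle can be sketched in a few lines (this uses the standard Shepp-Logan test phantom, not clinical data):

```python
# Projection (line integrals at many angles) and reconstruction, via skimage.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.25)        # standard test phantom
theta = np.linspace(0.0, 180.0, 90, endpoint=False)

sinogram = radon(image, theta=theta)   # stack of line-integral projections
recon = iradon(sinogram, theta=theta)  # filtered back projection

print(f"mean reconstruction error: {np.abs(recon - image).mean():.4f}")
```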

As mentioned, sometimes PET scans are ordered to image functional processes, as opposed to structure. The most common use is to detect and track possible cancer metastasis. In this technology, radioisotopes that are analogs of sugar are injected and followed by the machine. These isotopes decay, freeing positrons (the antimatter counterparts of electrons), which collide with electrons in the tissues of interest, whereupon both particles are annihilated. The collision releases gamma rays, which are detected and tomographically processed to show functioning at the metabolic level. “From our theoretical picture,” wrote Dirac in 1933, “we should expect an ordinary electron, with positive energy, to be able to drop into a hole and fill up this hole, the energy being liberated in the form of electromagnetic radiation. This would mean a process in which an electron and a positron annihilate each other.” PET scans can be traced directly back to this antimatter prediction of the famous British quantum physicist, first stated by him in 1928. Only four years later, Carl David Anderson of Caltech discovered the positron by studying the tracks of cosmic-ray particles in a cloud chamber. (I say “only” because, for example, it took some twenty years for experimental validation of James Clerk Maxwell’s prediction of electromagnetic radiation. This was done by Heinrich Hertz who, in 1887, discovered radio waves. Interestingly, he believed that they were “of no use whatever.”)
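
Each annihilation converts the rest mass of the electron-positron pair into a pair of gamma photons of about 511 keV apiece, a figure that follows directly from E = mc² and can be checked in a couple of lines:

```python
# Annihilation photon energy from the electron rest mass (CODATA constants).
from scipy.constants import m_e, c, e

E_joules = m_e * c**2          # rest energy of one electron (or positron)
E_keV = E_joules / e / 1e3     # convert joules -> keV
print(f"annihilation photon energy ~ {E_keV:.0f} keV")  # ~511 keV
```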

There are eight parameters related to the radiation of CT scans, the three most important being the tube current-time product (mAs), peak kilovoltage (kVp), and pitch, the ratio of table travel per gantry rotation to the total collimation width of the radiation beam. Units of measurement for radiation can be confusing because the older, pre-SI units were never fully retired. Sieverts and grays are large units relative to diagnostic doses; generally, in radiology journals, you will see the unit “mSv” for millisievert or “mGy” for milligray. The latter refers to the dose absorbed by a given tissue; the former, to the biological impact of that absorption. Sometimes the older units “rem” and “rad” (expressing the same information as sieverts and grays) are still used. A typical chest X-ray is only 0.02 to 0.1 mSv, whereas a CT scan of the abdomen is usually 8-10 mSv (not counting second passes). Yet, even today, you will hear some medical personnel claim that a CT scan “is like getting a few chest X-rays.” I have heard this myself.
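
Of the three parameters, pitch is the easiest to show as plain arithmetic (the scanner values below are hypothetical, chosen only to make the numbers round):

```python
# Pitch = table travel per gantry rotation / total beam collimation width.
def ct_pitch(table_travel_mm: float, n_slices: int, slice_width_mm: float) -> float:
    return table_travel_mm / (n_slices * slice_width_mm)

# e.g., 40 mm of travel per rotation with 64 x 0.625 mm collimation
print(ct_pitch(40.0, 64, 0.625))  # 1.0
```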

Ionizing radiation is classified as a carcinogen by the FDA, and children are the most vulnerable, due to their size and remaining life span. (However, it has been shown that the risk may be twice as high as previously estimated for middle-aged patients (Shuryak & Brenner, 2010).) A 2013 study estimated that the four million annual pediatric CT scans performed nationally will cause 4,870 cancers in the future. The researchers claim that over 40% of these cancers could be prevented by reducing the highest 25% of doses to the median (Miglioretti et al, 2013). One hundred millisieverts (10 rem) is generally considered the upper limit of low-level radiation.

While the theme of this post is the physics of medical imaging, a related theme, as noted earlier, is that of defensive medicine, which has been defined as “departing from normal practice as a safeguard from litigation. It occurs when a medical practitioner performs treatment or procedures to avoid [the charge] of malpractice” (Sekhar & Vyas, 2013). Such practices can pose health risks to the patient as well as increase healthcare costs. In some situations, defensive medicine is justified, but all too often it bespeaks potentially hazardous ineptitude. In the context of medical imaging using the relatively high radiation of CT scans, the problem is obvious, especially for the developing bodies of pediatric patients. Radiation, again, is carcinogenic, and residents especially, unsure of their diagnostic ability, may order such tests inappropriately and frequently, in lieu of exercising clinical judgment. The U.S. could benefit from adopting European referral guidelines for imaging (part of the EURATOM Basic Safety Standards Directive), which require a space on the referral sheet for grading pretest probability and documenting the urgency of imaging (Kainberger, 2017).

~ Wm. Doe, Ph.D. – April 20, 2017

SELECTED REFERENCES

P.A.M. Dirac (1933). Theory of Electrons and Positrons. Nobel lecture. https://www.nobelprize.org/nobel_prizes/physics/laureates/1933/dirac-lecture.pdf

G. Liney (2010). MRI from A to Z: A Definitive Guide for Medical Professionals. London: Springer-Verlag.

I. Shuryak, D.J. Brenner (2010). Cancer risk after radiation exposure in middle age. J. Natl Cancer Inst., Nov. 3: 102(21).

D.L. Miglioretti et al. (2013). The use of CT in pediatrics and associated radiation exposure. JAMA Pediatrics, Aug. 1; 167(8).

M.S. Sekhar, N. Vyas (2013). Defensive medicine: A bane to health care. Ann Med Health Sci Res., 3(2).

F. Kainberger (2017). Defensive medicine and overutilization of imaging–an issue of radiation protection. Wien Klin Wochenschr, 129.


Homeless Scholar Blog ~ UNDERSTANDING PAIN

“Few things a doctor does are more important than relieving pain…pain is soul-destroying. No patient should have to endure intense pain unnecessarily. The quality of mercy is essential to the practice of medicine; here, of all places, it should not be strained.” – Marcia Angell, M.D.

Pain is such a vast, important topic in medicine, physiology, and psychology that one hesitates to essay a brief, general treatment of the subject, but such an attempt follows anyway. The topic is complicated because of self-reports with no apparent physiological basis, individual differences in the subjective experience of pain, and ethical restrictions on pain research. Apart from the acute/chronic distinction, which often uses six weeks as a demarcation but actually tends to be rather arbitrary, types of pain are standardly categorized as nociceptive, neuropathic, and psychogenic. (Nociceptive includes somatic and visceral, sometimes a difficult distinction to make. In general, though, these can be distinguished: unlike somatic pain, visceral pain is vague, diffuse, and poorly defined and located. Symptoms of the latter may include nausea, vomiting, and changes in vital signs. It is usually perceived in the abdominal midline or chest.) Nociceptive pain is the most common type and is caused by the detection of noxious or potentially harmful stimuli by mechanical, chemical, or thermal pain receptors. The neuropathic type is associated with nerve damage following infection or injury, rather than stimulation of pain receptors. (Nerve injury leads to incorrect signals being sent to the brain, confusing it as to the location of the problem. An example would be phantom pain following limb amputation. Neuropathic pain is also thought to be involved in reactions to chemotherapy, alcohol, surgery, HIV infection, and shingles. With shingles, for example, a subtype of neuropathic pain called allodynia often comes into play; that is, pain that arises from simple contact that is not normally painful, such as clothes touching the skin. This is to be distinguished from hyperalgesia, a hypersensitivity to mildly painful events. Less knowledgeable clinicians sometimes dismiss neuropathic pain as psychogenic.) The psychogenic type is caused, increased, or prolonged by psychological factors.

There are numerous theories of pain, none of which completely accounts for all aspects of pain perception. Most notable among them are: specificity, pattern, intensity, and gate control. The specificity theory holds that specific pain receptors transmit signals to a “pain center” in the brain. While true, this doesn’t account for psychological factors affecting pain perception. Pattern theory posits that somatic sense organs respond to a dynamic range of stimulus intensities; a pattern of neuronal activity encodes information about the stimulus. Intensity theory denies distinct pathways, stating rather that the number of impulses determines the intensity. The gate control theory is the most widely accepted of the four. Proposed in 1965, this theory states that there is a control system in the dorsal horn of the spinal cord that determines which information passes to the brain. This function is controlled by the substantia gelatinosa (SG): activity in small pain fibers inhibits the SG, opening the “gate” and allowing transmission cells to fire, whereas non-painful stimuli excite the SG to close the gate, resulting in a reduced pain signal. (There are some problems with the classic gate control theory, though. For example, it leaves out of the modulatory system model what we now know to be relevant descending small-fiber projections from the brainstem.) Recent neuroimaging data suggest that brain functions may not be modular but rather are likely to involve networks. Moreover, in chronic pain, neuroplasticity may occur, altering network dynamics (Baliki et al, 2011).

Nociception is facilitated by both neurotransmitters (e.g., glutamate, serotonin) and neuropeptides, a class comprising substance P, ACTH, somatostatin, and many others, including the enkephalins. Substance P is especially active in pain transmission (the “P” standing not for “pain” but for “preparation”). It is a peptide containing 11 amino acids that binds to so-called NK-1 receptors located on the nociceptive neurons of the dorsal horn of the spinal cord. Substance P also occurs in the brain, where, in addition to pain, it is associated with nausea, respiration rate, anxiety, and neurogenesis. Unlike glutamate, substance P has been associated with relatively slow excitatory connections, and hence with chronic pain sensations transmitted by strands known as C-fibers. In addition, through initiating the expression of chemical messengers called cytokines, substance P may play a part in neurogenic inflammation as a response to certain types of infection or injury.

Because there may be a significant subjective component to chronic pain, much research has been devoted to the psychological aspects of the complaint. People have active or passive pain-coping styles based on implicit theories of pain. If the pain is seen as malleable, this involves an incremental theory with active coping strategies. Patients who feel helpless may catastrophize, amplifying the sense of pain and thereby making the situation worse than it needs to be. Research findings from this area, a group of medical psychologists (Higgins, 2015) write, “may represent an underlying social-cognitive mechanism linked to important coping, emotional, and expressive reactions to chronic pain.”

“Pain catastrophizing [PC],” another group of researchers note (Quartana, 2009), “is characterized by the tendency to magnify the threat value of a pain stimulus and to feel helpless in the context of pain, and by a relative inability to inhibit pain-related thoughts in anticipation of, during, or following a painful encounter.” Assessment instruments employed are usually the Coping Strategies Questionnaire and the Pain Catastrophizing Scale. These authors have criticized the literature on the subject for poorly established validity and reliability. In addition to problems with assessment, there is the issue of the construct of PC itself, which often substantially overlaps with other concepts (e.g., pain-related anxiety and somatization). The authors advance an integrative model that “explicitly notes and emphasizes possible inter-relationships between theoretical mediators and moderators of catastrophizing’s effects on pain and pain-related outcomes, such as disability and social networks.” Relevant to this model will surely be carefully designed studies investigating not only fear and anxiety but also the sense of helplessness in the wake of pain.

Returning to the opening thought, it should be added that the propensity in American medicine to withhold opioid drugs for pain is a cultural form of deleterious stupidity that should be overcome. The risk of addiction is greatly outweighed by the aggregate of suffering due to insufficient medication, one outcome of which may be suicide (Hassett, 2014).

~ Wm. Doe, Ph.D., March 20, 2017

SELECTED REFERENCES

M.N. Baliki et al (2011). The cortical rhythms of chronic back pain. J. Neurosci., 31.

N.C. Higgins et al (2015). Coping styles, pain expressiveness, and implicit theories of chronic pain. J. of Psychology, 149 (7).

P. J. Quartana et al (2009). Pain catastrophizing: a critical review. Expert Rev. Neurother., 9 (5).

A.L. Hassett, M.A. Ilgen (2014). The risk of suicide mortality in chronic pain patients. Curr. Pain Headache Rep., 18.

The Homeless Scholar Blog: NEURAL EFFICIENCY and the IDEA of INTELLIGENCE

“Ultimately, it would certainly be desirable to have an algorithm for the selection of an intelligence, such that any trained researcher could determine whether a candidate’s intelligence met the appropriate criteria. At present, however, it must be admitted that the selection (or rejection) of a candidate’s intelligence is reminiscent more of an artistic judgment than of a scientific assessment.” – Howard Gardner, educational psychologist

A while back, I wrote a post on general intelligence, noting that experts disagree as to what it actually is and emphasizing a broad conceptual analysis of the idea. Yet, for some reason, I kept having nagging doubts: “Why not keep this simple? An intelligent person is simply one with greater neural efficiency. Measure that factor correctly and you will have a quantification of this concept.” Perhaps a more efficient nervous system is more “fluid”, more adaptable, and thus more fit and functional in general. Raymond Cattell had two divisions of intelligence: Fluid and Crystallized, the latter being our acquired knowledge and skills. But the former is a more “basic” sort of intelligence: how we process information without relying on a storehouse of previously acquired knowledge, using instead abstract reasoning, pattern recognition, and general problem-solving ability. Thrown into an unfamiliar situation with a requirement to think fast, all that stuff we’ve picked up over the years might avail us little or nothing. As Cattell put it, “It is apparent that one of these powers…has the ‘fluid’ quality of being directable to almost any problem.” Fluid intelligence (FI) is measured using a non-verbal, multiple-choice picture-completion test called the Raven Progressive Matrices, which focuses on detecting relationships among images. (Also used are the Cattell Culture Fair Test and the performance subscale of the Wechsler Adult Intelligence Scale (WAIS).)

Peter Schonemann, who studied under Cattell, stated that the g factor (the letter standing for general/basic smarts) does not exist, and that those who emphasized it, like Arthur Jensen, were distorting the original findings of Charles Spearman, with the effect, intentional or not, of putting racial minorities at a disadvantage in terms of policy decisions based on pro-hereditarian research. Canadian psychologist Keith Stanovich argues that IQ tests, or their proxies (e.g., the SAT), do not effectively measure cognitive functioning, because they fail to assess real-world skills of judgment and decision-making. But he does not say the tests should be abandoned, just revised to encompass measurement of the aforementioned skills. Going back to the basic semantic question, Australian educational psychologist R. W. Howard breaks intelligence down into three categories: Spearman’s g; a property of behavior; and a set of abilities. “Each concept,” he writes, “contains different information, refers to a different category, and should be used in different ways…A concept is never right or wrong, but is only more or less useful.” Matching a given use of the term “intelligence” with its appropriate category would eliminate much confusion. Furthermore, recent research on IQ testing suggests that at least part of what is being measured is motivation.

Brighter individuals display lower brain energy expenditure while performing cognitive tasks: this is the neural efficiency hypothesis in a nutshell. Such expenditure can be measured using PET scans, with cortical glucose metabolic rate used as a correlate of abstract reasoning and attention. Taking “reconfiguration” to reflect said expenditure, recent research has found that “brain network configuration at rest was always closer to a wide variety of task configurations in intelligent individuals.” This suggests that the ability to modify network connectivity efficiently when task demands change is a hallmark of high intelligence (Schultz and Cole, 2016). Dix et al (2016) have noted that high fluid intelligence and learning make for fast, accurate analogical reasoning. Yet, “for low FI, initially low cortical activity points to mental overload in hard tasks [and that] learning-related activity increases might reflect an overcoming of mental overload.” Swanson & McMurran (2017) conclude from a randomized controlled study that “improvement in working memory, as well as the positive transfer in learning outcomes, are moderated by fluid intelligence.” On the other hand, Neubauer and Fink (2009) note that, as opposed to moderate-difficulty tasks, when the more able individuals have to deal with very complex ones, they will invest more cognitive resources. “It is not clear,” write the authors, “if this reversal of the brain activation-intelligence relationship is simply due to brighter individuals’ volitional decision to invest more effort as compared to the less able ones, who might have simply ‘given up’ as they experience that the task surpasses their ability.” It is concluded that new study designs are necessary to explore this volitional factor of cortical effort.
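
To make the “reconfiguration” notion concrete, here is a hedged Python sketch, with synthetic data; this is not Schultz and Cole’s actual pipeline, and the distance measure is just one common choice. The idea: quantify how far a task connectivity pattern sits from the resting pattern as one minus the correlation of the two matrices’ upper triangles, with smaller values meaning less reconfiguration.

import numpy as np

def reconfiguration_distance(rest_fc, task_fc):
    """Illustrative distance between two functional-connectivity matrices:
    1 minus the Pearson correlation of their upper-triangle entries.
    Smaller values = the task network looks more like the resting one."""
    iu = np.triu_indices_from(rest_fc, k=1)
    r = np.corrcoef(rest_fc[iu], task_fc[iu])[0, 1]
    return 1.0 - r

# Synthetic example: a task state close to rest vs. an unrelated one.
rng = np.random.default_rng(seed=1)
rest = rng.standard_normal((10, 10))
near_task = rest + 0.1 * rng.standard_normal((10, 10))  # mild reconfiguration
far_task = rng.standard_normal((10, 10))                # wholesale reconfiguration
print(reconfiguration_distance(rest, near_task))  # close to 0
print(reconfiguration_distance(rest, far_task))   # close to 1

On this toy measure, the “efficient” brain is the one whose task matrices stay near its resting matrix, which is the pattern the cited study reports for more intelligent individuals.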

While Howard Gardner’s multiple-intelligence scheme seems to stretch the concept too far, a unitary notion centering on neural efficiency (NE) as an operational definition seems problematic due to its limited explanatory force. Despite the compelling evidence for the NE hypothesis, the semantic issue remains: there is no fact of the matter that determines the “proper” use of the concept of intelligence. Nevertheless, in the light of more recent research (from neuroimaging, especially), the opponents of NE reductionism are clearly on the defensive today.

~ Wm. Doe, Ph.D., February 2017

SELECTED REFERENCES

A. Dix et al (2016). The role of fluid intelligence in analogical reasoning: How to become neurally efficient? Neurobiology of Learning and Memory, 134B, 236-247.

D.H. Schultz, M.W. Cole (2016). Higher intelligence is associated with less task-related brain network reconfiguration. J. Neurosci., 36 (33).

H.L. Swanson, M. McMurran (2017). The impact of working memory on near and far transfer measures: Is it all about fluid intelligence? Child Neuropsychology, online pub 2/1/2017.

A. C. Neubauer, A. Fink (2009). Intelligence and neural efficiency. Neuroscience and Biobehavioral Reviews, 33, 1004-23.

PhD Pauper Blog ~ SOME DESULTORY NOTES on SUICIDE

“The thought that I might kill myself formed in my mind as coolly as a tree or flower.” – Sylvia Plath

Conflicting attitudes toward suicide can be traced at least as far back as ancient Greece, where convicted criminals could take their own lives; in Rome, attitudes became more restrictive because too many slaves (valuable property) were doing themselves in. In general, suicide is condemned by Christianity, Judaism, and Islam, but is tolerated by the Brahmans of India. Buddhist monks and nuns have, in social protest, set fire to themselves, and the Japanese hara-kiri and kamikaze customs were considered acceptable in that society. After the Middle Ages, legal codes in Western countries expressed a more permissive attitude. Physician-assisted suicide, though, continues to be more condemned than condoned.

With the empirical work of sociologists, especially Emile Durkheim, suicide, in the words of one academic, “was increasingly viewed as a social ill reflecting widespread alienation, anomie, and other attitudinal byproducts of modernity.” Thus, in many European countries in the late 19th and early 20th centuries, the act came to be thought of as “caused by impersonal or [social] forces rather than by the agency of individuals.” Writing in the 1890s, Durkheim listed four types of suicide: altruistic, fatalistic, egoistic, and anomic. The best known is the last, referring to the quality of “anomie,” a sense of personal normlessness and disconnection from society. Economic decline may figure into this, especially for middle-aged, male Protestants who have been laid off. Durkheim’s findings are generally considered still valid today, but psychologically-oriented researchers tend to be critical of them. This points up the conflicting orientations between the two disciplines in attempting to understand the same social phenomena.

Nearly a million suicide deaths occur worldwide each year. Associated with increased suicide rates are the global financial crisis, natural disasters, and air pollution. Risk is increased at the individual level by past self-harm, parental loss or separation, and younger age relative to classmates. Korea’s suicide rate ranks first in the world, the most common method being hanging, with falls and poisoning distant second and third. Of the many medical conditions that confer risk for suicide, stroke is prominent: patients frequently develop post-stroke depression. A recent study of the association found that suffering a stroke increases the risk of dying by suicide and of developing suicidal ideation, particularly in young adults and women. Also, though it is a rare phenomenon, murder-suicide has been associated with a “pathological” expression of masculinity, i.e., rigid ideas of manhood fostered by a hypercompetitive, patriarchal society. A recent study of the phenomenon revealed three themes: domestic desperation; workplace injustice; and school retaliation. It is argued that murder-suicide is an extreme end-product of “failed manhood” at work, at school, and/or within the family milieu. This is encapsulated by the term “aggrieved entitlement”.

An interesting aspect of this subject is the mischaracterization by police of murders as suicides. Such is often the intent of the deceased or of a perpetrator. A homicide detective has published his list of common mistakes in suicide investigation, which boil down to automatically presuming suicide when a case has been labeled as such in the initial call: “All death inquiries should be conducted as homicide investigations until the facts prove differently.” Failure to apply three basic considerations can throw an investigation off the track: the presence of the weapon or means of death at the scene; injuries or wounds that are obviously self-inflicted; and the existence of a motive or intent on the part of the victim to take his or her own life. Pertinent detail: “Family members have been known to conceal weapons and/or suicide notes in order to collect on an insurance policy.” Also of interest are his questions about the suicide note: “Was it written by the deceased? Was it written voluntarily? Does the note indicate suicidal intent?”

Regarding this distinction, Lemoyne Snyder, in his classic book on homicide investigation, has observed that “a murderer is more likely to fire several bullets into the victim to make sure that he is dead before leaving the scene. A suicide, on the other hand, frequently shoots himself but survives the fatal wound for a considerable period of time. Therefore, other factors being equal, a period of survival following the fatal wound favors suicide as the cause of death.” Elsewhere, he notes that “occasionally a person will shoot himself twice in [the temple]. These wounds are not always immediately fatal. It is not at all uncommon for a person to live for several hours or even days after a wound of this kind, and occasionally they will even recover. One should always look with great suspicion on wounds of entrance on other parts of the head, because they are much more likely to be due to murder.” [original emphasis]

“In the psychological sciences,” notes the Stanford Encyclopedia of Philosophy, “many suicidologists view suicide not as an either/or notion, but as a gradient one, admitting of degrees based on individuals’ beliefs, strength of intentions, and attitudes. The Scale for Suicidal Ideation is perhaps the best example of this approach.” There are 19 items in the scale, for example: “Wish to live; wish to die; reasons for living/dying; deterrents to active attempt; method (specificity or planning of contemplated attempt; availability or opportunity for contemplated attempt); sense of ‘capability’; final acts in anticipation of death,” and so on. Each item is rated on a scale of 0 to 2, and the ratings are summed.
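
As a rough illustration of the “gradient” idea, scoring amounts to summing the 19 item ratings for a total between 0 and 38. The Python below is a scoring sketch only, with invented example ratings; it is not the clinical instrument, whose items are administered and interpreted by a clinician.

def ssi_total(item_ratings):
    """Sum the Scale for Suicidal Ideation's 19 items, each rated 0-2.
    Returns a 0-38 total; higher totals indicate more severe ideation.
    Scoring sketch only, not a clinical tool."""
    if len(item_ratings) != 19:
        raise ValueError("The SSI has exactly 19 items")
    if any(r not in (0, 1, 2) for r in item_ratings):
        raise ValueError("Each item is rated 0, 1, or 2")
    return sum(item_ratings)

# Invented example: mostly 0s with a few elevated items.
print(ssi_total([0]*10 + [1]*5 + [2]*4))  # 13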

“They tell us that suicide is the greatest piece of cowardice,” wrote Schopenhauer, “that suicide is wrong; when it is quite obvious that there is nothing in the world to which every man has a more unassailable title than to his own life and person.” Many people might criticize this sentiment as outrageously selfish, thinking of family situations of dependence on a suicidal individual. But at what point do these considerations become secondary to a rational self-destructive imperative? Such is the ethical ambiguity inherent in this issue.

~ Wm. Doe, Ph.D., January 2017

SELECTED REFERENCES

“Emile Durkheim on Suicide”. http://www2.uva2wise.edu/pww8y/Soc/-Theorists/Durkheim/Suicide.html

Lemoyne Snyder (1977). Homicide Investigation (3rd Ed.). Springfield, Ill.: Charles Thomas Publishers.

M. Sinyor (2017). Global trends in suicide epidemiology. Curr Opin Psychiatry, 30(1).

M. Pompili (2012). Do stroke patients have an increased risk of developing suicide ideation or dying by suicide? CNS Neurosci Ther, 18(9).

PhD Pauper Blog ~ DEPRESSION, MAJOR and MINOR

“Cognitive distortions are not an inevitable feature of depressive thinking nor unheard of among nondepressed people…[My theoretical writings] have sometimes implied a generality of depressive distortion and (implicitly) nondepressed accuracy that probably cannot be sustained.” – Aaron Beck, psychiatrist and founder of cognitive therapy

“The absence of comprehensive and reliable evidence for risks, perceived industrial interests of clinicians, as well as publication bias, which is well known to any author of systematic reviews, have in some quarters, eroded public faith in the drug treatment of depression and its regulation.” – K.P. Ebmeier et al (2006). “Recent developments and current controversies in depression”.  Lancet, vol. 367, Jan. 14

The holiday season seems an appropriate time to write a few words about depression, since many people have that experience around now (although it may be construed as “seasonal affective disorder”). There are numerous theories about depression, with an ongoing conflict between those championing the medical model and those favoring a more psychological approach. My view is that both have merit but that medical explanations have been oversold, especially in the period from the mid-1990s through most of the following decade, with an overemphasis on putative deficiencies of neurotransmitters, most notably serotonin. I’ll spare the reader the usual litany of DSM criteria for Major Depression, the most important being, after the mood itself, anhedonia. And of course, it is well known that the risk of suicide for those so diagnosed is high. A psychiatry textbook notes that “no single symptom is present in all depressive syndromes…[and] even a depressed mood is not universal.” (R. Waldinger (1997). Psychiatry for Medical Students, p. 103)

Dorothy Rowe, the British psychologist, is a good example of a theorist/clinician who rejects biological explanations for depression. Coming out of Personal Construct Theory, she believes that this particular form of mental distress develops when one’s world view, and by extension one’s sense of self, is upset. The depression is seen by Rowe as a maladaptive defense against such an attack on one’s “constructed” identity, leading to a sort of emotional paralysis. Her psychotherapeutic approach centers on applying perspectivism to the crisis and convincing the depressed person that numerous interpretations of such life events are possible.

The biological view, however, has a certain intellectual appeal, especially when propounded by a charming expert like Eric Kandel, Columbia University psychiatrist and Nobel laureate in Physiology or Medicine, who, surprisingly, also advocates psychoanalysis. (He got the prize for showing how short-term memories are chemically converted into long-term ones.) In a controversial 1998 paper entitled “A New Intellectual Framework for Psychiatry,” Kandel wrote (not addressing depression specifically, but psychopathology in general), “The future of psychoanalysis, if it is to have a future, is in the context of an empirical psychology, abetted by imaging techniques, neuroanatomical methods, and human genetics. Embedded in the sciences of human cognition, the ideas of psychoanalysis can be tested, and it is here that these ideas can have their greatest impact.” Thus, the therapeutic future belongs to cognitive neuroscience.

Then there is the school of Depressive Realism which has it that at least some depressed people, usually those with milder cases, may not be distorting anything, but may actually be more accurate in their assessments than non-depressed types who are operating under ‘positive illusions’. In 1979, researchers Lauren Alloy and Lyn Abramson published a key study on depressive and nondepressive perception, which opened up a can of nematodes vis-a-vis this concept of “cognitive distortion” of reality. Contrary to the conventional wisdom of the time that mentally healthy individuals should be accurate in their assessment of reality, the findings of Alloy and Abramson suggested just the opposite: that nondepressed subjects overestimated their degree of control, and depressed subjects exhibited accuracy across all experiments. Anticipating later work on so-called “positive illusions”, the authors summarize similar findings by stating that “taken together, these studies suggest that at times depressed people are ‘sadder but wiser’ than nondepressed people. Nondepressed people succumb to cognitive illusions that enable them to see both themselves and their environment with a rosy glow. A crucial question is whether depression itself leads people to be ‘realistic’ or whether realistic people are more vulnerable to depression than other people.”
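
The judgment-of-control experiments behind this finding have a simple objective yardstick. In the contingency-learning literature, actual control is standardly indexed by ΔP = P(outcome | response) − P(outcome | no response). Here is a sketch of that computation in Python, with invented trial data:

def delta_p(trials):
    """Objective contingency Delta-P: P(outcome | response) minus
    P(outcome | no response). `trials` is a list of (responded, outcome)
    boolean pairs. Invented data, for illustration only."""
    with_resp = [outcome for responded, outcome in trials if responded]
    without_resp = [outcome for responded, outcome in trials if not responded]
    return sum(with_resp) / len(with_resp) - sum(without_resp) / len(without_resp)

# A noncontingent schedule: the light comes on 75% of the time
# whether or not the subject presses the button.
trials = ([(True, True)] * 75 + [(True, False)] * 25 +
          [(False, True)] * 75 + [(False, False)] * 25)
print(delta_p(trials))  # 0.0 -- zero actual control

In schedules like this one, ΔP is zero, and the depressed subjects’ judgments of control tracked it; it was the nondepressed subjects who inflated their estimates.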

Depression can become an outright psychosis, but this is quite rare (less than 1% of depressed people). It may also take the form, according to some experts (e.g., Hagop Akiskal and Theodore Millon), of a personality disorder. (Millon has posited colorful subtypes: ill-humored (passive-aggressive), voguish (histrionic/narcissistic), self-derogating (dependent), morbid (masochistic), and restive (avoidant). This last is the subtype “most likely to commit suicide in order to avoid all the despair in life.” It should be noted, though, that not all those with a depressive disorder will conform to a subtype description.) Or, it can be less severe but more chronic, as with Persistent Depressive Disorder (formerly called Dysthymia). Advocates of psychopharmacology think antidepressants (usually selective serotonin re-uptake inhibitors) are, more often than not, beneficial for treatment of most forms of depression, especially Major. Fans of Peter (“Listening to Prozac”) Kramer will sometimes rage at critics of the drugs, saying that SSRIs “saved my life”. The critics, like Peter Breggin and David Healy, will point to the evidence linking these drugs to suicide. Last year, Healy wrote, “In the 1990s…no one knew if SSRIs raised or lowered serotonin levels; they still don’t know. There was no evidence that treatment corrected anything. The role of persuading people to restore their serotonin levels to ‘normal’ fell to the newly obligatory patient representatives and patient groups.” [Emphasis added] (In case you’re wondering, Dr. Healy is not against drug treatment. For severe depression, though, he prescribes tricyclic antidepressants instead.)

There are also physical treatments: ECT (electroconvulsive therapy), transcranial magnetic stimulation, and near-infrared (NIR) laser. The last seems especially promising. For example, a 2015 study at the Massachusetts General Hospital showed that patients receiving six sessions of NIR treatment had significantly decreased scores on the Hamilton Depression Rating Scale. The procedure was well tolerated, with no adverse effects.

Mild to moderate depressions may need no explanation at the biological level, but if severe cases do, what is the nature of the bodily derangement? Serotonin deficiency appears to be more an article of faith and Big Pharma propaganda than an empirically verifiable phenomenon. But there may be other possibilities (e.g., abnormal neurotransmission of chemicals not yet studied, raised cortisol, or, as noted, magnetic and/or radiant-energy factors). Among the possible take-homes, I will choose the importance of noting the limits of the science so far, and of separating it from commercially driven presumption.

Finally, here’s hoping you are spared a Blue Christmas.

~ Wm. Doe, Ph.D., December 20, 2016

SELECTED REFERENCES

P. Cassano et al (2015). Near-infrared transcranial radiation for major depressive disorders. Psychiatry Journal. Article ID 352979.

R.A. Friedman & A.C. Leon (2007). Expanding the black box – Depression, antidepressants, and the risk of suicide. NEJM, 356: 2343-6.

D. Healy (2015). Serotonin and depression. BMJ 350: h1771.

J. Herbert (2013). Cortisol and depression. Psychol. Med. 43(3).