Homeless Scholar Blog ~ MEMORY: NORMAL and ABNORMAL

“The growth and maintenance of new synaptic terminals make memory persist…The ability to grow new synaptic connections as a result of experience appears to have been conserved throughout evolution…The same [molecular] switch is important for many forms of implicit memory in a variety of other species…from bees to mice to people.” – Eric Kandel

There are some who worry about their memory as they get older, viewing slips of function as a possible early sign of dementia. This is not to deny a degree of cognitive decline as part of normal aging, but there are forgetful young people, too, and memory slips in older people are often just trivial lapses of attention. So, before getting into the neuropathology, a few words about normal memory.

The hippocampus, located deep in the medial temporal lobes of the brain (within the limbic system), is generally cited as the area most associated with the formation of new memories, as well as with spatial cognition. Its shape has been likened to that of a seahorse, and there is one on each side of the brain. Short-term memory (STM) lasts only about 18 seconds (barring rehearsal), and it is sometimes a challenge to get such information into relatively permanent storage. It is to be distinguished from working memory, a concept referring to the structures and processes involved in a sort of active attention (as opposed to the passive holding capacity of STM). Long-term potentiation refers to a relatively long-lived increase in synaptic strength, the biological correlate of long-term memory (LTM). STM involves the alteration of pre-existing proteins, whereas LTM involves actual protein synthesis. Memories are not always straightforward, and there is a considerable amount of “creative restructuring” at work, a process called “reconsolidation”. (Those who are impressed by their “flashbulb” memories sometimes doubt the force of reconsolidation, but the accuracy of such memories has been questioned by researchers: vividness does not necessarily equal accuracy. In this connection, it is worth mentioning psychologist Elizabeth Loftus, whose research questioning the reliability of “recovered” memories of some individuals charging childhood sexual abuse is an ongoing source of controversy.)

In the early 1950s, the Canadian neurosurgeon Wilder Penfield published the findings of his operations on patients with intractable epilepsy, which had a major impact on this field. In attempting to distinguish normal from abnormal brain tissue in conscious patients under local anesthesia, Penfield stimulated various areas experimentally, and in doing so sometimes elicited startling responses, as when patients stated they were experiencing vivid, long-forgotten memories from their distant pasts. Brenda Milner, a neuropsychologist working with him, was contacted by another surgeon, William Scoville, concerning an epilepsy patient whose capacity for forming new memories had been destroyed (as happened with some of Penfield’s patients). This was Henry Molaison, better known as H.M. When Milner first met Molaison, she encountered a man who could not retain a memory of anything for more than 30 seconds. Each day, he had to be re-introduced to her as if for the first time, although his recollection of most of his past was intact. She wondered what he was still capable of and gave him various tests. In one, tracing star shapes over several tries while viewing his hand only in a mirror, he improved his skill at the task, doing as well as normal subjects. This led to her realization that a different type of memory, with neural substrates located away from the hippocampus, was at work: implicit, or procedural, memory, an unconscious form that appears to be controlled by the basal ganglia and cerebellum. The psychiatrist-turned-neuroscientist Eric Kandel said that Milner’s study of H.M. “stands as one of the great milestones in the history of modern neuroscience. It opened the way for the study of the two memory systems in the brain, explicit and implicit, and provided the basis for everything that came later–the study of human memory and its disorders.”

Dr. Kandel identified the molecular machinery of memory formation using the giant marine snail Aplysia as a model, focusing on some of its reflexes. This interesting creature has extra-large, actually visible neurons, which are easily manipulated. Despite the vast difference between Homo sapiens and invertebrates, the basic synaptic machinery of memory has been conserved throughout evolution, as later research indicated. Kandel showed that while short-term memories involve modifications of existing proteins, long-term memories require the synthesis of new proteins that alter the shape and function of the synapse, the effect of which is to allow more neurotransmitter to be released. The key protein that converts STM into LTM is known as CREB, short for “cAMP response element-binding” protein. (Some may remember cAMP from BIO 101 (or the equivalent); it stands for cyclic adenosine monophosphate, a chemical messenger important in many biological processes.) For this work, he shared the 2000 Nobel Prize in Physiology or Medicine.

A note on post-traumatic stress disorder: The mechanisms proposed to explain how PTSD can follow trauma (especially trauma involving brain injury) include fear conditioning, memory reconstruction, and post-amnesia resolution. The fear conditioning model posits extreme autonomic arousal at the time of a traumatic event, resulting in the release of stress neurochemicals that cause an overconsolidation of trauma memories. In the memory reconstruction model, the patient synthesizes traumatic memories from available information, and these images may change over time. Finally, the post-amnesia resolution model states that the psychological impact of the trauma is delayed until after the amnesia directly following the event. Misattribution of PTSD symptoms to strictly neurological effects can be counter-therapeutic in that it unnecessarily reduces people’s expectations for recovery.

Memory loss can result from stress, psychological problems, depression, or various medical conditions, including thyroid disease, diabetes, hypertension, vitamin deficiencies, arteriosclerosis, and stroke. Then, of course, there are the neurological disorders: dementia (vascular, frontotemporal, Lewy body, Alzheimer’s disease), progressive supranuclear palsy, Parkinson’s disease, Huntington’s disease, and corticobasal degeneration.

Dementia in general is characterized by memory deterioration and an increasing inability to manage the personal affairs of daily life. Personality changes are the norm, with psychotic symptoms often developing later. Early in Alzheimer’s, the memory impairment and other cognitive deficits tend to be mild compared to the changes in behavior. Those afflicted may do things completely out of character and yet have little understanding that they have acted in a socially inappropriate manner. After years of decline, the outcomes are weight loss, incoherence, and inability to walk or dress without assistance. Vacuoles (microscopic holes), amyloid plaques, and neurofibrillary tangles are evident in the brain tissue upon autopsy.

Drug therapy for Alzheimer’s falls into two basic categories: cholinesterase inhibitors (e.g., Aricept) and a glutamate regulator (memantine). These medications are sometimes used together, but they do not treat the underlying disease; they merely help mask the symptoms. Cognitive rehabilitation for memory deficits has met with only limited success. According to a 2016 Cochrane review (drawing on searches of 16 databases), stroke patients reported benefits on subjective short-term measures, but these did not persist, and no benefits were found on objective memory measures, mood, or daily functioning. (However, at least some of these findings may reflect the poor methodological quality of the included studies.)

To end on a philosophical note, it is worth mentioning the relation of memory to identity. The memory criterion for personal identity (i.e., continuity of self through time) has been criticized as uninformative, since one can by definition remember only one’s own experiences and no one else’s. To meet this objection, some scholars have introduced the notion of “quasi-memory” (with the implication of personal identity removed), but this has not been established empirically. There are a number of other related philosophical problems, but on the phenomenological level of a person suffering from dementia, brain injury or other pathology involving loss of memory, there is often a reported sense of some loss of identity, and those patients may also need help coping with the emotional stress of such neurological self-alienation.

~ Wm. Doe, Ph.D. – May, 2017

 

The Homeless Scholar Blog ~ MEDICAL IMAGING: A NOTE on the PHYSICS

“I should like here to outline the method for electrons and protons, showing how one deduces the spin properties of the electron and then how one can infer the existence of positrons with similar spin properties and with the possibility of being annihilated in collisions with electrons.” – Paul Dirac, Nobel lecture, 1933

It may seem a bit absurd for me to discuss the physics of even one of these technologies, since whole tomes have been devoted to that aspect of each of them, but this is only meant to be a sketched appreciation of the historical background, and there will be no math beyond a reference to the conceptual. The focus is on MRI (magnetic resonance imaging), CT/CAT (computed tomography), and PET (positron emission tomography). The term “tomography” refers to imaging by sections, cyber-slicing, so to speak, through the use of a penetrating wave of some kind. When clinical assessment and X-rays are insufficient to diagnose an illness or injury, physicians will resort to more high-powered methods, such as the ones just listed, although often an ultrasound will suffice (for example, when pancreatitis or even pancreatic cancer is suspected). Overuse of the radiation-based scans is a problem, often tied to “defensive medicine”, one which I will return to at the end. The indications for CT and MRI overlap (especially for cancer detection), but on the whole, the former is used to visualize bone (there being virtually no water in bone), and the latter for soft-tissue evaluation. If visualization of metabolic processes seems necessary as a follow-up, PET is sometimes subsequently ordered.

Nikola Tesla discovered the rotating magnetic field way back in 1882, but the specific phenomenon on which MRI is based, nuclear magnetic resonance (NMR), was not discovered until 1937, when Isidor I. Rabi, a physicist at the Pupin Physics Laboratory in NYC, performed experiments demonstrating the resonant interaction of magnetic fields and radiofrequency pulses. (“Resonance” in physics refers to a matching of frequencies; in this case, that of the radio waves and the oscillations of the protons in water molecules.) The idea for NMR actually came from an obscure Dutch physicist named Cornelis Gorter the previous year, but he could not demonstrate it experimentally due to a limited setup. Rabi, with his superior technological resources, was able to detect magnetic resonance in a “molecular beam” of lithium chloride. In practical terms, this meant that the structure of chemical compounds could now be identified spectroscopically. Several years later, Bloch and Purcell independently and nearly simultaneously demonstrated NMR in condensed matter (water and paraffin, respectively).

When a patient is placed in an MRI machine, a powerful magnet pulls the positively charged protons of the body’s water molecules into alignment, after which a radio-wave pulse of the same frequency as the particles’ oscillation knocks them askew. When the radiofrequency pulse is turned off, the protons relax and return to alignment, sending back a signal carrying information about the structure of which they are a part. This raw signal is recorded in a frequency-domain array called k-space, which looks like a diffuse, amorphous image. To convert it into a coherent picture, the data must be transformed into image space by a computer algorithm called the fast Fourier transform (Liney, 2010). The result is the remarkably detailed pictures of the brain and other bodily organs we are used to seeing in reproductions. (The Fourier transform decomposes a signal in TIME into sinusoidal components of varying FREQUENCY; thus it uses mathematics to simplify physical phenomena for technological applications.) A chemist and a physicist (Paul Lauterbur and Peter Mansfield) were given the Nobel Prize for the invention of MRI, but it was a physician, Raymond Damadian, who actually built the first NMR body scanner, a machine that can be viewed in the Smithsonian.
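To make the Fourier step a little more concrete, here is a minimal sketch in Python (assuming NumPy is available; the tiny “phantom” image is my own invention, not real scanner data): the simulated k-space looks diffuse and structureless, yet an inverse FFT recovers a coherent picture from it.

```python
import numpy as np

# Toy 64x64 "phantom": a bright square on a dark background (hypothetical data).
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0

# An MR scanner effectively samples the image's spatial-frequency content,
# so we simulate k-space by taking the 2D Fourier transform of the phantom.
k_space = np.fft.fftshift(np.fft.fft2(phantom))

# The raw k-space magnitude is a diffuse, amorphous spread of values...
print("central k-space magnitudes:", np.round(np.abs(k_space)[32, 30:35], 1))

# ...but the inverse FFT turns it back into the original, coherent image.
reconstruction = np.abs(np.fft.ifft2(np.fft.ifftshift(k_space)))
print("max reconstruction error:", float(np.abs(reconstruction - phantom).max()))
```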

The origins of the CT scan go back to 1917, when the Austrian mathematician Johann Radon, working on a problem in gravitational theory, showed mathematically that an object can be reconstructed from the infinite set of all its projections. This became known as the Radon transform, which was first applied in radioastronomy. Later, it was applied to image reconstruction from radiological information based on the concept of the “line integral”, which refers to the sum of absorptions or attenuations along the path of the X-ray beam as it loses intensity. In a CT scan, a rotating tube sends X-rays through the patient to a detector on the other side. The exit beam is integrated along the line between the X-ray source and the detector. Measurement of the intensity involves a linear attenuation coefficient as a function of location and effective energy. The basic measurement of CT is, then, the line integral of the linear attenuation coefficient. Without knowledge of Radon’s work, Allan Cormack, a South African physicist, developed the equations necessary for image reconstruction, the process in which X-ray projections of a sample taken at a variety of angles are combined, using the computer, to reconstruct the image being viewed. Godfrey Hounsfield, a British engineer, designed the apparatus that would permit such reconstruction in practice. Cormack and Hounsfield jointly received the Nobel Prize for their work.
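The “line integral” can also be illustrated in a few lines of code. The sketch below uses a made-up 4×4 grid of attenuation coefficients and the Beer-Lambert relation; it illustrates the concept only, not any real scanner geometry or reconstruction algorithm.

```python
import numpy as np

# Hypothetical 4x4 map of linear attenuation coefficients mu (per cm), 1-cm voxels.
mu = np.array([
    [0.00, 0.00, 0.00, 0.00],
    [0.00, 0.20, 0.20, 0.00],
    [0.00, 0.20, 0.50, 0.00],   # a denser, bone-like inclusion
    [0.00, 0.00, 0.00, 0.00],
])
voxel_cm = 1.0
I0 = 1.0  # incident beam intensity, arbitrary units

# A horizontal beam through each row: the measured quantity is the line integral
# of mu along the path, and the exit intensity follows the Beer-Lambert law.
line_integrals = mu.sum(axis=1) * voxel_cm
exit_intensity = I0 * np.exp(-line_integrals)

for row, (p, i) in enumerate(zip(line_integrals, exit_intensity)):
    print(f"row {row}: line integral = {p:.2f}, exit intensity = {i:.3f}")

# CT reconstruction (e.g., filtered back-projection) inverts many such projections,
# taken at many angles, to recover the mu map itself.
```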

As mentioned, PET scans are sometimes ordered to image functional processes as opposed to structure. The most common use is to detect and track possible cancer metastasis. In this technology, radioisotope-labeled analogs of sugar are injected and followed by the machine. These isotopes decay, freeing positrons, which collide with the electrons of the tissues of interest, whereupon both are annihilated (the positron being the electron’s antimatter counterpart). The collision releases gamma rays, which are detected and tomographically processed to show functioning at the metabolic level. “From our theoretical picture,” wrote Dirac in 1933, “we should expect an ordinary electron, with positive energy, to be able to drop into a hole and fill up this hole, the energy being liberated in the form of electromagnetic radiation. This would mean a process in which an electron and a positron annihilate each other.” PET scans can be traced directly back to this antimatter prediction of the famous British quantum physicist, first stated by him in 1928. Only four years later, Carl David Anderson of Caltech discovered the positron by studying the tracks of cosmic-ray particles in a cloud chamber. (I say “only” because, for example, it took some twenty years for experimental validation of James Clerk Maxwell’s prediction of electromagnetic radiation. This was done by Heinrich Hertz who, in 1887, discovered radio waves. Interestingly, he believed that they were “of no use whatever.”)
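As a back-of-the-envelope aside (the constants below are standard physical values, not figures taken from the sources above), the energy of each annihilation photon follows directly from E = mc² applied to the electron’s rest mass, which is why PET detectors are tuned to gamma rays of roughly 511 keV:

```python
# Rest-mass energy of the electron (and positron), the source of the PET signal.
electron_mass_kg = 9.109e-31   # kg
speed_of_light = 2.998e8       # m/s
joules_per_keV = 1.602e-16

# E = m * c^2; the annihilation yields two photons, one per particle rest mass.
photon_energy_keV = electron_mass_kg * speed_of_light**2 / joules_per_keV
print(f"each annihilation photon carries about {photon_energy_keV:.0f} keV")
```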

There are eight parameters related to the radiation dose of CT scans, the three most important being tube current-time product (mAs), peak kilovoltage (kVp), and pitch, the ratio of table travel per gantry rotation to the total collimated width of the radiation beam. Units of measurement for radiation can be confusing because the older units persist alongside the SI units. Sieverts and grays represent very large amounts of radiation in clinical terms, so in radiology journals you will generally see the unit “mSv” for millisievert or “mGy” for milligray. The latter refers to the dose absorbed by a given tissue; the former, to the biological impact of that absorption. Sometimes the older units “rem” and “rad” (expressing the same kinds of information as sieverts and grays) are still used. A typical chest X-ray is only 0.02 to 0.1 mSv, whereas a CT scan of the abdomen is usually 8-10 mSv (not counting second passes). Yet, even today, you will hear some medical personnel claim that a CT scan “is like getting a few chest X-rays.” I have heard this myself.
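Taking the figures just quoted at face value, a little arithmetic shows how misleading the “few chest X-rays” line is. (The dose ranges below are simply the ones cited above, not measurements from any particular machine.)

```python
chest_xray_mSv = (0.02, 0.1)     # typical chest X-ray effective dose, as cited above
abdominal_ct_mSv = (8.0, 10.0)   # typical abdominal CT effective dose, as cited above

low = abdominal_ct_mSv[0] / chest_xray_mSv[1]    # most charitable comparison
high = abdominal_ct_mSv[1] / chest_xray_mSv[0]   # least charitable comparison
print(f"one abdominal CT ~ {low:.0f} to {high:.0f} chest X-rays")   # roughly 80 to 500

# Unit conversions between SI and older units: 1 Sv = 100 rem, 1 Gy = 100 rad.
print(f"10 mSv = {10 / 1000 * 100:.0f} rem")   # 1 rem
```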

Ionizing radiation is classified as a carcinogen by the FDA, and children are the most vulnerable, due to their size and remaining life span. (However, the risk for middle-aged patients may be twice as high as previously estimated (Shuryak & Brenner, 2010).) A 2013 study estimated that the roughly four million pediatric CT scans performed nationally each year will cause 4,870 future cancers. The researchers claim that over 40% of these cancers could be prevented by reducing the highest 25% of doses to the median (Miglioretti et al., 2013). One hundred millisieverts (10 rem) is generally considered the upper limit of low-level radiation.
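For readers curious how a “cap the highest doses” estimate of that kind is arrived at, here is a purely illustrative sketch. The dose distribution is hypothetical and the linear no-threshold scaling is an assumption of the sketch; Miglioretti and colleagues worked from empirical dose data, so the number printed here should not be expected to match their figure.

```python
import numpy as np

rng = np.random.default_rng(0)
doses_mSv = rng.lognormal(mean=1.5, sigma=0.8, size=100_000)  # hypothetical scan doses

median_dose = np.median(doses_mSv)
cutoff = np.percentile(doses_mSv, 75)            # the "highest 25%" threshold

# Trim every dose above the 75th percentile down to the median dose.
trimmed = np.where(doses_mSv > cutoff, median_dose, doses_mSv)

# Under a linear no-threshold assumption, projected cancers scale with total dose,
# so the fractional dose reduction is also the fractional reduction in cancers.
reduction = 1.0 - trimmed.sum() / doses_mSv.sum()
print(f"projected reduction in radiation-attributable cancers: {reduction:.0%}")
```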

While the theme of this post is the physics of medical imaging, a related theme, as noted earlier, is that of defensive medicine, which has been defined as “departing from normal practice as a safeguard from litigation. It occurs when a medical practitioner performs treatment or procedures to avoid [the charge] of malpractice” (Sekhar & Vyas, 2013). Such practices can pose health risks to the patient as well as increase healthcare costs. In some situations, defensive medicine is justified, but all too often it bespeaks potentially hazardous ineptitude. In the context of medical imaging using the relatively high radiation of CT scans, the problem is obvious, especially for the developing bodies of pediatric patients. Radiation, again, is carcinogenic, and residents especially, unsure of their diagnostic ability, may order such tests inappropriately and frequently, in lieu of exercising clinical judgment. The U.S. could benefit from adopting European referral guidelines for imaging (part of the EURATOM Basic Safety Standards Directive), which require a space on the referral sheet for grading the pretest probability and documenting the urgency of the imaging request (Kainberger, 2017).

~ Wm. Doe, Ph.D. – April 20, 2017

SELECTED REFERENCES

P.A.M. Dirac (1933). Theory of Electrons and Positrons. Nobel lecture. https://www.nobelprize.org/nobel_prizes/physics/laureates/1933/dirac-lecture.pdf

G. Liney (2010). MRI from A to Z: A Definitive Guide for Medical Professionals. London: Springer-Verlag.

I. Shuryak, D.J. Brenner (2010). Cancer risk after radiation exposure in middle age. J. Natl Cancer Inst., Nov. 3: 102(21).

D.L. Miglioretti et al. (2013). The use of CT in pediatrics and associated radiation exposure. JAMA Pediatrics, Aug. 1; 167(8).

M.S. Sekhar, N. Vyas (2013). Defensive medicine: A bane to health care. Ann Med Health Sci Res., 3(2).

F. Kainberger (2017). Defensive medicine and overutilization of imaging–an issue of radiation protection. Wien Klin Wochenschr, 129.

 

Homeless Scholar Blog ~ UNDERSTANDING PAIN

“Few things a doctor does are more important than relieving pain…pain is soul-destroying. No patient should have to endure intense pain unnecessarily. The quality of mercy is essential to the practice of medicine; here, of all places, it should not be strained.” – Marcia Angell, M.D.

Pain is such a vast, important topic in medicine, physiology, and psychology that one hesitates to essay a brief, general treatment of the subject, but such an attempt follows anyway. The topic is complicated by self-reports with no apparent physiological basis, individual differences in the subjective experience of pain, and ethical restrictions on pain research. Apart from the acute/chronic distinction, which often uses 6 weeks as a demarcation but tends in practice to be rather arbitrary, types of pain are standardly categorized as nociceptive, neuropathic, and psychogenic. (Nociceptive pain includes somatic and visceral, sometimes a difficult distinction to make. In general, though, they can be distinguished: unlike somatic pain, visceral pain is vague, diffuse, and poorly defined and localized. Symptoms of the latter may include nausea, vomiting, and changes in vital signs, and it is usually perceived in the abdominal midline or chest.) Nociceptive pain is the most common type and is caused by the detection of noxious or potentially harmful stimuli by mechanical, chemical, or thermal pain receptors. The neuropathic type is associated with nerve damage following infection or injury, rather than stimulation of pain receptors. (Nerve injury leads to incorrect signals being sent to the brain, confusing it as to the location of the problem. An example would be phantom pain following limb amputation. Neuropathic pain is also thought to be involved in reactions to chemotherapy, alcohol, surgery, HIV infection, and shingles. With shingles, for example, a subtype of neuropathic pain called allodynia often comes into play; that is, pain provoked by simple contact that is not normally painful, such as clothes touching the skin. This is to be distinguished from hyperalgesia, an exaggerated sensitivity to stimuli that are normally only mildly painful. Less knowledgeable clinicians sometimes dismiss neuropathic pain as psychogenic.) The psychogenic type is caused, increased, or prolonged by psychological factors.

There are numerous theories of pain, none of which completely accounts for all aspects of pain perception. Most notable among them are specificity, intensity, pattern, and gate control. Specificity theory holds that specific pain receptors transmit signals to a “pain center” in the brain; while partly true, this does not account for psychological factors affecting pain perception. Pattern theory posits that somatic sense organs respond to a dynamic range of stimulus intensities, with a pattern of neuronal activity encoding information about the stimulus. Intensity theory denies distinct pathways, holding instead that the number of impulses determines the intensity. The gate control theory is the most widely accepted of the four. Proposed in 1965, it states that there is a control system in the dorsal horn of the spinal cord that determines which information passes on to the brain. This gating function is governed by the substantia gelatinosa (SG): when the gate is open, transmission cells fire and pain signals ascend; non-painful stimuli excite the SG to close the gate, resulting in a reduced pain signal. (There are some problems with the classic gate control theory, though. For example, it leaves out of the modulatory system model what we now know to be relevant descending small-fiber projections from the brainstem.) Recent neuroimaging data suggest that the brain’s pain-related functions may not be modular but are likely to involve networks. Moreover, in chronic pain, neuroplasticity may occur, altering network dynamics (Baliki, 2011).

Nociception is facilitated by both neurotransmitters (e.g., glutamate, serotonin) and neuropeptides, a class that comprises substance P, ACTH, somatostatin, and many others, including the enkephalins. Substance P is especially active in pain transmission (the “P” standing not for “pain” but for “preparation”). It is a peptide containing 11 amino acids that binds to so-called NK-1 receptors located on the nociceptive neurons of the dorsal horn of the spinal cord. Substance P also occurs in the brain, where, in addition to pain, it is associated with nausea, respiration rate, anxiety, and neurogenesis. Unlike glutamate, substance P has been associated with relatively slow excitatory transmission, and hence with the chronic pain sensations carried by unmyelinated fibers known as C-fibers. In addition, by initiating the expression of chemical messengers called cytokines, substance P may play a part in neurogenic inflammation as a response to certain types of infection or injury.

Because there may be a significant subjective component to chronic pain, much research has been devoted to the psychological aspects of the complaint. People have active or passive pain-coping styles based on implicit theories of pain. If the pain is seen as malleable, this reflects an incremental theory, associated with active coping strategies. Patients who feel helpless may catastrophize, amplifying the sense of pain and thereby making the situation worse than it needs to be. Research findings from this area, a group of medical psychologists write (Higgins et al., 2015), “may represent an underlying social-cognitive mechanism linked to important coping, emotional, and expressive reactions to chronic pain.”

“Pain catastrophizing [PC],” another group of researchers note (Quartana et al., 2009), “is characterized by the tendency to magnify the threat value of a pain stimulus and to feel helpless in the context of pain, and by a relative inability to inhibit pain-related thoughts in anticipation of, during, or following a painful encounter.” The assessment instruments employed are usually the Coping Strategies Questionnaire and the Pain Catastrophizing Scale. These authors have criticized the literature on the subject for poorly established validity and reliability. In addition to problems with assessment, there is the issue of the construct of PC itself, which often overlaps substantially with other concepts (e.g., pain-related anxiety and somatization). The authors advance an integrative model which “explicitly notes and emphasizes possible inter-relationships between theoretical mediators and moderators of catastrophizing’s effects on pain and pain-related outcomes, such as disability and social networks.” Relevant to this model will surely be carefully designed studies investigating not only fear and anxiety, but also the sense of helplessness in the wake of pain.

Returning to the opening thought, it should be added that the propensity in American medicine to withhold opioid drugs for pain is a cultural form of deleterious stupidity that should be overcome. The risk of addiction is markedly offset by the aggregate of suffering due to insufficient medication, one outcome of which may be suicide (Hassett & Ilgen, 2014).

~ Wm. Doe, Ph.D., March 20, 2017

SELECTED REFERENCES

M.N. Baliki et al (2011). The cortical rhythms of chronic back pain. J. Neurosci., 31.

N.C. Higgins et al (2015). Coping styles, pain expressiveness, and implicit theories of chronic pain. J. of Psychology, 149 (7).

P. J. Quartana et al (2009). Pain catastrophizing: a critical review. Expert Rev. Neurother., 9 (5).

A.L. Hassett, M.A. Ilgen (2014). The risk of suicide mortality in chronic pain patients. Curr. Pain Headache Rep., 18.

 

 

The Homeless Scholar Blog: NEURAL EFFICIENCY and the IDEA of INTELLIGENCE

“Ultimately, it would certainly be desirable to have an algorithm for the selection of an intelligence, such that any trained researcher could determine whether a candidate’s intelligence met the appropriate criteria. At present, however, it must be admitted that the selection (or rejection) of a candidate’s intelligence is reminiscent more of an artistic judgment than of a scientific assessment.” – Howard Gardner, educational psychologist

A while back, I wrote a post on general intelligence, noting that experts disagree as to what it actually is and emphasizing a broad conceptual analysis of the idea. Yet, for some reason, I kept having nagging doubts: “Why not keep this simple? An intelligent person is simply one with greater neural efficiency. Measure that factor correctly and you will have a quantification of this concept.” Perhaps a more efficient nervous system is more “fluid”, more adaptable, and thus more fit and functional in general. Raymond Cattell divided intelligence into two kinds, fluid and crystallized, the latter being our acquired knowledge and skills. The former is a more “basic” sort of intelligence: how we process information without relying on a storehouse of previously acquired knowledge, using instead abstract reasoning, pattern recognition, and general problem-solving ability. Thrown into an unfamiliar situation with a requirement to think fast, all that stuff we’ve picked up over the years might avail us little or nothing. As Cattell put it, “It is apparent that one of these powers…has the ‘fluid’ quality of being directable to almost any problem.” Fluid intelligence (FI) is measured using a non-verbal, multiple-choice picture-completion test called the Raven Progressive Matrices, which focuses on detecting relationships among images. (Also used are the Cattell Culture Fair Test and the performance subscale of the Wechsler Adult Intelligence Scale (WAIS).)

Peter Schonemann, who studied under Cattell, stated that the g factor (the letter standing for general/basic smarts) does not exist, and that those who emphasized it, like Arthur Jensen, were distorting the original findings of Charles Spearman, with the effect, intentional or not, of putting racial minorities at a disadvantage in terms of policy decisions based on pro-hereditarian research. The Canadian psychologist Keith Stanovich argues that IQ tests, or their proxies (e.g., the SAT), do not effectively measure cognitive functioning, because they fail to assess real-world skills of judgment and decision-making. But he does not say the tests should be abandoned, just revised to encompass measurement of the aforementioned skills. Going back to the basic semantic question, the Australian educational psychologist R. W. Howard breaks intelligence down into three categories: Spearman’s g; a property of behavior; and a set of abilities. “Each concept,” he writes, “contains different information, refers to a different category, and should be used in different ways…A concept is never right or wrong, but is only more or less useful.” Matching a given use of the term “intelligence” with its appropriate category would eliminate much confusion. Furthermore, recent research on IQ testing suggests that at least part of what is being measured is motivation.

Brighter individuals display lower brain energy expenditure while performing cognitive tasks: this is the neural efficiency hypothesis in a nutshell. Such expenditure can be measured using PET scans, with cortical glucose metabolic rate serving as a correlate of abstract reasoning and attention. Treating network “reconfiguration” as a reflection of such expenditure, recent research has found that “brain network configuration at rest was always closer to a wide variety of task configurations in intelligent individuals” (Schultz and Cole, 2016), which suggests that the ability to modify network connectivity efficiently when task demands change is a hallmark of high intelligence. Dix et al. (2016) have noted that high fluid intelligence and learning are associated with fast, accurate analogical reasoning. Yet, “for low FI, initially low cortical activity points to mental overload in hard tasks [and that] learning-related activity increases might reflect an overcoming of mental overload.” Swanson and McMurran (2017) conclude from a randomized controlled study that “improvement in working memory, as well as the positive transfer in learning outcomes, are moderated by fluid intelligence.” On the other hand, Neubauer and Fink (2009) note that, as opposed to moderate-difficulty tasks, when the more able individuals have to deal with very complex ones, they will invest more cognitive resources. “It is not clear,” write the authors, “if this reversal of the brain activation-intelligence relationship is simply due to brighter individuals’ volitional decision to invest more effort as compared to the less able ones, who might have simply ‘given up’ as they experience that the task surpasses their ability.” They conclude that new study designs are necessary to explore this volitional factor of cortical effort.

While Howard Gardner’s multiple intelligence scheme seems to stretch the concept too far, a unitary notion centering on neural efficiency as an operational definition seems problematic due to limited explanatory force. Despite the compelling evidence for the NE hypothesis, the semantic issue remains. There is no fact of the matter which determines the “proper” use of the concept of intelligence. Nevertheless, in the light of more recent research (from neuroimaging, especially), the opponents of NE reductionism are clearly on the defensive today.

Wm. Doe, Ph.D., February, 2017

SELECTED REFERENCES

A. Dix et al (2016). The role of fluid intelligence in analogical reasoning: How to become neurally efficient? Neurobiology of Learning and Memory, 134B, 236-247.

D.H. Schultz, M.W. Cole (2016). Higher intelligence is associated with less task-related brain network reconfiguration. J. Neurosci., 36 (33).

H.L. Swanson, M. McMurran (2017). The impact of working memory on near and far transfer measures: Is it all about fluid intelligence? Child Neuropsychology, online pub 2/1/2017.

A. C. Neubauer, A. Fink (2009). Intelligence and neural efficiency. Neuroscience and Biobehavioral Reviews, 33; 1004-23.

PhD Pauper Blog ~ SOME DESULTORY NOTES on SUICIDE

“The thought that I might kill myself formed in my mind as coolly as a tree or flower.” – Sylvia Plath

Conflicting attitudes toward suicide can be traced at least as far back as ancient Greece, where convicted criminals could take their own lives; in Rome, attitudes became more restrictive because too many slaves (valuable property) were doing themselves in. In general, suicide is condemned by Christianity, Judaism, and Islam, but is tolerated by the Brahmans of India. Buddhist monks and nuns have, in social protest, set fire to themselves, and the Japanese hara-kiri and kamikaze customs were considered acceptable in that society. After the Middle Ages, legal codes in Western countries expressed a more permissive attitude. Physician-assisted suicide, though, continues to be more condemned than condoned.

With the empirical work of sociologists, especially Emile Durkheim, suicide, in the words of one academic, “was increasingly viewed as a social ill reflecting widespread alienation, anomie, and other attitudinal byproducts of modernity.” Thus, in many European countries in the late 19th and early 20th centuries, the act came to be thought of as “caused by impersonal or [social] forces rather than by the agency of individuals.” Writing in the 1890s, Durkheim listed four types of suicide: altruistic, fatalistic, egoistic, and anomic. The best known is the last, named for “anomie”, a sense of personal normlessness and disconnection from society. Economic decline may figure into this, especially for middle-aged male Protestants who have been laid off. Durkheim’s findings are generally considered still valid today, but psychologically oriented researchers tend to be critical of them. This points up the conflicting orientations of the two disciplines in attempting to understand the same social phenomena.

Nearly a million suicide deaths occur worldwide each year. Associated with increased suicide rates are the global financial crisis, natural disasters, and air pollution. Risk is increased at the individual level by past self-harm, parental loss or separation, and younger age relative to classmates. Korea’s suicide rate ranks first in the world, with the most favored method being hanging, followed distantly by falls and poisoning. Of the many medical conditions that confer risk for suicide, stroke is prominent; patients frequently develop a post-stroke depression. A recent study examining the association between stroke and subsequent risk for suicide and suicidal ideation notes that suffering a stroke increases the risk of dying by suicide and of developing suicidal ideation, particularly in young adults and women. Also, though it is a rare phenomenon, murder-suicide has been associated with a “pathological” expression of masculinity, i.e., rigid ideas of manhood fostered by a hypercompetitive, patriarchal society. A recent study of the phenomenon revealed three themes: domestic desperation, workplace injustice, and school retaliation. It is argued that murder-suicide is an extreme end-product of “failed manhood” at work, school, and/or within the family milieu. This is encapsulated by the term “aggrieved entitlement”.

An interesting aspect of this subject is the mischaracterization by police of murders as suicides, which is sometimes precisely what the deceased or a perpetrator intended. A homicide detective has published his view of the common mistakes in suicide investigation, which boil down to an automatic presumption of suicide whenever a case is initially reported as one. “All death inquiries should be conducted as homicide investigations until the facts prove differently.” Failure to apply three basic considerations can throw an investigation off track: the presence of the weapon or means of death at the scene; injuries or wounds that are obviously self-inflicted; and the existence of a motive or intent on the part of the victim to take his or her own life. Pertinent detail: “Family members have been known to conceal weapons and/or suicide notes in order to collect on an insurance policy.” Also of interest are his questions about the suicide note: “Was it written by the deceased? Was it written voluntarily? Does the note indicate suicidal intent?”

Regarding this distinction, Lemoyne Snyder, in his classic book on homicide investigation, has observed that “a murderer is more likely to fire several bullets into the victim to make sure that he is dead before leaving the scene. A suicide, on the other hand, frequently shoots himself but survives the fatal wound for a considerable period of time. Therefore, other factors being equal, a period of survival following the fatal wound favors suicide as the cause of death.” Elsewhere, he notes that “occasionally a person will shoot himself twice in [the temple]. These wounds are not always immediately fatal. It is not at all uncommon for a person to live for several hours or even days after a wound of this kind, and occasionally they will even recover. One should always look with great suspicion on wounds of entrance on other parts of the head, because they are much more likely to be due to murder.” [original emphasis]

“In the psychological sciences,” notes the Stanford Encyclopedia of Philosophy, “many suicidologists view suicide not as an either/or notion, but as a gradient one, admitting of degrees based on individuals’ beliefs, strength of intentions, and attitudes. The Scale for Suicidal Ideation is perhaps the best example of this approach.” There are 19 items in the scale, for example: “Wish to live; wish to die; reasons for living/dying; deterrents to active attempt; method (specificity or planning of contemplated attempt; availability or opportunity for contemplated attempt); sense of ‘capability’; final acts in anticipation of death,” and so on. Each item is rated on a scale of 0 to 2.
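To make the scoring scheme concrete, here is a minimal sketch; the item names and ratings are hypothetical shorthand for the items quoted above, and the real instrument is of course administered and interpreted by a clinician, not by a script.

```python
# Hypothetical ratings for a handful of the 19 items, each scored 0, 1, or 2.
ratings = {
    "wish_to_live": 1,
    "wish_to_die": 2,
    "deterrents_to_active_attempt": 0,
    "final_acts_in_anticipation_of_death": 0,
    # ...the remaining items would follow the same 0-2 pattern
}

for item, score in ratings.items():
    if score not in (0, 1, 2):
        raise ValueError(f"{item}: each item must be rated 0, 1, or 2")

total = sum(ratings.values())   # the full 19-item scale ranges from 0 to 38
print(f"partial total across {len(ratings)} items: {total}")
```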

“They tell us that suicide is the greatest piece of cowardice,” wrote Schopenhauer, “that suicide is wrong; when it is quite obvious that there is nothing in the world to which every man has a more unassailable title than to his own life and person.” Many people might criticize this sentiment as outrageously selfish, thinking of family situations of dependence on a suicidal individual. But at what point do these considerations become secondary to a rational self-destructive imperative? Such is the ethical ambiguity inherent in this issue.

~ Wm. Doe, Ph.D. – January 2017

SELECTED REFERENCES

“Emile Durkheim on Suicide”. http://www2.uva2wise.edu/pww8y/Soc/-Theorists/Durkheim/Suicide.html

Lemoyne Snyder (1977). Homicide Investigation (3rd Ed.). Springfield, Ill.: Charles Thomas Publishers.

M. Sinyor (2017). Global trends in suicide epidemiology. Curr Opin Psychiatry; 30(1).

M. Pompili (2012). Do stroke patients have an increased risk of developing suicide ideation or dying by suicide? CNS Neurosci Ther, 18(9).

PhD Pauper Blog ~ DEPRESSION, MAJOR and MINOR

“Cognitive distortions are not an inevitable feature of depressive thinking nor unheard of among nondepressed people…[My theoretical writings] have sometimes implied a generality of depressive distortion and (implicitly) nondepressed accuracy that probably cannot be sustained.” – Aaron Beck, psychiatrist and founder of cognitive therapy

“The absence of comprehensive and reliable evidence for risks, perceived industrial interests of clinicians, as well as publication bias, which is well known to any author of systematic reviews, have in some quarters, eroded public faith in the drug treatment of depression and its regulation.” – K.P. Ebmeier et al (2006). “Recent developments and current controversies in depression”.  Lancet, vol. 367, Jan. 14

The holiday season seems an appropriate time to write a few words about depression, since many people have that experience around now (although it may be construed as “seasonal affective disorder”). There are numerous theories about depression, with an ongoing conflict between those championing the medical model and those favoring a more psychological approach. My view is that both have merit but that medical explanations have been oversold, especially in the period from the mid-1990s through most of the following decade, with an overemphasis on putative deficiencies of neurotransmitters, most notably serotonin. I’ll spare the reader the usual litany of DSM criteria for Major Depression, the most important being, after the mood itself, anhedonia. And of course, it is well known that the risk of suicide for those so diagnosed is high. A psychiatry textbook notes that “no single symptom is present in all depressive syndromes…[and] even a depressed mood is not universal” (R. Waldinger (1997), Psychiatry for Medical Students, p. 103).

Dorothy Rowe, the British psychologist, is a good example of a theorist/clinician who rejects biological explanations for depression. Coming out of Personal Construct Theory, she believes that this particular form of mental distress develops when one’s world view, and by extension one’s sense of self, is upset. The depression is seen by Rowe as a maladaptive defense against such an attack on one’s “constructed” identity, leading to a sort of emotional paralysis. Her psychotherapeutic approach centered on applying perspectivism to the crisis and convincing the depressed person that numerous interpretations of such life events are possible.

The biological view, however, has a certain intellectual appeal, especially when propounded by a charming expert like Eric Kandel, Columbia University psychiatrist and Nobel laureate in Medicine, who, surprisingly, also advocates psychoanalysis. (He got the prize for showing how short-term memories are chemically converted into long-term ones.) In a controversial 1998 paper entitled “A New Intellectual Framework for Psychiatry,” Kandel wrote (not addressing depression specifically, but psychopathology in general), “The future of psychoanalysis, if it is to have a future, is in the context of an empirical psychology, abetted by imaging techniques, neuroanatomical methods, and human genetics. Embedded in the sciences of human cognition, the ideas of psychoanalysis can be tested, and it is here that these ideas can have their greatest impact.” Thus, the therapeutic future belongs to cognitive neuroscience.

Then there is the school of Depressive Realism which has it that at least some depressed people, usually those with milder cases, may not be distorting anything, but may actually be more accurate in their assessments than non-depressed types who are operating under ‘positive illusions’. In 1979, researchers Lauren Alloy and Lyn Abramson published a key study on depressive and nondepressive perception, which opened up a can of nematodes vis-a-vis this concept of “cognitive distortion” of reality. Contrary to the conventional wisdom of the time that mentally healthy individuals should be accurate in their assessment of reality, the findings of Alloy and Abramson suggested just the opposite: that nondepressed subjects overestimated their degree of control, and depressed subjects exhibited accuracy across all experiments. Anticipating later work on so-called “positive illusions”, the authors summarize similar findings by stating that “taken together, these studies suggest that at times depressed people are ‘sadder but wiser’ than nondepressed people. Nondepressed people succumb to cognitive illusions that enable them to see both themselves and their environment with a rosy glow. A crucial question is whether depression itself leads people to be ‘realistic’ or whether realistic people are more vulnerable to depression than other people.”
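To see what “overestimating one’s degree of control” means operationally, here is a schematic sketch of the contingency arithmetic behind judgment-of-control tasks of the kind Alloy and Abramson used. The trial counts are invented for illustration, and ΔP is simply the standard contingency index, not necessarily the exact statistic reported in their paper.

```python
def delta_p(hits_with_response, trials_with_response,
            hits_without_response, trials_without_response):
    """Objective contingency: P(outcome | response) - P(outcome | no response)."""
    return (hits_with_response / trials_with_response
            - hits_without_response / trials_without_response)

# A zero-contingency condition: the light comes on 75% of the time whether or not
# the button is pressed, so pressing confers no actual control.
objective_control = delta_p(30, 40, 30, 40)
print(f"objective contingency: {objective_control:.2f}")   # 0.00

# Nondepressed subjects in such conditions tended to report substantial control;
# depressed subjects' judgments tracked the objective value more closely.
```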

Depression can become an outright psychosis, but this is quite rare (less than 1% of depressed people). It may also take the form, according to some experts (e.g., Hagop Akiskal and Theodore Millon), of a personality disorder. (Millon has posited colorful subtypes: ill-humored (passive-aggressive), voguish (histrionic/narcissistic), self-derogating (dependent), morbid (masochistic), and restive (avoidant), this last being the subtype “most likely to commit suicide in order to avoid all the despair in life.” It should be noted, though, that not all those with a depressive disorder will conform to a subtype description.) Or, it can be less severe but more chronic, as with Persistent Depressive Disorder (formerly called Dysthymia). Advocates of psychopharmacology think antidepressants (usually selective serotonin re-uptake inhibitors) are, more often than not, beneficial for treatment of most forms of depression, especially Major. Fans of Peter (“Listening to Prozac”) Kramer will sometimes rage at critics of the drugs, saying that SSRIs “saved my life”. The critics, like Peter Breggin and David Healy, will point to the evidence linking these drugs to suicide. Last year, Healy wrote, “In the 1990s…no one knew if SSRIs raised or lowered serotonin levels; they still don’t know. There was no evidence that treatment corrected anything. The role of persuading people to restore their serotonin levels to ‘normal’ fell to the newly obligatory patient representatives and patient groups.” [Emphasis added] (In case you’re wondering, Dr. Healy is not anti-drug treatment. For severe depression, though, he prescribes tricyclic antidepressants instead.)

There are also physical treatments: ECT (electroconvulsive therapy), transcranial magnetic stimulation, and near-infrared (NIR) laser. The last seems especially promising. For example, a 2015 study at the Massachusetts General Hospital showed that patients receiving six sessions of NIR treatment had significantly decreased scores on the Hamilton Depression Rating Scale. The procedure was well tolerated, with no adverse effects.

Mild to moderate depressions may need no explanation on the biological level, but if severe cases do, what is the nature of the bodily derangement? Serotonin deficiency appears to be more an article of faith and Big Pharma propaganda than an empirically verifiable phenomenon. But there may be other possibilities (e.g., abnormal neurotransmission of chemicals not yet studied, raised cortisol or, as noted, magnetic and/or radiant energy factors). Among the possible take-homes, I will choose the importance of noting the limits of the science on this so far, and of separating it from commercially-driven presumption.

Finally, here’s hoping you are spared a Blue Christmas.

~ Wm. Doe, Ph.D., December 20, 2016

SELECTED REFERENCES

P. Cassano et al (2015). Near-infrared transcranial radiation for major depressive disorders. Psychiatry Journal. Article ID 352979.

R.A. Friedman & A.C. Leon (2007). Expanding the black box – Depression, antidepressants, and the risk of suicide. NEJM, 356: 2343-6.

D. Healy (2015). Serotonin and depression. BMJ 350: h1771.

J. Herbert (2013). Cortisol and depression. Psychol. Med. 43(3).

 

 

 

 

PhD Pauper Blog ~ INTERNAL CHAOS: The BORDERLINE PERSONALITY DISORDER

“Borderline Personality Disorder – dreadful/meaningless term we tried to change in DSM-IV, but no consensus substitute.” – Allen Frances, MD, head of the DSM-IV task force, Twitter, 30 March 2016

“Expressively spasmodic, interpersonally unpredictable, cognitively capricious” – this is a partial description of the aforementioned syndrome (BPD) by the prominent personality psychologist Theodore Millon, the first descriptor encompassing recurrent suicidal and self-mutilating behavior. According to the DSM-5, “The essential feature of BPD is a pervasive pattern of instability of interpersonal relationships, self-image, and affects, and marked impulsivity that begins by early adulthood and is present in a variety of contexts.” Among the current diagnostic criteria are inappropriate, intense anger; frantic efforts to avoid real or imagined abandonment; identity disturbance; chronic feelings of emptiness; and recurrent suicidal or self-mutilating behavior. The construct was originally proposed by Otto Kernberg in the 1960s and later developed by Robert Spitzer during the task-force planning for DSM-III in 1980, which caused some confusion, since the adjective “borderline” had often been applied to mild schizophrenia. The new denotation was meant to refer to a condition that is neither psychotic nor neurotic, although some clinicians have noted transient psychotic breaks in such patients. “The borderline patient,” wrote one psychiatrist in a textbook for medical students, “is easily overwhelmed by anger and frustration and generally acts impulsively when feelings become intolerable. This usually involves repeated self-destructive acts (e.g., wrist-slashing, overdoses, car crashes) and may also include drug abuse, sexual promiscuity, and abrupt changes in job and living situation” (Waldinger, 1997). It is estimated that 2% of adults in the U.S. are diagnosed with the disorder (NAMI, 2008).

A recent study has posited three subtypes of the syndrome: affect dysregulation; rejection sensitivity; and so-called mentalization failure (predictive of PTSD), which refers to an inability to make sense of one’s own and others’ mental states (Lewis & Caputi, 2012). Millon identified four subtypes: Discouraged (codependent, with anger close to the surface); Impulsive; Petulant (ambivalent, defiant, resentful, unpredictable); and Self-destructive. A factor analysis by Sanislow (2000) came up with three factors: again, affect dysregulation (affective instability, inappropriate anger, and efforts to avoid abandonment); disturbed relatedness; and behavioral dysregulation. As is evident from the above, BPD is rather controversial among experts. In a 2007 study, Rebekah Bradley and colleagues identified three problems with the diagnosis, centering on heterogeneity, its categorical quality, and overlap with other PDs. Regarding heterogeneity, the authors write, “Two patients may both be diagnosed with it while sharing only one symptom in common. This has important clinical implications because subtypes of BPD seem to exist that do not reflect random variation among criteria but rather meaningful, patterned heterogeneity, such as internalizing and externalizing subtypes.” As to the categorical point, personality research consistently supports dimensionality as a more reliable way to conceptualize personality disorders than treating them as discrete categories. Finally, the overlap between borderline and other PDs is so extensive that it tends to undermine the validity of the former as a clearly defined taxon.

The therapeutic modality most often associated with BPD is dialectical behavior therapy (DBT), a modification of cognitive-behavioral therapy developed by University of Washington psychologist Marsha Linehan. DBT includes components of biosocial theory, dialectical philosophy, and Zen Buddhist psychology. “The biosocial theory of BPD,” write Linehan and colleagues, “asserts the client’s emotional and behavioral dysregulation are elicited and reinforced by the transaction between an invalidating rearing environment and a biological tendency toward emotional vulnerability” (Lynch et al., 2007, p. 183). In dialectical terms, a behavior such as self-injury can be both functional and dysfunctional: functional as a short-term stress reducer, but dysfunctional because of the obvious negative effects, including suicide risk. The synthesis involves “validating the need to relieve distress while helping the client utilize skills that function to reduce stress.” This dialectical approach is deemed to be consonant with the philosophy of Zen. The core problem in BPD is thought to be affect dysregulation, and the goal of therapy is to increase functional behavior even in the presence of intense negative emotion. J.F. Clarkin of the Personality Disorders Institute at Cornell writes that “results from…naturalistic follow-up of patients in DBT showed variable maintenance of treatment effects and ongoing impairment of functioning in patients who initially experienced symptom relief” (Clarkin et al., 2004, p. 54).

Psychological treatment has been shown to be more effective than any pharmacological intervention, but concerns about cost have limited the use of psychotherapy. Also, the chronicity of the disorder demands studies of long-term effects, and it is uncertain that DBT, despite its successes, can deliver such effects as well as a psychodynamic approach, especially transference-focused psychotherapy (TFP), which is based on object relations theory, which, unlike the Freudian insistence on pleasure-seeking, holds that humans are primarily relationship-seeking. Object relations, according to psychologist Thomas Klee, refers to “the self-structure we internalize in early childhood which functions as a blueprint for establishing future relationships.” Rather than focusing on behavioral regulation per se, TFP focuses on these self-structures, which in BPD have been pathologically damaged in early development and require a sort of “mental archaeology” to resolve the self-destructive patterns engendered by traumatic relational internalizations.

As for a substitute name, a commenter on Dr. Frances’ Twitter thread suggested Complex Trauma Disorder, which is as good as any other I’ve heard.

~ Wm. Doe, PhD – November 21, 2016

SELECTED REFERENCES

T. Millon (1995). Disorders of Personality: DSM-IV and Beyond (New York: Wiley)

R. Bradley, C.Z. Conklin, D. Westen (2007). Borderline Personality Disorder. In Personality Disorders: Toward the DSM-V. (Thousand Oaks, CA: Sage Publications)

K.L. Lewis, P. Caputi (2012). Borderline personality disorder subtypes: A factor analysis of the DSM-IV criteria. Personality and Mental Health, 6(3).

C.A. Sanislow et al. (2000). Factor analysis of the DSM-III-R criteria for borderline personality disorder in psychiatric inpatients. Am. J. Psychiatry, 157(10).

T.R. Lynch et al (2007). Dialectical behavioral therapy for borderline personality disorder. Annu. Rev. Clin. Psychol; 3: 181-205.

J.F. Clarkin, O.F. Kernberg et al (2004). The Personality Disorders Institute randomized clinical trial for borderline personality disorder: rationale, methods, and patient characteristics. J. Pers. Disord. 18(1): 52-72.

PhD Pauper Blog ~ FEELING FAINT: DYSAUTONOMIA and DIAGNOSTIC COMPLEXITY

Fainting, or syncope, may or may not be serious. Sometimes idiopathic, its causes are numerous, including cardiac arrhythmia, hypoglycemia, orthostatic hypotension, and anemia, as well as stress, heat, pain, and dehydration. Fainting accounts for 6% of hospital admissions, but again, it can occur in otherwise healthy individuals. Lightheadedness per se is technically termed presyncope. Both fainting and feeling faint may also be signs of a disorder of the autonomic nervous system known as dysautonomia.

An umbrella term for a group of diseases, dysautonomia comprises such exotic-sounding phenomena as autonomic neuropathy, multiple system atrophy, autonomic failure, postural orthostatic tachycardia syndrome, and the most common, neurocardiogenic syncope. The autonomic nervous system (ANS) is part of the peripheral (as opposed to central) nervous system and regulates involuntary body functions: heart, glands, smooth muscle. The sympathetic division of the ANS speeds up heart rate, raises blood pressure, and constricts blood vessels, while the parasympathetic division is geared for rest and digestion, slowing the heart rate and increasing peristalsis. (Fun fact: the parasympathetic controls erection; the sympathetic, ejaculation.) A glance at a diagram of the ANS will show how extensive it is, subserving the eyes, salivary glands, trachea, and all organ-points south, including bladder and rectum. Naturally, an autonomic disorder will manifest with more than fainting. Some of the other symptoms are blurred, tunnel, or double vision; dry mouth; difficulty swallowing; rapid heart rate; impotence; incontinence; and constipation. Exercise intolerance may be neither a sign of early heart failure or COPD nor mere deconditioning, but rather an indication of some kind of autonomic dysfunction. Emphasis is often placed on the hereditary, so-called primary, form, but there is also an acquired, secondary dysautonomia, caused by many disorders, from Parkinson’s disease, diabetes, multiple sclerosis, and AIDS to Lyme disease, spinal cord injury, chronic alcohol misuse, and surgery or injury involving the nerves.

Some cases of vertigo have been linked to underlying autonomic dysfunction. A retrospective review of 113 patients showed that vertiginous individuals who failed to improve with standard treatment had their problem alleviated by an autonomic treatment regimen. Thus, the authors concluded, there is a subgroup of spontaneous vertigo patients who also demonstrate symptoms and findings consistent with poor autonomic regulation (Pappas, 2003). A later, smaller study validated the clinical diagnosis of autonomic dizziness as a cause of chronic, nonvertiginous dizziness that may be exacerbated by physical exertion or orthostatic challenges (Staab & Ruckenstein, 2007). Without thorough investigation, an autonomic etiology may be overlooked in some patients with dizziness, with or without syncope.

The following tests are used to evaluate syncope: of course, a thorough physical examination; ECG; heart monitors (e.g., the Holter monitor, which records continuously for 24-48 hours); echocardiography; stress test; tilt table test; electrophysiological study for arrhythmia; and, if the situation still seems uncertain, possibly an implantable cardiac monitor.
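To make the escalating order of that workup concrete, here is a minimal Python sketch. The sequence simply mirrors the list above; the `next_step` helper and the example call are hypothetical illustrations, not a clinical decision tool, since real evaluation is guided by history, examination findings, and risk stratification.

```python
# Illustrative sketch only: the escalating syncope workup described above,
# encoded as an ordered sequence. Not a clinical decision tool.

SYNCOPE_WORKUP = (
    "thorough physical examination",
    "ECG",
    "ambulatory heart monitor (e.g., Holter, 24-48 hours)",
    "echocardiography",
    "stress test",
    "tilt table test",
    "electrophysiological study (suspected arrhythmia)",
    "implantable cardiac monitor (if still uncertain)",
)

def next_step(steps_completed: int) -> str:
    """Return the next test in the sequence, given how many have been done."""
    if steps_completed >= len(SYNCOPE_WORKUP):
        return "workup exhausted; reassess or refer"
    return SYNCOPE_WORKUP[steps_completed]

# Hypothetical usage: after the exam and ECG, the next step is ambulatory monitoring.
print(next_step(2))  # "ambulatory heart monitor (e.g., Holter, 24-48 hours)"
```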

This article’s focus on a supposedly rare disorder is really, more generally, about the pitfalls of medical diagnosis. Dysautonomia is often taken for something else, undiagnosed or misdiagnosed, which, of course, leads to additional and preventable suffering for the patient. It is possible that the condition underlies controversial disorders such as irritable bowel syndrome, fibromyalgia, and chronic fatigue syndrome. “Horses before zebras” is a good general rule, but perhaps the zebras, in this context, need more attention than they are generally given.

~ Wm. Doe, PhD – October 29, 2016

SELECTED REFERENCES

D.S. Goldstein, G. Eisenhofer (2002). Dysautonomias: Clinical disorders of the autonomic nervous system. Annals of Internal Medicine, Nov. 5.

National Dysautonomia Research Foundation (n.d.). The Autonomic Nervous System. http://www.ndrf.org/ans.html

J.P. Staab, M.J. Ruckenstein (2007). Autonomic nervous system function in chronic dizziness. Otology & Neurotology, 28.

D.G. Pappas (2003). Autonomic related vertigo. Laryngoscope, 113 (10).

 

PhD Pauper Blog ~ INADEQUATE PERSONALITY and the CHANGING DSM

One of my favorite defunct psychiatric diagnoses is Inadequate Personality Disorder, which was characterized in the DSM-II by “ineffectual responses to emotional, intellectual, social, and physical demands despite apparent capabilities to perform in those areas.” It was removed in the controversial third edition of 1980 along with cyclothymic, explosive, and asthenic PDs. Replacements were schizotypal (formerly ‘borderline schizophrenic’), narcissistic, avoidant, dependent, and borderline (meaning, as a personality disorder, emotional instability with self-mutilating tendencies). The “diagnosis” of homosexuality had been dropped in 1973.

Its removal was probably a good thing, although a medical blogger has made an argument for IPD as a manifestation of “frontal lobe syndrome,” said to be a disturbance of the executive functions: the ability to inhibit, plan, organize, strategize, and maintain goals. Back in 1964, a psychiatrist named Edward Podolsky wrote an article linking the inadequate personality construct with alcoholism, listing 11 traits, among them “inadaptability, ineptness, poor judgment, lack of physical and emotional stamina, and social incapacity; a lack of persistence or continuity of effort or attachment; a low tenacity of purpose,” and so on. On this view, alcohol is used to counter intolerable feelings of inadequacy and inferiority and to engender some semblance of emotional stability. Although the Inadequate Personality diagnosis is archaic, a contemporary clinician could, of course, bring it in through the back door by employing the useful category of “Personality Disorder Not Otherwise Specified.”

The 3rd edition of the manual was controversial, indeed political, because of a feud between Task Force chief Robert Spitzer’s behavioral camp and supporters of the old guard, that being the psychodynamic/neo-Freudian approach. As Spitzer and his DSM-III task force saw it, “neurosis” was a superfluous, “ideological” concept, an article of dogma stemming from Freud’s misguided theorizing; scientific clarity and diagnostic accuracy demanded that it be removed from official guidelines. Mayes and Horwitz (2005) put it thus: “In revising the DSM [a group of primarily neo-Kraepelinians] transformed the little-used mental health manual into a biblical textbook specifically designed for scientific research, reimbursement compatibility, and, by default, psychopharmacology.” (The reference is to German psychiatrist Emil Kraepelin (1856-1926), considered by some to be the founder of psychopharmacology.) And in so doing, they also delegitimized the unconscious mental processes which are fundamental to psychoanalysis.

But was this reasonable? Theoretically, there are two aspects to the concept of a psychological unconscious: the cognitive and the psychodynamic. Spitzer would probably have permitted the more empirically grounded former, as it did not really interfere with his project. (For an overview of the “cognitive unconscious,” see Kihlstrom (1987) below.) But as the 20th century drew to a close, more evidence in favor of the existence of unconscious mental processes beyond the merely cognitive was being noted. As psychologist Drew Westen wrote in 1999, “The existence of unconscious cognitive processes is no longer controversial in psychology,” and furthermore, there is substantial experimental and clinical evidence for unconscious affective processes as well. Some of this research even supports a case for some Freudian ideas. For example, psychologist Howard Shevrin has been conducting neurophysiological testing of such mental activity for over half a century. In 2013, using “time frequency” event-related potential data, he published a major report on the subliminal inhibition of conscious symptoms in social phobia subjects. “These novel findings,” he wrote, “constitute neuroscientific evidence for the psychoanalytic concepts of unconscious conflict and repression.” In addition, Dutch psychiatrist Bessel van der Kolk’s studies of PTSD have demonstrated affective fragmentation that is inaccessible to cognition (thereby pointing up a serious limitation of cognitive therapy). Psychologist Glenn Shean has noted that a depressed patient may be responding at an affective level to inner somatic and outer stimuli that have not been contextualized or organized but represent unrecognized elements of previous experience. Shean believes that psychodynamic, rather than cognitive, therapy is appropriate for this population. In his book on emotional intelligence, Daniel Goleman summed up the developmental psychopathology this way: “Early childhood lessons” are stored in the amygdala as “rough, wordless blueprints for emotional life.”

In recent years, controversy has flared up again with the DSM, Allen Frances (task force head of the 4th edition) being quite public in his denunciation of the 5th revision for its medicalization of normality with bogus diagnoses which play into the greedy paws of Big Pharma. Among the harmful changes, he lists the expansion of Generalized Anxiety Disorder to include ordinary worrying; the restriction of the autism diagnosis, which might adversely affect school services; the pathologizing of temper tantrums as “Disruptive Mood Dysregulation Disorder”; an irresponsible conceptualization of adult attention deficit disorder, which encourages the questionable prescription of stimulants; and the treatment of first-time drug users as incipient addicts.

Actually, there probably is something to the concept of an inadequate personality, in the sense of repetitive, self-defeating, maladaptive behavior patterns, but in my readings, the psychiatrists have been silent about the macrosocial forces which may come into play here. Is it possible that at least some ineffectual individuals are having trouble coping with a hypercompetitive, irrational society, yet are not limited to the point of official disability? Should they be doomed to a life of poverty, even homelessness, if they cannot run as fast, perform as well, as those who have adapted to a highly questionable social system? A rational, humane society should provide for all of its citizens, regardless of their ability at any given time to keep a hypercapitalist machine humming along. People, as the saying goes, before profits.

~ Wm. Doe, PhD – September 2016

SELECTED REFERENCES

Susan Keller (2014). Inadequate personality disorder. http://www.alpf.medical.info/personality-disorders/inadequate-disorder.html

R. Mayes, A.V. Horwitz (2005). DSM-III and the revolution in the classification of mental illness. http://www.ncbi.nlm.nih.gov/pubmed/15981242

J.F. Kihlstrom (1987). The cognitive unconscious. http://www.ncbi.nlm.nih.gov/pubmed/3629249

Drew Westen (1999). The scientific status of unconscious processes. http://www.ncbi.nlm.nih.gov/pubmed/10650551

Bessel van der Kolk’s trauma website. http://besselvanderkolk.net/

Glenn Shean (2003). Is cognitive therapy consistent with what we know about emotions? http://www.ncbi.nlm.nih.gov/pubmed/12735528

Howard Shevrin (2013). Subliminal unconscious conflict alpha power. http://www.ncbi.nlm.nih.gov/pubmed/24046743

Allen Frances – DSM5 In Distress blog. http://www.psychologytoday.com/blog/dsm5-in-distress

PhD Pauper Blog ~ FRAILTY: PATHOGENESIS & METHODOLOGY

At first, it doesn’t seem that frailty would be controversial. It’s a common word that conjures up an image of a weak, fragile, usually elderly individual who is physically compromised due to advanced age, or a person of any age enfeebled by serious illness. “Frailty,” according to the Merck Manual of Geriatrics, “refers to a loss of physiologic reserve that makes a person susceptible to disability from minor stresses.” However, while physicians may agree conceptually on the character of this condition, there is disagreement about how to operationalize it, which has delayed its integration into the clinical setting. But before considering the methodological problem, it’s worth noting a few things about the pathogenesis.

Genetic/epigenetic and metabolic factors are potentially involved, as are environmental and lifestyle stressors and acute and chronic disease. Specifically, chronic inflammation in the form of circulating interleukin (IL)-6 has been identified as being associated with the condition in community-dwelling older adults. There is, in general, an inability to effectively regulate the body’s normal inflammatory response. Elevated C-reactive protein and tumor necrosis factor-alpha have also been detected in this population. In addition, there is a problem with processing glucose properly. Frail people secrete more cortisol, a hormone that over time, like chronic inflammation, can damage skeletal muscle and the immune system. Compromise of intermediary systems (musculoskeletal, endocrine, cardiovascular, and hematologic) leads to a frailty phenotype comprising weakness, weight loss, low activity, slow performance, and exhaustion, with the outcomes of falls, disability, dependency, and death.

Multiple frailty measurements exist, with varying levels of quality; there is no international standard. Among the instruments commonly used are Fried’s phenotype, Rockwood and Mitnitski’s frailty index, and the Multidimensional Prognostic Index. One group of researchers recently concluded that “the recognition of sarcopenia [skeletal muscle degeneration] as the biological substrate of physical frailty, allows framing an objectively measurable condition to be implemented in standard practice” (Landi, 2015). Elsewhere, it is concluded that “As the number of false positive values of most available tests is substantial, then frailty scores are of limited value for both screening and diagnostic purposes in daily practice” (Pijpers, 2012).
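For readers unfamiliar with how these instruments differ, the sketch below contrasts the two best-known approaches in Python: a Fried-style phenotype, which counts five clinical criteria (conventionally, three or more indicating frailty and one or two prefrailty), and a Rockwood and Mitnitski-style index, computed as the proportion of accumulated deficits out of those assessed. The criterion names and example values are hypothetical simplifications; the published instruments use standardized, population-specific cutoffs (e.g., for grip strength and gait speed) that are not modeled here.

```python
# Minimal sketch contrasting two frailty scoring styles.
# Criterion names and example values are simplified/hypothetical.

FRIED_CRITERIA = ("weakness", "slowness", "low_activity", "exhaustion", "weight_loss")

def fried_phenotype(findings: dict) -> str:
    """Count positive Fried criteria: >=3 -> 'frail', 1-2 -> 'prefrail', 0 -> 'robust'."""
    count = sum(bool(findings.get(c, False)) for c in FRIED_CRITERIA)
    if count >= 3:
        return "frail"
    return "prefrail" if count >= 1 else "robust"

def deficit_index(deficits_present: int, deficits_assessed: int) -> float:
    """Rockwood & Mitnitski-style frailty index: proportion of accumulated deficits."""
    if deficits_assessed <= 0:
        raise ValueError("at least one deficit must be assessed")
    return deficits_present / deficits_assessed

# Hypothetical example: exhaustion plus slow gait, and 12 of 40 assessed deficits.
print(fried_phenotype({"exhaustion": True, "slowness": True}))  # prefrail
print(round(deficit_index(12, 40), 2))                          # 0.3
```

The two designs make different trade-offs: the phenotype yields a categorical label from a handful of clinical findings, while the deficit index yields a continuous score that grows with the number of health problems counted.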

However, despite methodological limitations, research clearly shows that recognizing frailty in the primary care setting (and in the emergency room) aids in the management of people who are frail. Especially noteworthy is its role as an independent marker for worse outcomes following surgery: complications, length of stay, discharge issues, and mortality. Frailty is one of the leading causes of morbidity and premature mortality in older people, and failure to identify and note its presence may lead to a substandard level of care, including serious clinical misjudgments.

~ Wm. Doe, PhD – August 2016

SELECTED REFERENCES

X. Chen et al (2014). Frailty syndrome: an overview. Clin Interv Aging, 9.

F. Landi, R. Calvani, et al. (2015). Sarcopenia as the biological substrate of physical frailty. Clin Geriatric Med, 31.

E. Pijpers, D.A. Cohen, et al. (2012). The frailty dilemma: review of the predictive accuracy of major frailty scores. Eur J Internal Med, 23(2).

Merck Manual of Geriatrics, 3rd Ed. (2000). Whitehouse Station, N.J.: Merck Research Labs.