Homeless Scholar Blog ~ COGNITIVE BIASES

In my dissertation on workplace discrimination and perceived (as opposed to documented) disabilities, I included a passing reference to a couple of well-known cognitive biases, presented as heuristics, or mental shortcuts, for decision making. First, there is the “faulty representation,” in which any level of impairment is interpreted as indicative of disability; the prospect of a more realistic, nuanced spectrum of impairment is not considered. Second is the “availability heuristic,” an error of facile recall: ongoing observation of the impaired employee tends to amplify the impairment’s severity in the employer’s mind. However, there are a great many more such biases; in fact, a Wikipedia article lists over two hundred! What they all have in common is a “systematic pattern of deviation from norm or rationality in judgment.”

Furthermore, these biases are often implicit, i.e., unconscious, especially in the workplace. The judging person is not motivated by conscious, emotional prejudice but is making decisions automatically and unconsciously, guided by rules of thumb informed by stereotypical assumptions about the behavior of a given group of people, in this case, people with disabilities. This point is worth emphasizing because of the prevalent association of unconscious mental phenomena with emotionalism. The general consensus among behavioral scientists is that implicit phenomena are largely cognitive in nature. As often as not, the implicit biases operative in employer decision making are “innocent mistakes” reflecting no ill will, or “animus” in the legal sense. (However, they can still be legally actionable, as in the case of Taylor v. Pathmark (1999), which found an employer at fault for the effects of his decisions rather than for his stated intentions.)

The best-known contemporary research project involving unconscious mental processes is the Implicit Association Test (IAT), which can be taken online. This is a computerized assessment of stereotypical associations which, according to its advocates, can demonstrate implicit bias often at odds with one’s conscious beliefs. Differential response times to psychologically significant words and images are thought to indicate prejudice, with the data suggesting, for example, that most people have a slight preference for their own race, although critics argue that the test only reflects cultural familiarity. The debate over the validity of this test has been going on for years. Some of these critics published an extensive meta-analysis of the IAT’s predictive capacity and concluded that it did no better at predicting behavior than measures of explicit bias. They scored a point, but the game is hardly over.

Another way of assessing implicit bias, specifically workplace discrimination against disabled people, is through data analysis of decisions made by the U.S. Equal Employment Opportunity Commission (EEOC) on complaints against employers. In 2011, some colleagues and I published research in the Rehabilitation Counseling Bulletin, a peer-reviewed academic journal, which compared EEOC findings for claimants with documented disabilities and those with perceived disabilities (that is, those with only minor impairments which were prejudicially, and, I would argue, often unconsciously, exaggerated by the employer, who then labeled them as “disabled”). This was the basis of much hiring and firing discrimination. In terms of “legal realism,” discrimination (again, often unconscious) occurred in roughly 1 in 4 cases. Our key finding was that the merit resolution rate for perceived disability claims (i.e., “wins” for the plaintiff) proportionately exceeded that for documented disability claims by a statistically significant margin: 26.2% vs. 22.5%. (That is, at the time of this study, out of over 38,000 perceived disability allegations, over 10,100 were considered meritorious.) In another study, we found the same effect when the comparison was with claimants who only had a record of disability but were not currently disabled.
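
As a quick back-of-the-envelope check on those figures, the merit rate falls straight out of the allegation and resolution counts. The counts below are rounded figures consistent with the percentages quoted above, not the exact tallies from the study:

# Rough check of the merit-resolution rate quoted above.
# Counts are rounded to match the post's percentages, not the exact
# tallies from the Draper et al. (2011) dataset.
perceived_allegations = 38_550   # "over 38,000" perceived-disability allegations
perceived_meritorious = 10_100   # "over 10,100" resolved in the claimant's favor

merit_rate = perceived_meritorious / perceived_allegations
print(f"perceived-disability merit rate: {merit_rate:.1%}")   # -> 26.2%
print("documented-disability merit rate (reported): 22.5%")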

This is significant from a social psychological point of view. Information about these subgroups of workplace discrimination claims highlights not only the cultural force of stigma, but also the propensity to engage in unconscious, automatic judgments, which, while they may be free of animosity, can still have deleterious consequences for the workers affected by them. In general, these findings from our research lend support to the assertion that unconscious/implicit bias persists in the workplace.

Cognitive biases also loom large in medicine, where an estimated 70% of diagnostic errors are due to faulty reasoning. Common biases in the emergency department include aggregate bias, anchoring, availability, confirmation bias, triage cueing, diagnostic momentum, and premature closure, among others. (Premature closure, for instance, is a readiness to accept a diagnosis before it has been fully verified. Another common error is the availability heuristic, a tendency to judge things as more likely if they come readily to mind. Debiasing here would involve judging cases on their own merits rather than on the recency of one’s experiences.)

Such biases also occur in anesthesiology. In one case cited in the recent literature, a patient presented with extreme pain after a lumbar discectomy. He was given anesthesia followed by antibiotics, and at some point recalcitrant hypotension developed. A cardiac cause was presumed, since his profile included morbid obesity and a history of hypertension, diabetes, and other cardiac risk factors. Completely overlooked was anaphylaxis to the antibiotics, which was the actual cause of the problem. The authors opine that numerous biases were at play, including confirmation, framing, anchoring, and representativeness. “It illustrates,” they conclude, “how an entire team of anesthesiologists, nurses and surgeons could miss a seemingly ‘classic’ diagnosis, despite knowledge, skill, and good intentions.”

Many of the biases seen in the ER and in anesthesiology are also seen in medical imaging. For example, one article describes a 6-year-old girl who presented with a 10-day history of abdominal pain and mild fever. Based on an ultrasound, the interpreting radiologist, who had recently given a lecture on the imaging features of teratomas, diagnosed a teratoma. The patient’s symptoms worsened, and a subsequent CT of the pelvis led to a diagnosis of ruptured appendicitis with a pelvic abscess. Debiasing of such availability errors would involve using objective data on the base rate of disease, correlating it with one’s own rates of diagnosis, and creating a differential diagnosis. An especially interesting one to me is the alliterative bias, which “represents the influence one radiologist’s judgment can exert on the diagnostic thinking of another radiologist.” A proposed intervention is to “consider reviewing prior radiologist reports after rendering an interpretation, so as not to be influenced by the prior radiologist’s interpretation.” Another interesting one is the so-called regret bias, which refers to “overestimating the likelihood of a particular disease because of the undesirability of an adverse outcome from a failure to diagnose that disease.” Proposed intervention: “Development of…standardized reporting systems to objectively state the probability of certain disease processes based on the presence of an imaging finding.” (Perhaps this one should instead be called the “defensive medicine” bias?)

Diagnostic error rates have been estimated at between 5% and 15%, which doesn’t sound like much until one considers all the actual human beings who have been victimized thereby. To be fair, the root cause is multifactorial; still, a large proportion of such errors have cognitive components. A mnemonic/checklist has been devised in an attempt to lower the error rate: TWED. T stands for Threat: is there a life-or-limb threat, a worst-case scenario, that alternative diagnoses must be considered to rule out? W = Wrong: what if the working diagnosis is wrong? This counters overattachment to a particular diagnosis. E = Evidence (“Do I have sufficient Evidence for or against this diagnosis?”). Finally, D = Dispositional factors (“What are the environmental and emotional Dispositional factors influencing my decision?”).

Madva (2017) cites three standard objections to debiasing: those related to (1) empirical efficacy; (2) practical feasibility; and (3) the failure to appreciate the underlying structural-institutional nature of discrimination. He replies to all of these criticisms, but his responses are long and involved, so I will just summarize one, that of presumed practical unfeasibility. Research has shown that debiasing can occur after hundreds of separate “trials,” but what does that mean? A trial is a quite brief exposure to counterconditioning; hundreds of trials could be done on a PC with just “liking” posts on social media or playing a video game. Madva claims that working through the researched number of 480 trials takes only 45 minutes. A further criticism is that there are too many biases to deal with; in that case, they can be prioritized to fit the context and goal.
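
To put that figure in perspective (simple arithmetic on the numbers Madva cites, not a calculation from his paper), 480 trials in 45 minutes works out to

\[ \frac{45 \times 60\ \text{s}}{480\ \text{trials}} \approx 5.6\ \text{s per trial,} \]

i.e., each counterconditioning exposure is on the order of a few seconds.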

Evidently, though, the issue of debiasing is likely to remain controversial for some time.

~ Rylan Dray, Ph.D., November 2019

SELECTED REFERENCES

Draper, W.R. et al. (2011). Workplace discrimination and the perception of disability. Rehabilitation Counseling Bulletin, 55(1): 29-37.

Larsen (2008). Unconsciously regarded as disabled. UCLA Law Review, 56: 451 ff.

Itri, J.N. & Patel, S.H. (2017). Heuristics and cognitive error in medical imaging. AJR, 210: 1097-1105.

Stiegler, M.P. & Dhillon, A. (2014). Decision-making errors in anesthesiology. International Anesthesiology Clinics, 52(1): 84-96.

Chew, K.S. et al. (2016). A portable mnemonic to facilitate checking for cognitive errors. BMC Research Notes, 9: 445.

Madva, A. (2017). Biased against debiasing: On the role of (institutionally sponsored) self-transformation in the struggle against prejudice. Ergo, 4(6).


Homeless Scholar Blog ~ MEMORY CONSOLIDATION

“The interval of a single night will greatly increase the strength of the memory…It is possible that the power of recollection undergoes a process of ripening and maturing during the time which intervenes…” – Quintilian (c. 35 – c. 100 A.D.)

“Long-term memory formation is a major function of sleep. Based on evidence from neurophysiological and behavioral studies mainly in humans and rodents, we consider the formation of long-term memory during sleep as an active systems consolidation process that is embedded in a process of global synaptic downscaling.” – J.G. Klinzing et al. (2019)

When a short-term memory becomes a long-term one, it is said to be consolidated. Memory consolidation is an active field of cognitive research, especially in the area of sleep studies.

Short-term memory (STM) lasts only from 15 to 30 seconds, and it is sometimes a challenge to get such information into relatively permanent storage. It is to be distinguished from working memory, a concept referring to structures and processes involved in a sort of active attention (as opposed to the passive capacity of STM). (Long-term potentiation refers to a relatively long-lived increase in synaptic strength, the biological correlate of long-term memory (LTM).) STM involves the alteration of pre-existing proteins, whereas LTM involves actual protein synthesis. Memories are not always straightforward, and there is a considerable amount of “creative restructuring” at work, a process called “reconsolidation.” (Those who are impressed by their “flashbulb” memories sometimes doubt the force of reconsolidation, but the accuracy of such memories has been questioned by some researchers; vividness does not necessarily equal accuracy. In connection with this, it is worth mentioning psychologist Elizabeth Loftus, whose research questioning the reliability of “recovered” memories of some individuals charging childhood sexual abuse is an ongoing source of controversy.)

Eric Kandel, a psychiatrist turned neuroscientist, identified the molecular machinery of memory formation using the giant marine snail Aplysia as a model and focusing on some of its reflexes. This interesting creature has extra-large, actually visible neurons, which are easily manipulated. Despite the vast difference between Homo sapiens and invertebrates, the basic synaptic function of memory has been conserved throughout evolution, as later research indicated. Kandel showed that while short-term memories involve certain changes in existing proteins, long-term memories require the creation of new proteins that alter the shape and function of the synapse, the effect of which is to allow more neurotransmitter to be released. The protein central to converting STM to LTM is known as CREB, which is short for “cAMP response element binding” protein. (Some may remember cAMP from BIO 101 (or the equivalent); it stands for cyclic adenosine monophosphate, a chemical messenger important in many biological processes.) For this work, Kandel shared the 2000 Nobel Prize in Physiology or Medicine.

Long-term memory formation is generally considered a major function of sleep. Klinzing et al. (2019) note the effects of hippocampal replay, brain oscillations that regulate widespread information flow and local synaptic plasticity, and “qualitative transformations of memories during synaptic consolidation resulting in abstracted, gist-like representations.” Rasch and Born (2013) highlight the active nature of the process, especially the role of slow-wave sleep. The memory involved may even be “nonneuronal, i.e., immunological memories, giving rise to the idea that the offline consolidation of memory during sleep represents a principle of long-term memory formation established in quite different physiological systems.”

A PubMed title search of “memory consolidation” and “sleep” yielded 286 hits, but at least one researcher has voiced dissent. Vertes (2004) denies the existence of “compelling” evidence supporting a relationship between sleep and memory consolidation. “One of the strongest arguments against a role for sleep in declarative memory,” he writes, “involves the demonstration that the marked suppression or elimination of REM sleep in subjects on antidepressant drugs or with brainstem lesions produces no detrimental effects on cognition.”

There are two forms of memory consolidation: synaptic and systems. In the former, molecular cascades trigger changes in gene expression and protein synthesis. When synaptic transmission is strengthened, the process is called long-term potentiation, as mentioned above, and it takes only minutes to hours. Systems consolidation is a very slow process that can take one to two decades. It involves the very gradual transfer of memories from the hippocampal region to the neocortex, where they can be more securely stored. Studies of retrograde amnesia provide evidence for systems consolidation: recent memories are compromised, but those from the more remote past are typically spared. The standard model of memory consolidation, which posits a time-limited function for the hippocampus, has been challenged since the 1990s by multiple trace theory, which holds that the hippocampus remains necessary for the retrieval of remote memories.

Finally, a clinical note: Whereas anterograde amnesia refers to the inability to form new memories, retrograde amnesia is a loss of access to information learned before injury or disease onset. In most cases, the two forms co-exist. Squire and Alvarez (1995) note that studies of retrograde amnesia “have led to the concept of memory consolidation, whereby medial temporal lobe structures direct the gradual establishment of memory representations in neocortex.” (As noted above, the ongoing significance of the hippocampus in this process has long been disputed.) Retrograde amnesia can be caused by traumatic brain injury, infections, surgery (e.g., the famous case of Henry Molaison), and nutritional deficiency. Regarding the latter, Korsakoff’s Syndrome, which is normally and correctly associated with chronic alcoholism, has a preceding cause, namely a thiamine deficiency. This is thought to be due to a rare inborn error of metabolism involving the transketolase enzyme, which fails to properly process vitamin B-1. Under normal circumstances, this presents no problem, but in certain unlucky individuals a tipping point in their addiction is reached, and a life-threatening beri-beri occurs, progressing to Wernicke’s encephalopathy and then Korsakoff’s.

~ Rylan Dray, Ph.D. – October 2019


SELECTED REFERENCES

Klinzing, J.G. et al. (2019). Mechanisms of systems memory consolidation during sleep. Nature Neuroscience, 22(10).

Born, J. & Rasch, B. (2006). Sleep to remember. Neuroscientist, 12(5).

Himmer, L., Schonauer, M. et al. (2019). Rehearsal initiates systems memory consolidation, sleep makes it last. Science Advances, 5.

Vertes, R.P. (2004). Memory consolidation in sleep: Dream or reality? Neuron, 44.

Squire, L.R. & Alvarez, P. (1995). Retrograde amnesia and memory consolidation: A neurobiological perspective. Current Opinion in Neurobiology, 5.

Rasch, B. & Born, J. (2013). About sleep’s role in memory. Physiological Reviews, 93(2).


Homeless Scholar Blog ~ BUSTED in the BOOM: REMEMBERING the AMERICAN LOWER CLASS

I was startled out of soggy sleep by the bright headlights of a big truck bringing granite down to Virginia from Indiana. It was about 4 AM, and I’d conked out on the roller platform outside a tombstone company. Because I was lying there, the driver couldn’t roll the stone into the building. And, at some point, it had begun to rain, which added to the homeless fun. I had been homeless for months, and drunk for much of that time. It’s a hard life to take sober.
This article is about poverty in America today, and having been homeless twice for fairly long periods of time has definitely deepened my insight into both the specific and general problem. (The second time I was homeless was over 20 years later. It was the last semester of my PhD program. For three years, I crashed in a friend’s garage, but I had a key. This beat the rough sleeping I did when younger.)
During my rough-sleeping period, I found myself continually worrying about the cops. (I had already been picked up on a capias – I failed to appear in court for a public drunkenness charge — and spent 10 days in the city jail.) Sometimes, I got lucky and found a spot inside a building, but then there was always the chance of being noticed and ending up with a “trespassing” charge.
When the sun would start sinking down in the evening, my heart sank right along with it.
Why, twenty years later, I became homeless as a graduate student: Although I no longer had a drinking problem (note: addiction, mental illness, and “character defects” are not necessarily pre-conditions for ending up on the streets), I no longer had any income, either, having been laid off from my job as a research assistant at the school, and so I lost my apartment. Fortunately, a friend rented a garage nearby as a roofing workshop and let me sleep there for free. (Another guy, whom I never saw, rented the other half of it to store large, old hulks of cars that he intended to renovate at some point.) The downsides were that heat and cooling were unreliable, the suspension ceiling was crumbling right above my head, and there was no toilet. I had to walk down rickety wooden steps to get to an abandoned shower in which I relieved myself. Water still came through the shower head for flushing, so the stench, while bad, never became overwhelming.
Homelessness is a persistent social problem in the United States. According to the US Department of Housing and Urban Development, the official count of homeless persons on a given night is over 553,000. However, both the National Coalition for the Homeless and the New York State Poor People’s Campaign claim that the true number is over 3 million. HUD also reports that homelessness increased in 2017 for the first time in seven years.
The National Law Center on Homelessness and Poverty has noted that HUD produces an undercount of the homeless because it uses a flawed point-in-time (PIT) method, its general methodology involving Continuum of Care (CoC) programs is inconsistent, and most of its methodologies miss unsheltered homeless people. (CoCs are local planning bodies, typically composed of nonprofit service providers and state and local government agencies, which HUD requires to conduct a PIT count of homeless people. “CoCs rely heavily on volunteers, many of whom receive as little as one hour of training.”)
Many homeless people are difficult to find, especially since so many are doubling up with friends (“couch surfing”). Also excluded are people in institutions such as hospitals and jails.
Despite the popular emphasis on drug abuse and madness, the leading cause of the problem is that homeless people cannot afford rent. Moreover, one-eighth of the national low-income housing supply has been permanently lost since 2001. Foreclosures have claimed over 5 million homes in the last ten years. Public housing assistance is inadequate to meet the need: 3 in 4 of those who qualify cannot get assistance and are put on waiting lists for years.
Not only is homelessness criminalized, but the process extends to poverty in general. The debtors’ prisons of the nineteenth century are long gone, but a modern form has developed, involving unethical court actions. Poor people who cannot pay fines (often cooked up by the city to fill its coffers) are sent to jail. Sometimes, whether by accident or design, the accused citizen never even receives the order to appear in court, which leads to a warrant for their arrest (capias).
The US Census Bureau tells us that poverty is actually on the decline. However, this does not apply to the “lower class,” i.e., the lower fifth of the country whose average household income has not changed in ten years. Moreover, one could ask why, in the putatively richest country in the world, about 40 million of its citizens are still poor, especially when the upper 1% enjoy markedly disproportionate wealth.
Then there is the issue of poverty measures. The official poverty level is probably too low. Current measures are based on methodologies developed over 50 years ago and are widely considered outdated. Just to afford basic expenses, families need an income of twice the federal poverty level. “The poverty threshold,” says the Economic Policy Institute, “is not only insufficient at capturing differences in the cost-of-living across the country, but it is insufficient at measuring what it truly costs to get by.” (Sept. 11, 2015)
If the category of “economic insecurity” is added to the picture, close to 80% of Americans live in danger of poverty or unemployment in their lifetimes. (The current jobless figures are historically low, but the quality of the jobs, e.g., low wages, part-time hours, poor working conditions, is seldom discussed.)
We’ve all seen those wealth inequality stats, but here’s one that’s particularly illustrative of the problem: According to the September 2017 Fed Survey of Consumer Finances, 10% of Americans now own 77% of all wealth. Trillions of dollars were transferred from the working class to the rich in the period from 2014 to 2017. (This could also be expressed as an 80/20 split, echoing the Pareto principle; more accurately, 84% of the wealth goes to the upper quintile. There is also the often-heard 99/1 split (with the 1% typically referenced by sociologists as the upper class) and the 99+/0.01% split, to zero in on the super-rich.) Americans routinely underestimate the extent of U.S. income inequality. According to Pew research, 52% of Americans believe taxes should be raised on corporations, but only 43% say they should be raised on higher-income households.
This problem goes back farther than many people may realize. There were expressions of discontent about social and economic justice soon after the states won their independence from England. In 1786, Daniel Shays, a farmer and veteran of the Revolutionary War, organized other poor farmers in Massachusetts and, through an armed insurrection, forced local courts to shut down. The large banking interests in Boston had been using these courts to send the farmers to debtors’ prison. This uprising was crushed, but it caused considerable apprehension among the well-to-do landowners forming the ruling class of the fledgling nation. To minimize the chance of another populist democratic revolt against corporate privilege, a Constitutional Convention was convened in 1787 to protect the interests of this moneyed class…. At the end of the following century, the excesses of the robber barons led to widespread public animosity and ultimately to anti-trust legislation during the so-called Progressive Era.
Congress passed the Sherman Antitrust Act to prohibit monopolies in 1890. Other progressive measures followed over the next three decades, but by the mid-twenties this impulse was lost. Calvin Coolidge declared that the business of America was business, and by the end of the decade the labor movement had been smashed. But of course, the Depression led to the New Deal and a resurgence of working-class activism. The post-war boom led to general prosperity, but by the sixties poverty was again a prominent issue, with Robert Kennedy in the news touring Appalachia. The Great Society programs formed another progressive wave, but the right wing vowed revenge for its landslide defeat in the 1964 election. By the election of Reagan sixteen years later, the conservative movement was back in full force. The political center of gravity has been moving right ever since.
One bright note in this picture is that a number of books have come out in recent years focusing on the seriousness of poverty in America: Linda Tirado’s Hand to Mouth; Peter Edelman’s book on the criminalization of poverty; Matthew Desmond’s work on eviction; and sociologist Kathryn Edin’s $2.00 a Day. (Tirado I find especially interesting, as she is straight from the working poor. She was a short-order cook who became something of an Internet celebrity after posting a long comment about poverty on Gawker, which led to a GoFundMe account and a book deal from Penguin. “Poverty,” she writes, “is isolating…You never have time to talk, never have time to hang out…You lose the most interesting parts of yourself to the demands of survival.”) Journalist Sarah Smarsh, “a fifth-generation Kansas farm kid,” wrote an essay in 2016 called “Dangerous Idiots,” critical of the simplistic and insulting way the liberal media covered the white working class, as if they were all Trump supporters.
Also worth noting here is an earlier (2001) academic article by Harvard economist Alberto Alesina and colleagues, “Why the US Doesn’t Have a European-Style Welfare State.” They cite racial animosity, the lack here of proportional representation, “an 18th century constitution designed to protect property,” the limitation of the growth of a socialist party by the US political establishment, and differential views of why poor people exist: in America, poverty is seen as a character flaw (laziness), whereas most Europeans believe that the poor are simply unfortunate.
In the U.S., the standard liberal anti-poverty prescription is to vote for the Democratic Party, but it’s clear that between the oppositional force of Congressional conservatives and the constraining factor of corporate contributions, not much is likely to come of this for the poverty-stricken multitudes. Safety-net programs like SNAP, Medicaid, etc. must, of course, be preserved and expanded, but political will has hitherto been lacking to meet the depth of the need. It remains to be seen [writing in Sept. 2018] whether the new “socialist” candidates will make a difference. (Cynics often remark that the party is “the graveyard of social movements.”) What is probably needed is a mass movement like the one Occupy Wall Street tried to be and the one the Poor People’s Campaign is trying to be. Some Marxist economists argue that there are objective constraints, now in place for decades, that prevent contemporary capitalism from allowing another New Deal/Great Society-type overhaul. But if such reformism is still possible, a more forceful push from the masses will be required. It remains underappreciated in this country that the progressive reforms which did happen – the eight-hour day, women’s suffrage, civil rights – came about not from the moral compunction and benevolence of the “good guys” in the political class, but from the sustained militant actions of the downtrodden.
– Rylan Dray
 

Homeless Scholar Blog ~ FLUID INTELLIGENCE

Perhaps by now, one might suppose that the nature of what we call intelligence would have been settled, but no such luck. In 1904, the year prior to Alfred Binet’s publishing his famous test of verbal abilities designed to identify “mental retardation” in children, Charles Spearman conceived of a basic, general intelligence which he called g. This he saw as the quintessence of smarts, emerging from factor analysis of correlations between different tests. (Today, a more sophisticated test, the Wechsler, is the standard instrument.) Spearman’s concept was revised in 1941 by Raymond Cattell, who argued for both a “fluid” and a “crystallized” form of what was to be tested. The latter is basically facts and figures, the product of education and general environment, while the fluid type refers to a basic sort of “brain power,” the ability to think on one’s feet in novel situations. This form is thought to decline with age, unlike the other. (Due to expansion by later psychologists, it is now called the Cattell-Horn-Carroll theory.) According to the APA Dictionary of Psychology, “Fluid intelligence is used in coping with new kinds of problems and situations, and crystallized intelligence is used in applying one’s cultural knowledge to problems.” “Unrelated to knowledge and experience, fluid intelligence,” says the Good Therapy website, “is the ability to think abstractly, solve problems, and recognize patterns.”

Peter Schonemann, who studied under Cattell, stated that the g factor does not exist, and that those who emphasized it, like the controversial psychologist Arthur Jensen, were distorting the original findings of Spearman with the effect, intentional or not, of putting racial minorities at a disadvantage in terms of policy decisions based on pro-hereditarian research.

But the purpose of this post is not to rehash the controversies surrounding the general concept of intelligence, but rather to focus on one aspect of it mentioned above: fluidity.

In 2008, cognitive scientist Susanne Jaeggi and colleagues published an important paper claiming to show that fluid intelligence is trainable with memory tasks. “We present evidence,” they wrote, “for transfer from training on a demanding working memory task to measures of Gf. This transfer results even though the trained task is entirely different from the intelligence test itself.” (Jaeggi et al., 2008) They employed the so-called n-back task. As one researcher describes it, “Participants are typically instructed to monitor a series of stimuli and to respond whenever a stimulus is presented that is the same as the one presented n trials previously.” (Meule, 2017) Thus, attention must be divided between perception and memory, with the memory subtask becoming increasingly difficult as n increases.
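
To make the task logic concrete, here is a minimal sketch of how the target positions in an n-back block can be identified and a participant’s responses scored. The stimuli and parameters are made up for illustration; this is not the actual software used by Jaeggi and colleagues, whose dual n-back version also adds a simultaneous auditory stream.

import random

# Identify the target positions in an n-back block: trials whose stimulus
# matches the stimulus presented n trials earlier.
def n_back_targets(stimuli, n):
    return [i for i in range(n, len(stimuli)) if stimuli[i] == stimuli[i - n]]

# Score a participant's key presses (indices where they responded "match")
# against the true targets, returning hit rate and false-alarm rate.
def score_responses(stimuli, n, responses):
    targets = set(n_back_targets(stimuli, n))
    presses = set(responses)
    hits = len(targets & presses)
    false_alarms = len(presses - targets)
    non_targets = (len(stimuli) - n) - len(targets)
    return hits / max(len(targets), 1), false_alarms / max(non_targets, 1)

random.seed(0)
letters = [random.choice("CHKNRT") for _ in range(20)]      # one 20-trial block
print("stimuli:", "".join(letters))
print("2-back targets at positions:", n_back_targets(letters, 2))
hit, fa = score_responses(letters, 2, n_back_targets(letters, 2))  # a perfect responder
print(f"hit rate {hit:.0%}, false-alarm rate {fa:.0%}")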

The eminent psychologist Robert Sternberg wrote a rather glowing review of this research, summarizing it thus: “(i) Fluid intelligence is trainable to a significant and meaningful degree; (ii) the training is subject to dosage effects, with more training leading to greater gains; (iii) the effect occurs across the spectrum of abilities, although it is larger toward the lower end of the spectrum; and (iv) the effect can be obtained by training on problems that, at least superficially, do not resemble those on the fluid ability tests.”

However, after this initial optimism about the prospects of general/fluid intelligence being susceptible to improvement through training, skepticism set in. One group of researchers (Harrison, Shipstead, et al., 2013) wrote that their results suggested that working memory and fluid intelligence are different hypothetical constructs and that training on the former with the goal of trying to increase the latter will likely not succeed. “More important,” they add, “this focus might cause one to miss the more realistic goal of training those specific strategies and mechanisms of the working memory system important to other aspects of real-world cognition.” In the same vein, other researchers (Melby-Lervag et al., 2013) have written, “Memory training programs appear to produce short-term, specific training effects that do not generalize…Current findings cast doubt on both the clinical relevance of working memory training programs and their utility as methods of enhancing cognitive functioning in typically developing children and healthy adults.” Still others (Redick et al., 2013) re-examined dual n-back training studies and wrote that “subjects showed no positive transfer to fluid intelligence, multitasking, WM capacity, crystallized intelligence or perceptual speed tasks,” and proffered “a pessimistic view of the effects of a dual n-back practice.” Jaeggi and company responded to the criticism in a 2016 article titled “There is no convincing evidence that working memory training is NOT effective,” stating, “Our recent meta-analysis concluded that training on working memory can improve performance on tests of fluid intelligence (Au, Jaeggi et al., 2016). Melby-Lervag had challenged this conclusion on the grounds that it did not take into consideration baseline differences on a by-study level and that the effects were primarily driven by purportedly less rigorous studies that did not include active control groups.” The details of the controversy are too involved to repeat here, but in conclusion, Jaeggi & Co. state,

“Having addressed their criticisms, we find neither the qualitative nor quantitative interpretations of our original work change. There still seems to be an overall small, but significant effect size of n-back training on improving Gf test performance. These effects cannot be easily explained as Hawthorne effects or artifacts of control group type…[Although] it is unclear to what extent an ES of g=0.24 on laboratory tests of Gf translates to real-world gains in actual intelligence, [such an effect,] no matter how small, is of interest from a basic science perspective, if not a translational one” (Au, Jaeggi, 2016).
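
For readers unfamiliar with the notation, the g quoted here is a standardized mean difference (Hedges’ g). The formula below is the standard textbook definition, not a quotation from Au et al.:

\[ g \;=\; \frac{\bar{X}_{\text{trained}} - \bar{X}_{\text{control}}}{s_{\text{pooled}}} \]

So g = 0.24 means the trained groups outscored the controls by roughly a quarter of a pooled standard deviation on the Gf tests, conventionally interpreted as a small effect.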

Moving on to another level, the best way to improve brain power is probably through physical exercise. As noted by a reviewer of physician John Ratey’s best-selling book, Spark, “Aerobic exercise enhances mood by boosting the levels of [serotonin, norepinephrine, and dopamine];…[body mass index] and aerobic fitness are significant markers of academic performance; by increasing the levels of the neurotransmitters, norepinephrine and dopamine, physical exercise respectively boost the signal quality of neural transmission, decreases the noise of neural chatter, and together reduce ADHD and enhance learning…A protein, brain-derived neurotrophic factor (BDNF) builds and maintains neuron networks. BDNF facilitates the learning process and is important for…synaptic plasticity.” Moreover, “exercise prevents inflammation that triggers the plaque accumulation in the brain that causes Alzheimer’s disease.”

Finally, a presumably original notion: Can fluid intelligence be “faked”? Possibly! One could list, by category, all of the missed opportunities for its employment from personal experience (and from imagined scenarios) and plan for possible future opportunities. This is “fake” because of its crystallized, experiential nature, but it may still achieve a positive result. As Pasteur famously remarked, “Chance favors the prepared mind.”

~Wm. Doe, Ph.D., September 2019

SELECTED REFERENCES

Jaeggi, S.M. et al. (2008). Improving fluid intelligence with training on working memory. Proc Natl Acad Sci USA, 105 (19): 6829-33.

Meule, A. (2017). Reporting and interpreting working memory performance in n-back tasks. Frontiers in Psychology, 8: 352.

Melby-Lervag, M. & Hulme, C. (2016). There is no convincing evidence that working memory training is effective: A reply to Au et al. (2014). Psychonomic Bulletin & Review, 23(1): 324-30.

Redick, T.S., Shipstead, Z., et al. (2013). No evidence of intelligence improvement after working memory training: A randomized, placebo-controlled study. Journal of Experimental Psychology: General, 142 (2).

Au, J., Jaeggi, S.M., et al. (2016). There is no convincing evidence that working memory training is NOT effective: A reply to Melby-Lervag and Hulme. Psychon Bull Rev, 23: 331-37.


Homeless Scholar Blog ~ ANTIBIOTICS and SEPSIS

Last month, I was in the emergency room for a cellulitis infection with abscess. The attending physician examined me and declared that I didn’t need an ultrasound and drainage, just an antibiotic. He prescribed clindamycin, about which I was ignorant. Subsequent research informed me that this drug is more strongly associated with Clostridium difficile infection, Stevens-Johnson syndrome (on the same spectrum as toxic epidermal necrolysis), and colitis than most other antibiotics. I also learned that ultrasound and drainage of an abscess constitute “best practice.” So I returned the following day to complain. I got another ER doc who complied with my request for those procedures plus doxycycline (also indicated for the condition), which I had taken without incident several years earlier for a hand infection. My research also told me how easily untreated cellulitis can turn into sepsis. This post will be about those two topics: antibiotics and sepsis. (I suppose a third issue could be physician incompetence, but I think I’ll save that one for another time.)

Overall, these “miracle drugs” have clearly been a boon to humankind, although side effects seem to be glossed over in general discourse on the subject. Earlier, I mentioned colitis (a serious inflammation of the colon), and also C. difficile. This bacterium of the Clostridium genus, which includes the species responsible for tetanus and botulism, is potentially fatal, causing severe watery or bloody diarrhea, fever, anorexia, abdominal pain, and electrolyte disturbances. Then, finally, there is Stevens-Johnson syndrome: a systemic skin disease producing severe, widespread rash, fever, and lesions. Skin loss may lead to dehydration, infection, and death. These are fairly rare reactions, but ones which must be kept in mind. The more common side effects of antibiotics are mild rash, diarrhea, nausea with or without vomiting, loss of appetite, and photosensitivity. (With the doxycycline, I experienced no side effects whatsoever.) And then, of course, there is the potential for a serious allergic reaction (anaphylaxis).

Nature can produce its own antibiotics, but the first man-made one was synthesized in the laboratory of Paul Ehrlich in 1909. This was called Salvarsan, the “magic bullet” that cured syphilis. Alexander Fleming discovered penicillin by chance in 1928, but commercial development had to wait until the 1940s. For those with penicillin allergies, tetracycline was discovered in 1948 and erythromycin the following year. Vancomycin was discovered in 1952, rifampin in 1957, and metronidazole in 1962. In 2001, a new drug to treat MRSA (methicillin-resistant Staphylococcus aureus) reached the market: linezolid. A new technology for culturing soil bacteria enabled the discovery of teixobactin in 2015.

Then there is the serious issue of antibiotic resistance. In the U.S. alone, resistant bacteria cause more than 2 million infections and 23,000 deaths per year. There is nothing new about this phenomenon: bacterial resistance genes have been documented in permafrost sediments in the Yukon from 30,000 years ago. Three genes encoding resistance to beta-lactam, tetracycline, and glycopeptide antibiotics have been identified there. Even so, the problem is exacerbated by numerous modern factors, including clinicians who prescribe antibiotics unnecessarily as well as agribusiness’s use of the drugs to promote animal growth. For many years now, antibiotic resistance has been recognized by health authorities as a global crisis.

“Without urgent, coordinated action by many stakeholders,” says Dr. Keiji Fukuda, the WHO’s Assistant Director-General for Health Security, “the world is headed for a post-antibiotic era in which common infections and minor injuries which have been treatable for decades can once again kill.”

An often deadly sequela of an infection is sepsis, defined by Taber’s Medical Dictionary as “a systemic inflammatory response to infection, in which there is fever or hypothermia, tachycardia, rapid breathing and evidence of inadequate blood flow to internal organs.” A mnemonic for remembering the symptoms: S (shivering, fever, or very cold); E (extreme pain or general discomfort (“worst ever”)); P (pale or discolored skin); S (sleepy, difficult to rouse, confused); I (“I feel like I might die”); S (shortness of breath). Complications of sepsis include septic shock, metastatic abscess formation, and organ failure. The old term for “blood poisoning” (“septicemia”) has mostly been replaced by “bacteremia.” The risk of sepsis can be reduced by preventing infections, practicing good hygiene, and staying current with vaccinations. The leading cause of death in U.S. hospitals, sepsis is also the leading cause of readmissions, with 19% of people hospitalized for sepsis needing to be re-hospitalized within 30 days. As many as 87% of sepsis cases originate in the community. Approximately 6% of all hospitalizations are due to sepsis, and 35% of all in-hospital deaths are due to sepsis. As many as 80% of sepsis deaths could be prevented with rapid diagnosis and treatment. Mortality increases by as much as 8% for every hour that treatment is delayed.
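
To get a feel for that last figure (taking the 8% as a relative, compounding increase per hour, which is one reading; the source does not specify whether it is relative or absolute), a six-hour delay would multiply baseline mortality by about

\[ 1.08^{6} \approx 1.59, \]

so a patient group with, say, 10% baseline mortality would be pushed toward roughly 16%.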

The 21st Century Cures Act of 2016 authorized $6.3 billion, mostly for the NIH, and streamlined the approval process for drugs and devices. The problem is, these items are being approved on weaker evidence, bypassing randomized controlled trials. The legislation has been opposed by consumer organizations that are trying to protect the public from dangerous, ineffective treatments. This would include antibiotics. To put it bluntly, capitalists are trying to push these dubious substances through to the public for quicker profits. Some of these new antibiotics are tigecycline (Tygacil), bedaquiline (Sirturo), dalbavancin (Dalvance), and ceftazidime-avibactam (Avycaz). As Sarah Sorscher, a lawyer for Public Citizen, has noted, “[This bill] will be bad for patients and public health. In exchange for an ephemeral boost in research funding, the bill will pressure the FDA to approve dangerous new antibiotics and institutionalize a hospital payment system and other measures that promote antibiotics overuse and speed the development of drug resistance.”

A final note: September 2019 is Sepsis Awareness month.

~ Wm. Doe, Ph.D. – August 2019

Homeless Scholar Blog ~ LEARNED HELPLESSNESS REVISITED

“In view of the losses, marital disruptions, and personal rejections often experienced by depressed people, it is important to acknowledge that their negative thinking will sometimes not strike detached observers as farfetched or absolutely incorrect.” – Haaga & Beck, 1995; p. 45

“We speculate that it is expectations of a better future that most matter in treatment.” – Maier & Seligman, 2016

Learned helplessness is defined by Campbell’s Psychiatric Dictionary as “giving up trying, because the subject is unable to discover any way in which to influence the environment either in an attempt to succeed or in a search for escape from pain.” This is as good a definition as any I’ve come across. The concept emerged from animal studies by psychologist Martin Seligman in the 1960s. In one of a series of animal learning experiments, two dogs were yoked together, but only one could stop the electric shocks being delivered to them. The other dog interpreted the shocks as random and learned to be helpless, as evidenced by a later experiment in which it would not act to escape further shocks even though that was easily accomplished by simply moving to another area. This work formed the basis for one of the most researched models of depression in cognitive psychology.

In the 1970s, Seligman, along with Lyn Abramson, developed the hopelessness theory of depression, arguing that the LH model was limited in that it could not explain why certain individuals become depressed when confronted with an uncontrollable stressor whereas others do not. Childhood maltreatment (particularly emotional abuse or neglect) leads to “negative inferential styles”: one’s own negative traits, rather than the situation, are seen as causing the dysphoria, and since these cannot change, the future appears bleak. Subsequent painful experiences amplify the negativity, and this leads to hopelessness depression.

Recently, Seligman and his co-experimenter of yore, S.F. Maier, published a paper declaring that “the original theory got it backward. Passivity in response to shock is not learned. It is the default, unlearned response to prolonged aversive events.” They speculate that cognitive-behavioral therapy is the solution to the problem, as it teaches control by training the prefrontal cortex to regulate the serotonergic activity of the dorsal raphe nucleus, activity which inhibits escape from aversive stimuli.

The pessimistic explanatory style of learned helplessness is relevant to the depressive consequences of numerous physical conditions as well, such as myocardial infarction (MI), multiple sclerosis, fibromyalgia, and rheumatoid arthritis. It can compromise immune response and recovery from health problems in general. In the case of MI, for example, a 2017 study found a statistically significant direct relationship between LH and depressive symptoms and advised that “in developing post-acute MI treatment plans, health care staff should focus on psychologic points of intervention to the same extent as physiologic interventions” (Smallheer et al., 2017).

Regarding treatment, Seligman et al. advocate cognitive behavioral therapy and the “learning of optimism.” But can there be a meaningful heuristic indicating when positive thinking is advisable, and when not? Broadly speaking, in ambiguous situations, when action informed by optimism does not seem as hazardous or self-defeating as action dictated by pessimism, then the former option should probably be taken. It is difficult to generalize about this, but for a person whose predicament is constrained by adverse physical and financial factors, it may be, more often than not, realistic to conclude that “it won’t work out, so why gamble away precious resources (mental, physical, financial)?”

However, sometimes this is a self-defeating, knee-jerk mode that should be challenged. What determines whether it should be challenged is the degree of ambiguity and the perceived differential harm of the choice. That is, one should ask, “Is it irrational to be pessimistic in this context? How certain is an unsafe or miserable outcome?” Compelling research suggests that mildly to moderately depressed people do indeed see things accurately (hence the term “depressive realism”), but there is often a perception of ambiguity and uncertainty involved. The ambiguity brings a sort of interpretive latitude into play which may occasionally justify a “strategically optimistic” choice. But this is not the same as a “global,” or dispositional, change in attitude: the space opened up for competing, reasonable interpretations is contingent upon the ambiguity and risk calculus, and is thus not an across-the-board conversion to “positive thinking.”

And it bears repeating that socio-economic factors beyond individual control have to be taken seriously by therapists and counselors when dealing with certain depressed clients. We must think within the tension between individual self-sabotage and the ruthlessness of a system that, unless externally constrained, always puts profits before people, with sometimes psychologically disastrous results.

Related to this is an even more pressing factor: climate change/global warming. A recent study has explored learned helplessness in this context. “These findings suggest that learned helplessness acts as a barrier to pro-environmental behavior in the face of environmental concern…Recent surveys show that among individuals who are concerned about the environment, the most common associated emotional feeling is one of helplessness” (Landry & Gifford, 2018).

I have heard people express this sense of helplessness regarding the planet’s future; even, “It was probably too late years ago.” Here, ecological science, being of a “harder” sort, may be more informative than cases of abnormal psychology, and it points to the conclusion that we are not, at this time, helpless. The remarkable, massive international protests over climate policy in recent months, mostly by schoolchildren and other young activists, highlight the belief that there is something that can be done to avert disaster. But the science also says that the window of opportunity is rapidly closing.

~ Wm. Doe, Ph.D. – July 2019

SELECTED REFERENCES

Haaga, D.A., Beck, A.T. (1995). Perspectives on depressive realism: Implications for cognitive theory of depression. Behav Res Ther, 33 (1).

Maier, S.F., Seligman, M.E. (2016). Learned helplessness at fifty: Insights from neuroscience. Psychol Rev, 123 (4).

Smallheer, B.A. et al. (2017). Learned helplessness and depressive symptoms following myocardial infarction. Clinical Nursing Research, 27 (5).

Landry, N., Gifford, R. (2018). Learned helplessness moderates the relationship between environmental concern and behavior.  J Environmental Psychol, 55.


Homeless Scholar Blog ~ GLOBAL WARMING SO FAR

“As a dam built across a river causes a local deepening of the stream, so our atmosphere, thrown as a barrier across the terrestrial rays, produces a local heightening of the temperature at the Earth’s surface.” – John Tyndall, 1862

Even before John Tyndall’s experiments, the French scientist Joseph Fourier (also known for the Fourier transform, the mathematics underlying MRI image reconstruction) realized that Earth’s atmosphere retains heat radiation, thus discovering the “greenhouse effect.” Fourier’s work was extended at the end of the 19th century by Svante Arrhenius, who was the first to make actual calculations of it.

The United States Geological Survey (USGS), which is part of the Dept. of the Interior, notes that global warming is just one part of climate change. The former “refers to the rise in global temperatures due mainly to the increasing concentrations of greenhouse gases in the atmosphere” and the latter “refers to the increasing changes in the measures of climate over a long period of time–including precipitation, temperature, and wind patterns.”

The Union of Concerned Scientists has listed 10 signs of global warming: Arctic sea ice extent is diminishing; ocean heat content is increasing; air temperature over ocean is increasing; sea surface temperature is increasing; global sea level is rising; humidity is increasing (causing more warming); temperature of the lower atmosphere is increasing; air temperature over land is increasing; snow cover is reduced, and snow is melting earlier; and glaciers are melting.

Increased evaporation from rising temperatures will lead to more storms as well as more drying of some land masses. “As a result,” says NASA, “storm-affected areas are likely to experience increases in precipitation and increased risk of flooding, while areas located far away from storm tracks are likely to experience less precipitation and increased risk of drought.”

While there are “climate skeptics” among scientists–arguing that current modeling is inaccurate, that natural (or unknown) causes are to blame, and/or that global warming will have few consequences–the general consensus is that the planet is warming in an ominous way and that this has been caused by Homo sapiens. “Some scientific conclusions or theories,” wrote the United States National Research Council, “have been so thoroughly examined and tested, and supported by so many independent observations and results, that their likelihood of subsequently being found to be wrong is vanishingly small. Such conclusions and theories are then regarded as settled facts. This is the case for the conclusions that the Earth system is warming and that much of this warming is very likely due to human activities.”

In an attempt to set internationally binding emissions reduction targets, the Kyoto Protocol was adopted in 1997 and entered into force in 2005. The US withdrew from it in 2001, and Canada ten years later, but there are still 192 parties on board. The Paris Agreement (2015) may succeed where Kyoto has struggled because it allows individual countries to set their own climate strategies and contains no legally binding emissions targets.

The Intergovernmental Panel on Climate Change (IPCC) is the United Nations body which assesses the science related to climate. Formed in 1988, it has published a series of reports on climate change, the most recent being the 2018 special report on global warming of 1.5 degrees Centigrade. The IPCC has stated that “Climate-related risks to health, livelihoods, food security, water supply, human security, and economic growth are projected to increase with global warming of 1.5 degrees Centigrade [above pre-industrial levels] and increase further with 2 degrees C.” (Although the Industrial Revolution dates back to the 18th century, the IPCC uses “pre-industrial” to refer to the latter half of the 19th century, when industrialization intensified.) If the rise in global temperature is not stabilized at a maximum of 1.5 deg C by approximately 2030, climate scientists warn, the process may become uncontrollable, with widespread drought, floods, and increases in extreme heat as well as in poverty. “As the report outlines,” notes the Climate Reality Project, “if we want to hold the line at 1.5 degrees, we have to slash emissions by about 45% from 2010 levels by 2030. Then, we have to reach net-zero around 2050.”

(A word about albedo: This is the solar reflective power, the fraction of incident light or electromagnetic radiation that is reflected by a surface or body (such as the moon, a planet, a cloud, the ground, or a field of snow). Its relevance to global warming is not hard to see: as icy surfaces decrease in area, less energy is reflected into space, and the earth warms even more, accelerating sea level rise. (This seems straightforward enough, but there are complex and even contradictory findings in albedo research, so definite statements about the future with respect to the albedo effect cannot reliably be made at this time.))
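
For readers who want the mechanism in numbers, the standard zero-dimensional energy-balance formula shows how sensitive the planet’s equilibrium temperature is to albedo. This is a textbook sketch with illustrative figures, not a calculation from this post or from the IPCC report; it treats Earth as a simple blackbody and ignores the greenhouse effect itself.

# Zero-dimensional energy balance: (1 - albedo) * S / 4 = sigma * T^4
SOLAR_CONSTANT = 1361.0    # W/m^2, incoming solar irradiance
SIGMA = 5.670374419e-8     # W/m^2/K^4, Stefan-Boltzmann constant

def effective_temperature(albedo):
    """Equilibrium radiating temperature (kelvin) for a given planetary albedo."""
    absorbed = (1.0 - albedo) * SOLAR_CONSTANT / 4.0   # averaged over the sphere
    return (absorbed / SIGMA) ** 0.25

# Earth currently reflects roughly 30% of incoming sunlight; losing reflective
# ice lowers that fraction and raises the equilibrium temperature.
for albedo in (0.30, 0.29, 0.28):
    print(f"albedo {albedo:.2f} -> {effective_temperature(albedo):.1f} K")

Each one-point drop in albedo raises the equilibrium temperature by roughly a degree, which is why ice loss is such a potent feedback.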

In May 2019, the UN released a major report on biodiversity, stating that a million species of plants and animals are at risk of extinction due to deforestation, overfishing, pollution, and other human activities. “In parts of the ocean,” says National Geographic, “little life remains but green slime.”

An ecologist at the University of California has stated that “we’re in the middle of the sixth great extinction, but it’s happening in slow motion.” A mass extinction event (also known as a “biotic crisis”) occurs when the rate of extinction increases with respect to the rate of speciation. The oldest known mass extinction was the Ordovician (445 million years ago), likely caused by an ice age. This was followed by the Devonian (375-360 million years ago; oceanic oxygen depletion); the Permian (252 million years ago; massive volcanic activity, possibly combined with an asteroid impact; 95% of species were lost); the Triassic (200 million years ago; probably also volcanoes); and the Cretaceous (66 million years ago; asteroid; 75% lost). Arguably, a sixth mass extinction has been underway for over 11,000 years. This period is referred to as the Holocene (also Anthropocene) era, and the extinction is the result of human activity.

On its website, Extinction Rebellion describes itself as “an international movement that uses non-violent civil disobedience to achieve radical change in order to minimize the risk of human extinction and ecological collapse.” Formed in the UK in May 2018, the group has engaged in various acts of civil disobedience in an attempt to disrupt “business as usual” and heighten awareness of the emergency nature of the climate situation.

Certainly individuals can do their part to reduce their carbon footprint by, for example, eschewing air travel, driving fuel-efficient vehicles, insulating their homes, switching to energy-efficient lights and, in general, not wasting electricity by keeping things on needlessly. Also, as a community effort, they can buy carbon offsets to pay for reductions in greenhouse gases. But more important is governmental action: setting effective regulations to reduce carbon emissions. This will, of course, mean cracking down on corporate entities that cannot be expected to regulate themselves adequately. Transportation must end its reliance on fossil fuels; hydro, wind, and solar electricity production must be greatly increased; agricultural practices must be better controlled (fewer fertilizers, less industrial beef production, more crop rotation and tilling of the soil); and deforestation and irresponsible land use must be abolished.

“We cannot solve a crisis without treating it like a crisis,” says the prominent 16-year-old Swedish climate activist Greta Thunberg, now famous for her (ongoing) school strikes. “If solutions within the system are so impossible to find, then maybe we should change the system itself.”

~ Wm. Doe, PhD – May 2019

 

 

Homeless Scholar Blog ~ THE FILTHY RICH: TO SOAK or NOT TO SOAK?

During the regime of that wild-eyed radical Dwight D. (“Ike”) Eisenhower, the marginal tax rate on the highest income bracket in the United States was 91%, and it stayed at or above 70% until 1981, when Ronald Reagan declared America to be “back”. Thus, Rep. Alexandria Ocasio-Cortez’s recent suggestion of a restoration of the 70% level should hardly seem extremist. The United States does not have a wealth tax at present, but one has been proposed by Senator Elizabeth Warren, based on research by the economists Emmanuel Saez and Gabriel Zucman. (A wealth tax is basically a levy on total assets.) The plan is to impose a 2% tax on fortunes worth more than $50 million and a 3% tax on those worth more than one billion dollars. This would affect about 75,000 families (less than 0.1% of U.S. households), raising an estimated $2.75 trillion over a ten-year period. (There have been some concerns raised as to the constitutionality of the proposal, but the majority of experts appear to view it as constitutional.)
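
As a rough illustration of how such a two-bracket levy would fall on different fortunes, here is a minimal sketch. It assumes, as in the Saez-Zucman design, that the rates are marginal, i.e., the 3% applies only to the portion of wealth above $1 billion; the sample fortunes are invented.

```python
# A minimal sketch of the proposed wealth tax: 2% on net worth above
# $50 million, 3% on the portion above $1 billion (rates treated as marginal).

def wealth_tax(net_worth: float) -> float:
    """Annual wealth tax owed, in dollars, under the proposed two brackets."""
    tax = 0.0
    if net_worth > 50e6:
        tax += 0.02 * (min(net_worth, 1e9) - 50e6)
    if net_worth > 1e9:
        tax += 0.03 * (net_worth - 1e9)
    return tax

for fortune in (60e6, 500e6, 2e9):   # illustrative fortunes
    print(f"${fortune:,.0f} -> ${wealth_tax(fortune):,.0f} per year")
```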

As they say on the radio, let’s do the numbers. During World War II, Franklin Roosevelt’s administration established a 94% maximum marginal income tax rate, which dropped, as noted, only to 91% during the Eisenhower years. John Kennedy maintained this with a 25% maximum long-term capital gains tax. Under Lyndon Johnson, the top income tax rate dropped to 77% (for income exceeding $400,000) and the LTCG rate increased by 2 points. Richard Nixon kept the maximum marginal income tax the same but pushed the capital gains tax up to 37%. Gerald Ford pushed it even higher, to 40%, with a 70% income tax rate on $200,000 and up. Ronald Reagan initially kept the top income tax rate about the same (before cutting it sharply later in his tenure), dropped the CGT to 20%, then increased the latter to 28%. George Bush, Sr. had the CGT at 28%, with the income tax at 31% for $86K and up. Clinton kept the latter the same but decreased the capital gains tax by 8 percentage points. George Bush, Jr. started with the top income tax rate at 39.6%, then dropped it to 35%, with the CG rate starting at 21% and then dropping to its current level of 15%. Under Barack Obama, the income tax was 35% for incomes of $388,000 and up, with the CG rate unchanged. Finally, Donald Trump’s maximum marginal tax rate for $560K and up (an average of single and married filers) is 37%, with the maximum long-term capital gains tax as under the second term of Reagan: 20%.
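
Since “maximum marginal rate” is often misread as a tax on one’s entire income, a small sketch may help. The top bracket below uses the 70%-above-$10-million figure associated with AOC’s suggestion; the lower brackets are simplified placeholders of my own, not actual law.

```python
# Illustrative marginal brackets: (lower bound of bracket, marginal rate).
# Only income falling inside a bracket is taxed at that bracket's rate.
BRACKETS = [
    (0,          0.10),
    (40_000,     0.25),
    (200_000,    0.35),
    (10_000_000, 0.70),   # the proposed 70% rate, applied above $10 million
]

def income_tax(income: float) -> float:
    """Tax owed under the simplified bracket schedule above."""
    tax = 0.0
    for (low, rate), nxt in zip(BRACKETS, BRACKETS[1:] + [(float("inf"), None)]):
        if income <= low:
            break
        tax += rate * (min(income, nxt[0]) - low)
    return tax

for income in (100_000, 1_000_000, 20_000_000):
    t = income_tax(income)
    print(f"${income:,} -> tax ${t:,.0f} (effective rate {t / income:.1%})")
```

Even a $20 million earner ends up with an effective rate near 52% under this schedule, not 70%, because the top rate touches only the income above the threshold.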

Better known than Saez and Zucman is their frequent collaborator Thomas Piketty of the Paris School of Economics, who in 2014 released the unlikely best-seller Capital in the Twenty-First Century, a fat, graph- and math-packed tome purporting to demonstrate that capitalism is seriously dysfunctional because of the accelerating economic inequality it has engendered, and is as a result a threat to democracy. Piketty starts with the idea expressed as r > g: the rate of return on capital (wealth in general) exceeds economic growth as measured in income or output. Capital and the money it produces accumulate faster than the economy grows, a phenomenon which has intensified in the last four decades with financial deregulation. However, shortly after the book was published, he wrote in an article for the American Economic Review: “I do not view r>g as the only or even primary tool for considering changes in income and wealth in the 21st century. Institutional changes and political shocks…played a major role in the past, and probably will…in the future.” He then adds that the inequality is not useful for dealing with the rising inequality of labor income.
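
The simplest way to see what r > g implies is to let a stock of wealth and a stream of income compound side by side. The rates below (r = 5%, g = 1.5%) are illustrative assumptions, not Piketty’s estimates, and the exercise is a caricature of his argument rather than his actual model.

```python
# Toy illustration of r > g: if wealth compounds at a rate of return r that
# exceeds the growth rate g of incomes, the wealth-to-income ratio keeps rising.

r, g = 0.05, 0.015              # assumed return on capital vs. income growth
wealth, income = 100.0, 100.0   # both start at an arbitrary index of 100

for year in range(0, 41, 10):
    print(f"year {year:2d}: wealth {wealth:7.1f}  income {income:7.1f}  "
          f"ratio {wealth / income:4.2f}")
    wealth *= (1 + r) ** 10     # advance both series by a decade
    income *= (1 + g) ** 10

# After 40 years the ratio has roughly quadrupled -- the mechanical sense in
# which capital outruns income when r stays above g.
```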

Piketty has been criticized from across the political spectrum, on methodological as well as theoretical and practical grounds, but for me personally, as a leftist freethinker (for lack of a better term), the most serious criticism centers on why he thinks reformism will rein in the plutocrats at this late date when there is no nominally communist superpower to embolden a social democratic alternative.

Numerous countries have, or have had, a wealth tax, and the example of France has been in the news a great deal in recent years. The French tax was introduced in 1982 by Francois Mitterrand. Thirty years later, Francois Hollande imposed a 75% supertax on earnings of 1 million euros and above. French courts reversed themselves on its constitutionality after a redraft, but what really sank it was capital flight. Emmanuel Macron (“the President of the rich”) scrapped the wealth tax, replacing it with one much more favorable to the upper crust, one which mostly just covers real estate. More on Macron presently.

There are various arguments against a wealth tax, but the one most often brought up is tax evasion and capital flight, i.e., the fat cats can always find a way around it, even if they have to move to another country to do so. The research group New World Wealth reported that 10,000 millionaires left France for this reason in 2015; 12,000 left the following year.

How do the critical economists propose to deal with this problem?

More stringent penalties appear to be the answer (easier to impose in the U.S. than in France because of differences in financial law). In America, there is already a punitive “exit tax” of 40% on net worth above $50 million for any U.S. citizen who renounces their citizenship. Moreover, there is the Foreign Account Tax Compliance Act (FATCA), which legally obligates foreign financial institutions to identify accounts held by U.S. citizens and report details to the IRS. In their 2019 paper, “How Would a Progressive Wealth Tax Work?”, Saez and Zucman wrote: “Just like for legal avoidance, illegal evasion depends on policies and can be reduced through proper enforcement. Key to reducing evasion are (i) the collection of comprehensive data; (ii) sanctions for the suppliers of tax evasion services (the countries and financial intermediaries that facilitate it); (iii) proper resources for auditing.” Of course, it is unrealistic to think the problem can be eliminated completely, but the hope is that it can be reduced to an acceptable level.

There is a side issue which I’ll just touch on here: viz., when should taxing begin at all? Last year, Ontario announced a coming tax exemption for low-income workers: those earning less than $30,000 will no longer have to pay provincial income tax. One wonders how this could be applied to the U.S. federal taxation system. As things stand now, in the U.S., $12,100 is a poverty line of sorts, below which one need not file. Should taxation kick in at $20,000? $30,000? There needs to be a national discussion about a new, higher cutoff. The lower working class is overtaxed; this can and should be remedied.

In recent months, there has been significant political fallout from Macron’s economic policies. Since last November, there have been weekly protests, sometimes violent, all over France by working-class people fed up with increasing poverty and plutocracy. These are the so-called Gilets Jaunes (yellow vests), named after the high-visibility jacket that motorists are required to keep in their cars in the interest of public safety in the event of an accident. In a blog post from last December, Piketty wrote: “If Macron wants to save his five-year period in power, he must immediately re-instate the wealth tax and allocate the revenue to compensate those who are most affected by the rises in carbon tax, which must continue.” As of this writing, the wealth tax has not been re-instated.

Back to the American situation: to liberate money to shore up the social safety net and implement a strong climate change program, Congress should cut the bloated war budget down to a rational size. If this seems unrealistic, all the more reason to soak the filthy rich with both a wealth tax and a marginal income tax of at least 70%, as well as a strengthened inheritance/estate tax. Starting at $10 million, as suggested by AOC, is reasonable. No multimillionaires will thereby be knocked down into the middle class.

In sum, the reforms should be threefold, leaving aside the aforementioned possible raising of the no-tax threshold for the lowest stratum: (1) an increase in the estate tax; (2) a 70-80% marginal income tax starting at $70K, or thereabouts; and (3) the wealth tax proposed by Saez and Zucman, but preferably at higher rates.

~ Wm. Doe, PhD – April, 2019

 

 

 

 

Homeless Scholar Blog ~ RENT CONTROL and the U.S. HOUSING CRISIS

“…There is no political reward in helping the poor. But what makes you think that the man who sits in judgment between the landlord and the tenant must have the mentality of a rent collector?” – Judge Sam Heller, who presided over landlord-tenant (L&T) court in Chicago during the Depression *

Because of poverty, I lost my apartment in the last semester of a Ph.D. program and was forced to sleep in my friend’s rented garage. As with homeless shelters (which I’ve known from my more distant past), I had to clear out first thing in the morning as he came in early to prepare the copper for his roofing jobs. To urinate, I had to carefully descend rickety wooden steps to use a moldy, abandoned shower stall in the basement. I took showers at a homeless services center. Occasionally, some random individual from the alley would try to open the garage door in the middle of the night.

I slept there for three years, until my name came up on the public housing wait list. Some homeless people probably die before their number comes up.

Unaffordable housing continues to be a crisis, and unregulated landlords are a large part of the problem. In a market-intensive society like America, landlords pretty much rule: there is little regulation and less rent control than there has been in decades. Moreover, a growing number of landlords refuse to honor Section 8 vouchers. Some states and cities ban housing discrimination against the poor, a ban that should be extended nationwide. Tales of slumlords abound, of course, but I’m less interested in the “bad actors” narrative than in out-of-control private property ownership. However, the social philosophy and ethics of property ownership are beyond the scope of this article. Given the constraints of capitalism, landlords as a class cannot be swept away and entirely replaced by a state housing function or a radical communitarian arrangement. In this article, the landlords will be dealt with through the issue of rent control and, more generally, that of (un)affordable housing.

Efforts to control rent governmentally go back several decades before that, but the modern era of rent control started in 1943 with rent ceilings, also known as “vacancy control”: rent restrictions remain in place when the unit re-rents. (With “vacancy decontrol,” rent restrictions turn off when there is a new tenant.) Most so-called rent control is actually vacancy decontrol: landlords can raise rents considerably after a vacancy, and they are allowed to pass on maintenance costs. Rent control laws enacted in the 1970s exempted newly constructed buildings from control. The New York City Metropolitan Council on Housing states in a fact sheet that controls are “not the sweet deal that is often portrayed in the media and by landlords intent on ending rent protections. Most rent-controlled tenants pay a 7.5% increase each year, the median income is under $22,000 a year, and the majority pay well over 1/3 of their income towards rent.”
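
Taking the Met Council’s own numbers, a quick sketch shows how a 7.5% annual increase compounds against nearly flat income. Only the 7.5% increase and the $22,000 median income come from the fact sheet; the starting rent and the 2% income growth rate are illustrative assumptions of mine.

```python
# Compounding a 7.5% annual rent increase against slow income growth.
rent_monthly = 600.0       # assumed starting rent, dollars per month
income_annual = 22_000.0   # median income figure quoted above

for year in range(0, 11, 5):
    share = (rent_monthly * 12) / income_annual
    print(f"year {year:2d}: rent ${rent_monthly:,.0f}/mo, {share:.0%} of income")
    rent_monthly *= 1.075 ** 5     # five more years of 7.5% increases
    income_annual *= 1.02 ** 5     # five more years of assumed 2% income growth

# Even starting from a rent of about a third of income, this pushes the rent
# burden past half of income within a decade.
```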

Soft rent control laws became widespread in the 1970s as a result of tenants’ rights activism, but they were hardly punishing to property owners, who were given financial and legal breaks. Despite these moderate provisions, however, landlords vigorously opposed all forms of rent control. Politically, the Zeitgeist turned conservative, and the tenants’ movement weakened. Rent control was defeated in Massachusetts in a 1994 referendum that numerous legal scholars considered unconstitutional. This resulted in a doubling of median rents and of the number of homeless in the Boston area. Five years later, in California, rent control was significantly weakened statewide.

The ’94 Massachusetts referendum is an interesting case study in political corruption. As mentioned, it was deemed unconstitutional by some legal experts. Instead of the referendum question being put only to residents of Boston, Cambridge, and Brookline, the areas to which it pertained, the property owners’ associations got in a back room with the State Attorney General and made it statewide so as to include the conservative western part of Massachusetts. Even so, the landlord lobby only narrowly won.

More recently, there was a major defeat of rent control in California: the Proposition 10 referendum of 2018. This measure would have repealed the Costa-Hawkins Rental Housing Act, a 1995 law forbidding cities from capping prices when a unit becomes vacant and from imposing rent control on single-family homes. Needless to say, rent is largely out of control, and property owner associations want to keep it that way. They spread misinformation throughout the media and outspent rent control proponents 3 to 1 to deliver their message. In Chicago, meanwhile, a struggle to vote in rent control measures continues, but only ward by ward. The ultimate goal is to get the General Assembly to repeal the Rent Control Preemption Act of 1997, which prohibits any Illinois municipality from adopting rent control programs.

Oregon is positioned to be the first state to enact statewide rent control, but many progressives are critical, as the bill’s limits on what landlords will be able to do are perceived as rather lenient: the cap on rent increases is generous (7%), there is an exemption for buildings less than 15 years old, and landlords can still push tenants to leave if the landlords intend to make substantial renovations. New York State Senator Julia Salazar has submitted a “good cause” eviction bill to protect tenants from being thrown out when landlords jack the rent up; the proposed annual increase cap is 3.3%. A similar bill has already been signed into law in Philadelphia.

The Nolo legal group says in a fact sheet that 32 states have laws prohibiting local rent control ordinances.

“In the end,” notes the International Encyclopedia of the Social Sciences, “landlords are able, through the setting of high initial rents, to do just as well under tenancy rent control as in an unregulated market and therefore supply is not affected.” A combination of rent control, higher wages, and expanded public housing could thus solve the affordable housing problem.

Low-cost housing is disappearing from the market, and America isn’t building enough homes. The issue of widespread housing unaffordability is naturally linked to the larger issue of declining social spending. The United States spends considerably less on social welfare than do European nations. According to the Social Expenditure Database of the Organisation for Economic Co-operation and Development (OECD), the U.S. ranks 21st in social welfare spending as a percentage of GDP (the top countries being France, followed by Finland, Belgium, Italy, and Denmark; this is based on 2016 data). In the ranking for per capita social welfare spending (2013 data), the U.S. is 13th (the top countries being Luxembourg, followed by Norway, Denmark, Austria, and Belgium).

In 2000, the rank of the U.S. for social spending as a percentage of GDP was even lower, and in the following year a group of American economists (Alberto Alesina of Harvard et al.) addressed the question of why there is not a European-style welfare state in America. They concluded that “European countries are much more generous to the poor relative to the US level of generosity…The differences appear to be the result of…racial animosity [which] makes redistribution to the poor, who are disproportionately black, unappealing to many voters.” Aside from this cultural explanation, political explanations are advanced. America’s lack of proportional representation, they claim, prevented the growth of a socialist party. Moreover, “America has strong courts that have routinely rejected popular attempts at redistribution, such as the income tax or labor regulation. The European equivalents of these courts were swept away as democracy replaced monarchy and aristocracy.” **

In 2018, the Joint Center for Housing Studies at Harvard University released a report on the state of the nation’s housing and noted the following: millennials and immigrants are helping to drive household growth; housing demand is shifting from renting to owning; the single-family housing market tightened to historically low inventory levels in 2017; and there has been a slowdown in rental demand.

More importantly, the following challenges were highlighted. “Cost burden” has significantly increased: nearly a third of all households paid more than 30 percent of their income for housing in 2016. Nearly half of renter households are cost-burdened, and more than half of these pay over 50 percent of their income for housing. For the lowest income quartiles, this meant a drop in post-rent income from $730 in 2007 to $590, and to only $490 for households with children. “The national median rent,” states the report, “rose 20 percent faster than overall inflation between 1990 and 2016, and the median home price rose 41 percent faster.” Income inequality played a major role in these increased housing costs. Only 1 in 4 very low-income households received rent assistance; the number of poorest households rose from 6 million in 2005 to 8.3 million in 2015. Moreover, homelessness increased in 2017, and although 1.4 million is the figure cited in the report, the real number is almost certainly larger, as undercounting is a well-known factor in this problem.
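
The report’s “cost-burdened” and “severely cost-burdened” categories are simple income-share thresholds (30% and 50%), which a few lines of code make explicit. The sample households below are invented for illustration.

```python
# Encoding the cost-burden thresholds used above: housing costs above 30% of
# income = "cost-burdened", above 50% = "severely cost-burdened".

def cost_burden(monthly_income: float, monthly_housing: float) -> str:
    """Classify a household by the share of income going to housing."""
    share = monthly_housing / monthly_income
    if share > 0.50:
        return f"severely cost-burdened ({share:.0%})"
    if share > 0.30:
        return f"cost-burdened ({share:.0%})"
    return f"not cost-burdened ({share:.0%})"

# Hypothetical households: (monthly income, monthly housing cost)
for income, housing in [(4000, 1000), (2500, 900), (1500, 850)]:
    print(f"income ${income}, housing ${housing}: {cost_burden(income, housing)}")
```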

In the matter of social goods and services involving human rights (e.g., food, shelter, health care, education), it is the legitimate function of the government to be strongly involved. Since third parties in the US are structurally not permitted to wield significant influence, short of revolution (highly improbable here), the only way forward seems to be a radical working-class mass movement, like that of the Gilets Jaunes in France, to act as an unavoidable pressure on the political class to take action for economic justice.

……

*from a 1971 interview with Studs Terkel

**https://www.nber.org/papers/w8524.pdf

…….

~ Rylan Dray, PhD – March 2019

 

Homeless Scholar Blog ~ AGITATED DEPRESSION

The stereotypical image of a depressed person is one of persistent sadness, lack of energy, and loss of usual interests, often including food; social isolation; and statements or feelings of helplessness, hopelessness, or self-denigration. By contrast, agitated depression (AD) is characterized by, among other things, extreme irritability, anger, restlessness, racing thoughts and incessant talking, outbursts of complaining or shouting, pacing, and picking at the skin. Causes and triggers of the disorder include traumatic events, long-term stress, hormone imbalances, and bipolar and anxiety disorders. Suicide is a definite risk with this syndrome.

According to the DSM-5, at least one episode of major depression is required for the diagnosis. Also required are at least two of the following: psychomotor agitation, or physical symptoms of agitation and restlessness; racing and crowded thoughts; and psychic agitation, or intense inner tension. An Italian study noted as far back as 2004 that AD was common among depression outpatients and emphasized the importance of getting the subtyping right, as certain antidepressants might increase agitation (Benazzi, 2004). (This resonates with my own personal experience with the alarmingly stimulating Prozac, even at a very low dose.) That study defined AD as a major depressive episode with four or more hypomanic symptoms.

Considering the phenomenon of agitation per se, it should be remembered that the disorder may not be of psychiatric origin. A recent commentary in the Annals of Emergency Medicine (Mason et al., 2018) notes the many different possible medical causes of agitation in a patient: hypoglycemia, hypoxia, delirium, intracranial injuries, and encephalitis, to name just a few. (Occasionally, the problem can be easily solved by simply asking why the patient is agitated. De-escalation, when it can be done safely, should always be tried, as opposed to immediately jumping to more extreme measures. “A turkey sandwich,” the authors observe, “has been known on occasion to de-escalate a smoldering patient.”) Often, though, physical restraints are necessary; however, patients can be injured by fighting against them. Proper method is essential, as is the proper choice of drug therapy. Antipsychotics and benzodiazepines are both good options. The “B-52” (Benadryl, 5 mg of haloperidol, and 2 mg of lorazepam) is often used, but it can potentially prolong the QT interval. Ketamine can be effective in rapidly controlling an acute and potentially violent patient, for example the patient with excited delirium who is “violent, shouting, hyperactive, and hyperthermic, [has] unexpected strength, and may progress to sudden cardiopulmonary arrest, even with intervention.”

Melancholia, a term going back to antiquity and lasting well into the 19th century, assumed many forms, some of which would correspond to mixed states like agitated depression today. Koukopoulos & Koukopoulos (1999) include a history of this protean phenomenon in order to shed light on the complexity of the contemporary diagnosis of AD. By the mid-19th century, “a significant number of melancholias became a component of a more complex disease entity and lost their nosological independence. This…led eventually to the creation of ‘manic-depressive insanity’ by Kraepelin in 1899 and the definitive substitution of the concept of melancholia with that of depression.” Again, from the practical standpoint, understanding the distinction between AD and the more “standard,” sad and lethargic form of depression is essential to avoid psychopharmacological errors, which have sometimes led to suicide attempts (Healy, 2004).

Psychotherapy is often indicated; it may be psychodynamic in nature or, more likely, cognitive-behavioral. The patient will need to learn coping skills, especially the ability to detect triggers and prevent them from setting off the pathological behaviors.

Regarding drug treatment, good results have been obtained with typical and atypical antipsychotics, anticonvulsants, and benzodiazepines, with olanzapine being particularly rapid and effective. Lithium is occasionally useful, and severe cases may call for electroconvulsive (shock) therapy.

Koukopoulos & Koukopoulos (1999) make a distinction between agitated depression and depressive anxiety, with the latter often responding to antidepressants. They note: “The anxiety observed in AD is inherent in the agitation itself…The despair, the undirected and groundless rages and the violent, suicidal impulses of the more mundane agitated depression of today, seem to be caused by that ominous, dark force…so violent that it cannot be anything but manic in nature.” To treat such agitation with SSRIs-as-usual (or any antidepressant) can lead, and has led, to disaster.

~ Wm. Doe, Ph.D., CPhT – February 2019

SELECTED REFERENCES

F. Benazzi (2004). Agitated depression: A valid depression subtype? Progress in Neuro-Psychopharmacology & Biological Psychiatry, 28.

D. Healy (2004). Let Them Eat Prozac: The Unhealthy Relationship between the Pharmaceutical Industry and Depression. New York: NYU Press.

A. Koukopoulos & A. Koukopoulos (1999). Agitated depression as mixed state and the problem of melancholia. Psychiatric Clinics of North America, 22 (3).

J. Mason et al. (2018). Agitation crisis control. Annals of Emergency Medicine, 72 (4).