Slow down and chew a book: About “Notes on the Death of Culture”

My book collection is about 95% nonfiction. There are many of what my husband calls “long-haired books” (a derogatory term taken from, I think, Foghorn Leghorn). He is amazed that I stay committed to reading volumes he considers school “textbooks”. I’m a fan of reality; I attempt to understand the world. So what? Thought and introspection are considered tedious these days, amid our colossal array of cultural activities, rapid-fire news and opinions, and a fast-paced, fit-it-all-in lifestyle of commitments to work, family and leisure. But engaging with a book gives me time to ponder and to learn, to sloooooooow doooooowwwwnnnnn.

I tried modern fiction. I don’t much like it. Every month Amazon Prime gives me a choice of a free download. I’ve gotten three, made it through two, and was unimpressed. They just didn’t grab me. My preference is for well-written nonfiction narratives and essays.

This one might be impressive, I thought, as I spied Mario Vargas Llosa’s Notes on the Death of Culture on my local library’s list of new arrivals. “Essays on Spectacle and Society” – only 240 pages. The Nobel laureate discusses the decline of intellectual life and his problem with global culture.

Notes turned out to be a moderately difficult book to digest. It took me over a week to read as I made my own notes (which I almost always do in order to remember what I read) and grappled with these ideas. This was not one of those books where you sit back, absorb, and nod, but one where you pause, look away from the page, and think about whether you agree with the author’s premise and why.


Cryptozoology and Myth, Part 1: The Illusion of Facticity in Unknown Animal Reports

What can we make of folklore tales that cryptozoologists use to support claims that an unknown animal has been historically reported and remains to be identified?

Cryptid researchers say that modern reports of Bigfoot-Sasquatch, lake monsters, sea serpents, giant flying animals, and elusive land creatures are supported by the stories of native people: legends, myths, and sagas. Are these stories evidence? Can we reach back in time to use old tales to reinforce and help explain modern sightings of cryptids?

I’m not well-versed in folklore studies, having just a few pop culture college electives to my credit and many years of casual observation. But I had heard from respected others that modern interpretation and application of ancient cultural tales to the cryptozoology field was problematic. I wondered exactly why. The frequently cited source for understanding this aspect of cryptozoology is Michel Meurger’s Lake Monster Traditions: A Cross Cultural Analysis, which I obtained.

There is much to digest in this book, which was translated from French. The translation sometimes makes the meaning difficult to decode, but it’s not incomprehensible.

I intend to write a series of posts exploring the author’s treatment of this material and his recommendations for how we should consider it in cryptozoological research.

The preface and introduction alone gave a jolt to my thinking. A review of what they contained seemed worth sharing for those who have not been introduced to these ideas. It’s obvious that the work still applies to today’s TV- and internet-based cryptozoologists.



The 1988 US Army-commissioned report on Enhancing Human Performance

It was news to me that back in 1985, the US Army commissioned an analysis of certain techniques that were proposed to enhance human performance. The Army Research Institute asked the National Academies to form a committee to examine these questionable strategies. The report is available here where you can read it for free.

Enhancing Human Performance: Issues, Theories, and Techniques (1988)
Daniel Druckman and John A. Swets, Editors; Committee on Techniques for the Enhancement of Human Performance, National Research Council

The following is my takeaway from this curious report.

The committee’s task was to “evaluate the existing scientific evidence for a wide range of techniques that have been proposed to enhance human performance” and to “develop general guidelines for evaluating newly proposed techniques and their potential application”. (p 15)

The committee looked at the relevant scientific literature and unpublished documents; each subcommittee reported on its findings. Personal experiences and testimonials were not regarded as an acceptable alternative to scientific evidence, even though, as the committee notes, people may hold them with a high level of conviction.

The study was prompted by well-respected military figures who felt these phenomena had military potential as learning and communication tools, or as threats or aids to defense. For example, random number generators (RNGs) were used to test for micro-PK (psychokinesis) ability. Those with this ability were said to be able to mentally bias the machine to produce non-random numbers. Ideally, such power could be used to affect enemy equipment.
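For context on how such claims get scored, a micro-PK session of this kind comes down to a simple statistical question: did the output of the RNG depart from chance more than randomness alone would allow? Here is a minimal sketch in Python; the session size and hit count are hypothetical numbers for illustration only, not a procedure taken from the report.

```python
from scipy.stats import binomtest

# Hypothetical session: a subject tries to mentally bias a binary RNG toward 1s.
trials = 10_000   # number of random bits generated during the attempt
hits = 5_080      # bits that came up 1

# Two-sided test against the chance rate of 0.5.
result = binomtest(hits, trials, p=0.5, alternative="two-sided")
print(f"observed rate: {hits / trials:.4f}")   # 0.5080
print(f"p-value:       {result.pvalue:.3f}")   # roughly 0.11: consistent with chance
```

Only a departure large enough to be implausible under chance, and repeatable under controlled conditions, would count as evidence of an effect.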

Some types of enhancements examined are not that well-known to me or in my realm of interest: learning during sleep (the committee concluded there was no evidence, but a second look is warranted), accelerated learning (little scientific evidence was found, but more investigation is needed), guided imagery, biofeedback, split-brain effects, stress management, cohesion, influence, and parapsychology. (“The committee finds no scientific justification from research conducted over a period of 130 years for the existence of parapsychological phenomena.” Therefore, the Army should drop it.) It was this last section, from a subcommittee chaired by Ray Hyman, that was my focus.

I found the entire report to be readable and rather interesting and wondered why I hadn’t come across it before. If anything, the appendix of key terms at the end is extraordinarily useful.

The parapsychology section included examination of extraordinary mental abilities – remote viewing, micro-PK, and the Ganzfeld technique for enhancing telepathy. I was familiar with the claims for remote viewing and Hyman’s critique of the Ganzfeld. I was interested in the state of parapsychology, having examined it through the Hyman/Honorton exchanges, so this report added to my knowledge. I also knew of the academically framed lab work of Jahn. Here, in one place, was a science-based committee fairly assessing ALL the evidence for these alleged paranormal powers. It concluded that none of it had merit, and the military gave up on efforts to incorporate these techniques.

The committee concluded that after 15 years of research, the case for remote viewing was very weak, virtually nonexistent. There were certainly claims by some researchers of a clear effect, but these claims were exaggerated. Two research programs – Helmut Schmidt’s and Robert Jahn’s (PEAR) – made up 60% of the experiments that had been conducted. Their results revealed a small departure from chance. A tiny effect is amplified by the sheer volume of trials incorporated; the report notes Jahn ran 78 million trials! Many studies that each show a tiny effect can look statistically significant when grouped together, but regardless, the effects were extremely weak. The parapsychology committee argued that the most influential positive effect in Jahn’s massive database was the result of testing one person. This is not a robust set of data.
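To see why an enormous trial count can make a practically meaningless bias look statistically impressive, here is a rough back-of-the-envelope sketch in Python. The 78 million figure is from the report; the 50.02% hit rate is an assumed number for illustration, not a figure from the PEAR data.

```python
import math

def z_score(hits: int, trials: int, p_chance: float = 0.5) -> float:
    """Normal-approximation z-score for `hits` successes in `trials`
    binary trials when chance alone predicts a rate of `p_chance`."""
    expected = trials * p_chance
    sd = math.sqrt(trials * p_chance * (1 - p_chance))
    return (hits - expected) / sd

trials = 78_000_000            # trial count the report mentions for Jahn's program
hit_rate = 0.5002              # hypothetical: 0.02% above chance, trivial in practice
hits = round(trials * hit_rate)

print(f"departure from chance: {hit_rate - 0.5:+.4f}")
print(f"z-score: {z_score(hits, trials):.1f}")   # about 3.5, conventionally 'significant'
```

Statistical significance here says only that the tiny deviation is unlikely to be pure sampling noise; it says nothing about whether the cause is paranormal rather than an artifact of the testing, which is exactly the committee’s point.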


In science, anomalies have a definition: they are a precise and specifiable departure from a well-defined expectation. In parapsychology, however, an anomaly can mean almost anything. The term is vague and undefined; anything that looks odd is counted. With this wiggly definition, any one anomaly can have an infinite variety of possible causes, not all the same. That’s not particularly useful.

Because parapsychologists do not have a theory to explain the anomalies, there is no way to show that the anomaly in one experiment is the same as the anomaly in another. Without a theory to hang the data on, we do not have a coherent class of phenomena. Arguments are made that “there’s something there.” Perhaps there is. Odds are, it’s not something paranormal; it’s an artifact of the testing.

Then there is Cleve Backster, who experimented on plants by hooking them up to a polygraph. His astonishing work on plant responses was popular in the press and appeared to be influential. People believed his study was scientifically solid. But it wasn’t. It was not repeatable under controlled conditions. The questionable nature of his work never reached wider audiences. The idea of “bioenergetic fields,” supposedly discovered by Backster, was put forth as part of the explanation for dowsing, energy healing and remote viewing. The idea of plant telepathy and special perception is still promoted by New Age purveyors. The Backster idea was something certain people WANTED to believe in.

It’s a rare case, as noted in the report, that a person can distinguish between his subjectively compelling personal belief and that which is scientifically justifiable. I’d previously researched this with regard to the exchanges between Charles Honorton and Ray Hyman. Hyman’s three types of criticism show up in this report:

  1. Smoking gun – cause is due to factor X
  2. Plausible alternative – cause could be due to factor X
  3. Dirty test tube – cause is from some artifact resulting from unacceptable standards

The dirty test tube critique was the one Hyman used against the Ganzfeld results. (It was also the basis of Jim Alcock’s critique regarding remote viewing.)

Honorton eventually agreed with Hyman that the Ganzfeld experiments were not of optimal design, but insisted that this didn’t affect the results. If the scientific methods are not appropriate, error creeps in and the results are unreliable. In the conclusions of the parapsychology section, the committee determined that the research methods and results it reviewed were too weak to establish the existence of paranormal phenomena. Thus, it recommended that such techniques were not worthy of investment.

Yet you will regularly encounter those who INSIST remote viewing works and has been successfully used. And there are those who insist parapsychology is or was successfully used by the military and will eventually break through and show up all of us naysayers. I doubt it. It’s been a very long time, there’s been plenty of opportunity, and they’ve produced nothing convincing. If the military discarded the idea that the mind can be used as any sort of extrasensory tool or weapon, that clearly signals it’s not worth academic efforts to pursue either.


Rock and roll and the occult – A Book Review

Season of the Witch: How the Occult Saved Rock and Roll
by Peter Bebergal (2014)

Saved it from what? I’m not clear. From “sugary teenybopper purgatory”? Meh. I don’t think the “occult” interest was the key aspect. Culture was changing and music reflected this. Pressing beyond the bounds of the norm is the way of all art and creativity. Perhaps occult themes were one convenient path, but they were also widely used simply for theatrics and to gain attention.

This book was not as good as I hoped. The subject matter – occult aspects of rock music – is rich with possibilities, and every obvious aspect is at least mentioned: Robert Johnson’s deal with the devil, the Beatles dabbling in Transcendental Meditation, the Rolling Stones’ lyrical relationship with Satan, Aleister Crowley’s connections to Jimmy Page (Crowley’s ideas are threaded throughout the book), the hidden meanings in Led Zeppelin albums, the Satanic imagery of heavy metal, alternative spiritual ideas, even Jay Z and Illuminati symbolism.

But nothing is covered deeply. It’s written in an art-based language instead of what I would have preferred – a historical and sociological framework (surprising, since Bebergal is an expert in religion and culture). I just did not enjoy the language he uses. Here’s an example:

“Art and music were the vessels for both the Romantics and the hippies. The piper at the gates of dawn was playing his panpipe for those who needed to hear. And the youth of the 1960s were pulled towards it like a siren song. There was no turning back. Rock culture was now inhabited by a Romantic soul that looked to the gods of the past. And like the Romantic poets who were their forebears, rock musicians crafted music that did more than tug at the heartstrings of teenagers. It was music that urged them towards transcendence, towards creating their own inner landscapes and exploring the antipodes of their minds.”

Such rumination is fit for the intro and conclusion but not what I wanted to read in the informational body of the text.

I did like the section on David Bowie very much. But several long parts of the book were more about drug use than occult ideas. It seemed to go off on tangents and lacked the strong focus and factual information I would have preferred. Many music culture fans will find this book pleasing, my personal preference notwithstanding. So, your mileage will vary.

Warnings of impending danger: Science and Social Factors

This is a paper I prepared for an ethics graduate class and have updated (7-June-2014). I present it in conjunction with a Strange Frequencies Radio podcast appearance on Sunday June 8.

Natural disasters happen every day. The people who can help prepare society for them are not psychics or crank pseudoscientists but those who study events inside out and upside down – scientists. Those who consider prediction a part of their research and responsibility range from weather forecasters to seismologists and volcanologists.

It’s a great responsibility to be tasked with warning officials and the public about probable natural disasters. Warnings of impending danger cause predictable social and economic effects that must be considered along with achieving the primary goal, which is safety and minimizing loss of life. If a disaster prediction is wrong, several million people might be unnecessarily affected (Olsen, 1989 p. 107) and the region may suffer economic losses. If it is correct, but delivered inadequately, disaster is inevitable.

Accuracy of predictions is based on what is possible to observe and data that can be collected. For example, hurricane predictions are very accurate because scientists have extensive weather instruments and well-tested forecasting techniques to use. Volcanic hazard areas and shorelines prone to tsunamis are mapped based on zones identified through historical records – scientists can find geologic evidence that the land was affected by lava, ash or debris flows or inundated with waves of debris.

For many predicted events (volcanic eruptions, hurricanes, floods, blizzards), there is time to deliver the message and adequately prepare. The worst situation is certainly earthquakes. There are no widely accepted precursors for quakes. Reliable predictions are long-term and large-scale, relatively unhelpful for preparation. With the potential for large seismic events to kill huge numbers of people, earthquake prediction theories have been particularly problematic and fraught with ethical dilemmas for the scientific community, public authorities and media.

It’s important to distinguish between predictions from the scientific community and those arising from the nonscientific community (pseudoscientific speculation, psychics and cranks). Scientific predictions must be supported by background theory and data and withstand skeptical scrutiny to be considered credible. The foundation mechanisms, explanations, calculations and assessments are expected to have gone through the gauntlet of peer review in order to gain acceptance. If the foundation is valid, then short-term, specific predictions will be credible. Predictive successes that have followed the conventional route include volcanic evacuations (Mt. St. Helens, Mt. Pinatubo in the Philippines, and the island of Montserrat) and severe weather alerts. Psychic and pseudoscientific predictions are not supported by theory or data and are not credible. I’ll not be addressing the ethics of those predictions as they are in a whole other realm.

Failed predictions fall on an impact scale from low (creating public inconvenience) to high (massive death tolls) with economic losses and potential career destruction in between. The following are some notable examples that highlight the major pitfalls inherent in predicting (or ignoring predictions of) natural disasters.

The Brady-Spence Debacle

In 1976, Dr. Brian Brady, a U.S. government scientist, made a specific prediction of a huge seismic event to take place in Lima, Peru in July of 1981. While the prediction itself was remarkably detailed, the theory supporting it was completely opaque (Olsen, 1989 p. 41). Brady’s theory had not been tested or published for peer review. During the lead-up years to the event, things got complicated. Egos, priorities, agendas and protocol hijacked opportunities for proper, coherent, scientific critique. Peruvian officials and the public were confused by the lack of a reliable feed of information. The unstable political situation at the time led Peruvian citizens to think that their government was using the prediction to continue military control (Olsen, 1989 p. 131; Sol & Turan, 2004). The predicted quake did not occur. But widespread disorder, decline of tourism, decrease in property values, and general public unrest resulted in estimated economic damage in Lima of $50 million (Mileti & Fitzpatrick, 1993 p. 55).

The failure to follow scientific protocol let the situation get out of hand. This episode is an example of a loss of objectivity by the chief scientist, the failure of the scientific community to address a serious situation in a coordinated way, and government agencies accepting rumors and pursuing misguided agendas without accurate information.


Nevado del Ruiz

In 1985, Colombian scientists knew that villages in the valleys around the Nevado del Ruiz volcano were prone to disaster from eruptions. Yet money was not allotted by the government to monitor the active volcano. The data that could be collected was ignored or not taken seriously by officials. When the media reported that an eruption would produce deadly mudflows that would obliterate the village of Armero, civic leaders called these press reports “volcanic terrorism”.

Church leaders added to the propaganda by telling the people of the village not to fear. The poor population made no preparations to evacuate. Inevitably, the volcano erupted. That night, those who attempted to evacuate did not know where to go. Civil defense tried to get people out of the town, but many refused to go, telling rescuers they were certainly mistaken. 23,000 people perished when a flood of meltwater and warm mud buried the town. Armero no longer exists; the bodies were encased in dozens of feet of debris.

Government inaction in this entirely preventable situation was devastating. The situation was a heartbreaking testimony to the vulnerability of the poor to manipulation by authority (Bruce, 2001).

Browning’s New Madrid prediction

Iben Browning was a scientist with unconventional ideas who took his claim directly to the media, which gave it wide coverage. He pronounced that an earthquake on the New Madrid fault in the US Midwest would be triggered in December 1990 by tidal forces. In light of his prediction, serious social disturbances occurred. When the quake did not occur, he was ridiculed. Sol & Turan (2004) note that one cannot use the defense of free speech to support predictions such as this, since they create social disturbances with harmful consequences. Your speech has consequences.

Mr. Browning rejected scientific protocol and valid criticism but used the press to create a stir. While these actions were unethical if one subscribes to the ideals of the scientific community, the media also shares some blame for giving Browning’s opinion credibility it did not deserve. Several cranks persist in using this same “tidal forces” idea, unsupported by science, to gain attention from the media.


Hurricane Katrina

Hurricane Katrina in 2005 was the costliest and one of the deadliest hurricanes ever to hit the United States. A US House committee (2006) investigated the catastrophe and found that, though the forecasts were remarkably good, the right information did not get to the right people in time, and decision-makers seriously underestimated the threat.

It was well known how vulnerable New Orleans was to hurricanes, yet inadequate provisions, few acts of leadership, government ineptitude, misguided advice, and media hype of violence together resulted in a pathetic governmental response and a heightened death toll. Katrina also revealed ugly issues of race and class treatment, showing that being poor and black put one at a distinct disadvantage in a disaster situation. Previous federal government cuts to disaster preparedness had increased the vulnerabilities and taught a hard lesson about paying now or paying later.

Boxing Day Tsunami

The Sumatra-Indian Ocean tsunami of 2004 was an example of how a lack of coordinated monitoring, notification and evacuation procedures caused an enormous and mostly preventable loss of life (Revkin, 2004). Fifteen minutes after the offshore quake that generated the deadly tsunami, U.S. scientists at the Pacific Tsunami Warning Center in Hawaii sent out a warning bulletin. In spite of their attempts to contact counterparts in other countries, the calls were not answered; the information and warning did not get through. Thousands died along populated coastlines, completely unaware of the surge scientists knew was coming.

Back in 2003, Dr. Phil Cummings of Australia had pushed for an expansion of the tsunami network into the Indian Ocean. Formation of a study group was met with resistance from participating countries and the network was never expanded. In hindsight, it was noted that Dr. Cummings had accurately predicted the damage that would be done to Sumatra and India. This event put the word “tsunami” into the vocabulary of many citizens around the world.

L’Aquila, Italy

Giampaolo Giuliani forecast the 2009 L’Aquila earthquake in Italy based on radon ground-emission readings – a scientifically questionable (but not outlandish) theory. Giuliani was reported to authorities for “spreading panic” by broadcasting his warnings weeks before the predicted event. Italian scientists assured the townspeople that quakes were not predictable, and officials forced Giuliani to remove his warnings from the internet (Neild, 2009; Mackey, 2009). When the predicted quake did not occur on the expected date, March 29, the Italian Civil Protection Agency denounced Giuliani as “an imbecile” (Israely, 2009). A quake occurred on April 6, devastating the city of L’Aquila and killing more than 300 people.

In this case, a desperate scientist had made an attempt to do what he thought was the right thing. The government agency chose ridicule and censorship instead of providing a measured, coordinated response to a questionable scientific prediction. What might have been the result if a different tactic had been taken?

In 2012, an Italian court convicted six of the scientists and a government official of manslaughter for failing to give adequate warning of the deadly earthquake. Were they at fault or just mistaken? What happens when scientists are held THIS accountable for a wrong call in an uncertain situation? The public will suffer.

The parties involved

Most crises are not instantly obvious. They take time to develop, sometimes from vague or contradictory signals (Boin & ’t Hart, 2006 p. 49). Citizens expect public officials to make critical decisions, provide direction and issue emergency warnings (Barberi et al., 2008). Because they are not experts on scientific topics, officials are vulnerable to misunderstanding and mischaracterization (Olsen, 1989, p. 38 and 139). Social scientists note that “the public wants to hear things from people they trust” and “they want to hear things repeated”. Miscommunication can occur all too easily when an official speaks outside his area of expertise and/or garbles the message. Constant and correct communication is the key.

Predictions have a way of leaking to the press. The media can be an effective and critical means to deliver warnings and will look to experts for information and confirmation. Scientists, however, have not traditionally been open to making themselves available to address the public. One can argue that it is their ethical obligation to be accessible in such a situation and they MUST do so to establish and retain their place as a credible source of information. Otherwise, alternate, not-so-credible sources step in to fill the void.

New electronic media mean word-of-mouth takes on a whole different scale as warnings from credible and non-credible sources are passed instantaneously around the world. “Prediction” via email or social network platforms is popular. Likely unaware that a warning is scientifically baseless, and without an easy way to judge its credibility, a receiver feels she is doing a good deed by passing on a warning of impending doom. Warnings like this can cause undue concern and economic effects.

The elemental question in predictive scenarios is: when is the evidence adequate to make a prediction to the public? Many prognosticators feel they have potentially life-saving information and are overcome with a moral obligation to inform the public regardless of protocol. They can’t seem to adequately assess the potential fallout if they are wrong. The public, however, considers costs of all kinds and is not always compelled to follow scientific advice. The public may be misled by a manufactured scientific controversy (such as vaccine dangers or global warming).

Science gets accused of suppressing unorthodox ideas that may form the basis of innovative prediction theory. The punishment for a scientific maverick can be the end of a career. Desperate scientists with unorthodox ideas, rejected by their peers, will put forth their ideas to the community that will listen – the media and the public.

The modern public generally has veneration for science and scientists (Posner, 2004 p. 97; Barberi et al., 2008). Yet science cannot deliver absolutes or provide guarantees. The prediction scenario must take public perception into account or the prediction will cause harm whether the event occurs or not.

The world’s most vulnerable population is the poor. Keys et al. (2006) assert that expensive warning systems are a hard political sell if they are just to save poor populations.

Governments and citizens will hesitate to undertake precautions that are expensive and time-consuming. The public, however, is influenced by seeing others in the community (or, these days, online) taking a warning seriously (Mileti & Fitzpatrick, 1993, p. 87). Where people are poor, uneducated or distrustful of government (Bolin, 2006 p. 129), there can be a reluctance to accept an “official” warning to evacuate. People who feel they are in control of their lives take action to survive. Those who feel their lives are controlled by an external force will passively await whatever fate will come. Fatalistic attitudes, especially as a result of religious beliefs, are still encountered today, most notably in poor populations (Quarantelli et al., 2006 p. 19, and Bruce, 2001 p. 19). Leaders must be forthright to convince citizens to take the most reasonable course of action. Compassion for personal human concerns must be displayed for a warning to be heeded. Government must be prepared to follow through with obligations to the population whether the event occurs or not.


Many predictions are valid attempts to do the right thing under uncertain circumstances. There are social and political reasons why a prediction is taken seriously or completely ignored. The media and public may give a baseless prediction credence where the scientific community does not.

When the public, media and politicians become involved, a prediction becomes socially complex. Warnings must be delivered in relation to social conditions (Rodrigues et al, 2006b p. 486).

Government and scientists have an obligation to learn from historical events and not repeat mistakes. Even false alarms do not diminish future response if the basis and reasons for the miss are understood and accepted by the public (Sorensen & Sorensen, 2006 p. 196-7). Therefore, authorities should be willing to prepare their citizens without hesitation if the prediction is supported by science.

Science has an established process to be followed for a theory to gain acceptance. Scientists should be discouraged from short circuiting this process and appealing directly to the public. However, the scientific community must evolve its process to include modern technology and the new media in consideration of basic human needs and various responses to life-threatening events.

Barberi, F., M.S. Davis, R. Isaia, R. Nave, T. Riccia (2008). “Volcanic risk perception in the Vesuvius population.” Journal of Volcanology and Geothermal Research 172: 244 – 258.

Boin, A. and P. ‘t Hart (2006). “The Crisis Approach”. Handbook of Disaster Research. H. Rodriguez, E. Quarantelli, R. R. Dynes. NY, Springer: 42-54.

Bolin, B. (2006). “Race, Class, Ethnicity, and Disaster Vulnerability”. Handbook of Disaster Research. H. Rodriguez, E. Quarantelli, R. R. Dynes. NY, Springer: 113-129.

Bourque, L. B., J.M. Siegel, M. Kano, M. M. Wood (2006). “Morbidity and Mortality Associated with Disasters”. Handbook of Disaster Research. H. Rodriguez, E. Quarantelli, R. R. Dynes. NY, Springer: 97-112.

Bruce, V. (2001). No Apparent Danger. NY, Harper Collins.

Bryant, E. (2005). “Personal and Group Response to Hazards”. Natural Hazards, Cambridge Univ Press: 273-287.

Hinman, L. M. (2005). “Hurricane Katrina: A ‘Natural’ Disaster?” San Diego Union-Tribune. San Diego, CA. Sept. 8, 2005.

Israely, J. (2009). “Italy’s Earthquake: Could Tragedy Have Been Avoided?” Time. Retrieved April 7, 2009.

Johnson, B. F. (2009). “Gone and Back Again”. Earth (07 Apr 2009). Retrieved April 20, 2009.

Keys, A., H. Masterman-Smith, D. Cottle (2006). “The Political Economy of a Natural Disaster: The Boxing Day Tsunami, 2004.” Antipode 38(2): 195-204.

Mackey, R. (2009). “Earthquake Warning was Removed from Internet”. NY Times News Blog (The Lede) (06 April 2009). Retrieved April 6, 2009.

Mileti, D. S. and C. Fitzpatrick (1993). The Great Earthquake Experiment. Boulder, CO, Westview Press.

Neild, B. and G. Deputato (2009). “Scientist: My quake prediction was ignored”. (06 April 2009). Retrieved April 6, 2009.

Olsen, R. S. (1989). The Politics of Earthquake Prediction. Princeton, NJ, Princeton Univ Press.

Posner, R.A. (2004). Catastrophe: Risk and Response. Oxford Univ Press.

Quarantelli, E. L., P. Lagadec, A. Boin (2006). “A Heuristic Approach to Future Disasters and Crises: New, Old and In-Between Types”. Handbook of Disaster Research. H. Rodriguez, E.L. Quarantelli, R. R. Dynes. NY, Springer: 16-41.

Revkin, A. C. (2004). “How Scientists and Victims Watched Helplessly”. New York Times. December 31, 2004.

Rodriguez, H., E.L. Quarantelli, R. R. Dynes (2006a). Handbook of Disaster Research. NY, Springer.

Rodriguez, H., W. Diaz, J. Santos, B.E. Aguirre (2006b). “Communicating Risk and Uncertainty: Science, Technology, and Disasters at the Crossroads”. Handbook of Disaster Research. H. Rodriguez, E. Quarantelli, R. R. Dynes. NY, Springer: 476-488.

Scanlon, J. (2006). “Unwelcome Irritant or Useful Ally? The Mass Media in Emergencies”. Handbook of Disaster Research. H. Rodriguez, E. Quarantelli, R. R. Dynes. NY, Springer: 413-429.

Select Bipartisan Committee to Investigate the Preparation for and Response to Hurricane Katrina (2006). “A Failure of Initiative”. Washington, D.C., US House of Representatives.

Sol, A. and H. Turan (2004). “The Ethics of Earthquake Prediction.” Science and Engineering Ethics 10(4): 655-666.

Sorensen, J. H. and B. V. Sorensen (2006). “Community Processes: Warning and Evacuation”. Handbook of Disaster Research. H. Rodriguez, E. Quarantelli, R. R. Dynes. NY, Springer: 183-199.

USGS (1999). “Most Recent Natural Disasters Were Not the Century’s Worst, USGS Says.” News release – US Dept of Interior, USGS (Geologic Hazards) (30 December 1999).

* I use the term prediction throughout this post since I am referring to the cases where a particular event was said to occur within a discrete time frame in a certain location. Please see this post in which I distinguish forecasting from prediction.

Originally published on this blog on 28 Mar 2011

Getting noticed for not calling people stupid

Two observations today: one positive, one negative. Want to make an impact with your skeptical commentary? You TOTALLY can.

First, an unexpected effect. I was contacted by researchers in Japan who saw one of my Doubtful News articles railing against media outlets that published a baseless story about a woman who claimed that MSG (monosodium glutamate) in many foods causes a glutamate imbalance resulting in autism and other neurological disorders. The article I cited was published on Fox News, which had picked it up from the San Francisco Chronicle. It was copied without much additional information to several other sources. It was worse than “false balance”. Even though the original article mentioned that no scientific research supports this claim, that point was lost under the scary headline. The researcher who contacted me noted that my piece was the only one openly critical of the story. That was the gist of my piece: one person (supported by some ridiculous autism woo-woo sites) has a zany idea and that is considered news? That is fearmongering for no reason, and it’s a problem in our society.

The researchers, who were affiliated with a big e-commerce company in Japan, were interested in the market image of MSG in the US and other countries. I was able to provide some informed opinion about food fads and fallacies that I had learned through my work on Doubtful News and through skeptic-based health media.

I consider the exchange with market researchers, as well as my various contacts with reporters and journalists, a direct effect of blogging from a science-based point of view, building a web presence, and appearing high in search results. How about that!

[Image: top search results in Google for MSG + autism]

I really don’t think I’d get so many requests for exchanges if I were one of those asshole skeptics. While talking to other science-minded people about my interest in the paranormal and why people believe, I too often hear dismissiveness. And worse, I hear paranormal believers being called “stupid”, “idiots”, “morons”, and the like – that they deserve to lose their money or waste their time because they’re dumb. No. That would be YOU who are dumb. It’s well established that paranormal belief or buying into questionable claims is NOT simply a matter of education or IQ. Smart people believe a lot of nonsense things.

I find great value in my discussions with pro-paranormal people. By treating them with respect and finding out about their opinions, I can understand the subject more completely and work to change misperceptions. If I went around yelling “BIGFOOT DOESN’T EXIST, you idiot” or “How can you be so stupid to think that a place is haunted?” I would be exactly as obnoxious as the people who regularly scream at me in blog comments and email, telling me to “Get educated” or “Shut up about stuff [I] know nothing about.” Yeah, I get that a lot. I’m not going to go down the name-calling road. It makes me hit delete, so what do you think happens when we do the same?

I’m pissed that skeptics are still thought of as curmudgeonly, closed-minded know-it-alls. No wonder people dislike them. Many do seem to be complete assholes. The answer to why people subscribe to paranormal or fringe beliefs is far more complicated than “they’re stupid”.

Deal with the claim, not the people. And I still follow the trope “don’t be a dick”. It actually works.

Sciency, scientifical and wackadoodle are now official

New words have been added to the Oxford English dictionary, the “definitive record of the English language”, including a few near and dear to me…

New words list March 2014 | Oxford English Dictionary.

  • bookaholic: Yes, I am a minor sufferer.
  • Coney dog*: I very much enjoy these and have since I was a kid.
  • demonizing: This word is getting around, overused, just like “evil”.
  • do-over: I like this word, employ it often.
  • ethnozoology*: A technical term for the actual scientific part of cryptozoology. [Definition given as “The traditional knowledge and customs of a people concerning animals; the scientific study or description of this.”]
  • sciency*: This is one of my words, obviously. But they spelled it wrong. Sounds Sciencey [Definition given as “Of a somewhat scientific or technical nature; (also) having an interest in or aptitude for science.”] The “somewhat” is important.
  • scientifical method*: I wish I knew what they meant by this versus the scientific method! [Definition given as an older usage meaning “scientific method”] *pffth*
  • scientificality*: Ditto. [Definition given is:  1. A scientific or technical issue, term, or detail. 2. The property or quality of being scientific.] For the 2nd def – I used the word “scientificity” but that’s not been recognized.
  • scientificness*: Ditto. [The quality of being scientific.] Ok, boring.
  • Scientological*: This was capitalized so I am REALLY curious. [Yep, having to do with Scientology.]
  • sword and sorcery: Cool!
  • wackadoo*: Citation needed. [Definition given as: A. Crazy, mad; eccentric. B. An eccentric or mentally unbalanced person; a crank, a lunatic.]
  • wackadoodle*: Love this word. On my list of favorites. [Definition given as the same as wackadoo although this does sound like a crazy poodle.]

As you can figure, access to the OED is paid and I don’t have a subscription, which sucks. Can you help me out if you do, and post the meanings of the 9 starred words? I’d appreciate it. I want to be all definitive, you know. Thanks to those who sent the explanations to me!

Scientific people use words and their meanings properly. Scientifical people do not. I don’t want to just look sciencey; I want to get it correct.

You can also email paskeptic(at) Thanks.