Twenty years ago, a study found that medical and health information on the internet was unreliable and error-laden. A new study by a University of Kansas professor shows that today's youth face the same wasteland of internet quackery and are just as vulnerable to misinformation as before.

In the June 1998 issue of Pediatrics, physicians McClung, Murray and Heitlinger investigated “The Internet as a Source for Current Patient Information.” They selected the topic of acute childhood diarrhea—a problem that can be fatal to young children—and reviewed 60 websites turned up by search engines.

The 60 sites were selected from the top 300 search-engine results because they appeared to be mainstream or medically credible sources: sites run by medical professionals, university hospitals and health news services. Even so, 80 percent provided wrong or out-of-date information. Only 12 of the 60 offered information matching the medically accurate diagnosis and treatment guidance of the American Academy of Pediatrics for management of childhood diarrhea.

Subsequent research demonstrated that faulty websites pushing medical quackery cannot be distinguished by their .edu, .com, .org or .gov addresses either.

According to a recent news release, research at KU shows that this situation has not improved. In “Just Google It: Observing Youth Searching for Health Information Online,” Susan Harvey, an assistant professor of health, sport and exercise sciences, observed youths to determine how they search for and locate online health information.

Professor Harvey also surveyed the students’ perceptions of their own ability to search for online information, as well as their confidence in judging the quality of websites. As you might expect, our new generation is quite confident about finding online information. Overconfident.

They also felt they could detect valid information and distinguish it from bogus sites. They couldn’t.

While her student subjects searched the internet for information, she used tracking software to record the sites they accessed, and the students narrated their thought processes aloud as they searched.

In the news release, Harvey described how “Most of them didn’t scroll through the webpages at all, they just clicked on the first link.” She continued, “And many of them found their information from sites that weren’t credible. When they did click on credible sites, like the National Institutes of Health, they clicked off of it very quickly.”

Harvey found that students averaged only about 20 seconds on the credible sites, attributing this partly to the fact that “the unreliable sites were just more visually appealing for them.” However, she also blamed the accurate websites for presenting information at too high a grade level for the students to understand, and recommended that the quality sites lower their reading level.

Unfortunately, U.S. schools went through a five-year “reform” in the late 1990s that removed technical language from science books. That disaster proved that technical language is necessary for comprehension of concepts. And reading levels, which readability formulas determine from letter and word counts, do not reflect students’ ability to read above their grade level when they are interested.
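The readability formulas in question really are that mechanical. As one illustration (the article does not name a specific formula), the Coleman-Liau index estimates a U.S. grade level from nothing but letter, word and sentence counts:

```python
import re

def coleman_liau(text):
    """Estimate a U.S. grade level from surface counts alone:
    letters per 100 words and sentences per 100 words."""
    words = re.findall(r"[A-Za-z']+", text)
    letters = sum(len(w.replace("'", "")) for w in words)
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    L = letters / len(words) * 100    # average letters per 100 words
    S = sentences / len(words) * 100  # average sentences per 100 words
    return 0.0588 * L - 0.296 * S - 15.8
```

Notice that nothing in the formula accounts for a reader's interest or background knowledge: a passage on a topic a student cares deeply about scores exactly the same as one they ignore, which is the limitation described above.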

This new study repeats familiar recommendations for more teacher training in online literacy and in detection of unreliable sites. However, the infamous University of Connecticut “tree octopus” study long ago showed that there is no universal method for detecting bogus online information. And even worse, once students “learn” wrong information, they own it and will not change their minds.

Today, K-12 schools nationwide are throwing away carefully reviewed, accurate textbooks and sending their students into the vast internet wasteland. It is therefore not surprising that in the annual “Education Counts” just issued by Education Week, “79 percent of principals are ‘moderately’ or ‘extremely’ concerned about their students’ inability to gauge the reliability of online information.”

Simply put: if there were a god-like method to separate good online information from bogus, we would all be using it.