Research and Impact | News and Announcements

Julie Suhr named editor of Psychological Assessment, sets priorities for journal

Dr. Julie Suhr was recently appointed editor of the journal Psychological Assessment and shared her vision for the journal in the American Psychological Association's Editor Spotlight.

"I hope that Psychological Assessment will continue to be a strong outlet for research in clinical psychological assessment under my editorship," Suhr said in her spotlight.

Suhr is professor of Psychology in the College of Arts and Sciences and director of clinical training of Ohio University’s APA-accredited doctoral program in clinical psychology.

"I am very excited that Dr. Suhr will be heading Psychological Assessment," Dr. Jeffrey Vancouver, professor and chair of Psychology, said. "The journal is the flagship measurement journal for the American Psychological Association, and it is one of the top journals in the field of clinical psychology (91st percentile of all clinical psychology journals, according to SCOPUS). Having the editor of a journal like Psychological Assessment among our faculty brings great prestige to the psychology department and Ohio University alike."

Suhr notes three areas where she would like to guide Psychological Assessment: expanding submissions on the assessment process, addressing the validity of research data, and attending to sociodemographic variables.

"I hope to see more submissions on the assessment process (decision making, clinical judgment); the development, validation, and application of measures and tests of transdiagnostic constructs; and research that is multicultural and multinational, especially work that is collaborative across multiple countries or cultures within the same submission," she said.

She also welcomes work on new areas of clinical practice, such as integrated primary care and telehealth.

To address the validity of the data collected by researchers, Suhr will "pay attention to the degree to which authors address this concern by including well-validated symptom validity or performance validity tests in their research batteries, checking for response biases (e.g., extreme responding, non-content-based carelessness), and removing participants who may be providing unreliable data that can lead to uninterpretable results and inaccurate conclusions."

Regarding diversity, Suhr wants researchers to detail the sociodemographic characteristics of their samples and how those characteristics might constrain the generalizability of their findings.

"At the same time, I encourage authors to be careful when interpreting any study findings related to sociodemographic differences and to consider all possible biopsychosocial contributions to those differences, including how demographic factors may be proxies for other environmental and cultural factors, including bias," Suhr said.


Published
December 8, 2021
Author
Staff reports