
A personality test for dogs

And other recent highlights from the Ideas blog

A black Scottish terrier.

Is your dog a “Stargazer”? A “Maverick”? An “Einstein” in furry disguise? For $59.95, a new company called Dognition will diagnose your dog’s personality and explain how he thinks about the world.

The service, which launched on Feb. 5, is the brainchild of Brian Hare, an evolutionary anthropologist at Duke University and director of the Duke Canine Cognition Center. Hare received his PhD from Harvard under the tutelage of renowned primatologist Richard Wrangham; over the last decade he has become a prolific voice in academic and popular circles on the subject of how dogs think, and together with his wife, Vanessa Woods, he recently published the book “The Genius of Dogs.” In an interview with Science last month, Hare explained that he decided to turn his research into a business at the urging of some entrepreneurial colleagues. To use Dognition, you fill out a personality questionnaire about your dog on the company’s website and then receive a Canine Assessment Toolkit in the mail. The kit includes equipment and instructions for playing 10 games with your dog that will reveal aspects of his personality.


One test examines how closely your dog pays attention to gestures and can be used to place a pet on the continuum between “collaborative” and “self-reliant.” In another test, owners are instructed to yawn five times within a minute and then watch to see if their dogs also yawn. In the Science interview, Hare explained that the exercise is a measure of “contagion,” a precursor to empathy.

Hare is quick to point out that in dogs, as with humans, there’s no one “best” personality. Rather, he explains, understanding who your dog is and how he thinks can be a first step to bringing the two of you closer together.

From 2-D to 3-D, and back

“Rejuvenation” by Jonty Hurwitz (photo: Niina & Pierrotto)

The first time you look at Jonty Hurwitz’s sculptures, it takes a minute to realize what’s going on. The London-based artist uses a computer program to take a two-dimensional picture of an object and skew it into a grotesque, three-dimensional design that he fabricates out of metal. Then, by positioning the distorted sculpture in front of a cylindrical mirror, he’s able to draw the image back into its original form. On YouTube Hurwitz explains that his work is a statement about the ways that powerful algorithms — on Google, Facebook, Amazon — influence our behavior all the time. It’s unclear, however, whether there’s a similar reconstituting trick for our everyday lives.

Open that file!

How much can we trust medical research? Even if you don’t worry about researchers’ ethics, or the corrupting influence of corporate funding, you can still worry about the “file-drawer effect” — the tendency among researchers to publish their positive results, while tucking the results of failed studies in a file drawer and forgetting about them.

This is problematic for several reasons. For one, those buried failures can include scientific knowledge just as important as the successes. For another, negative results serve as a check against “false positives”: Only if we know that an idea failed several times can we appreciate that its one published success might be a fluke.


But the structure of medical science is set up to keep those negative results in the drawer. Careers (and products) advance only with positive findings, and research journals tend not to be interested in publishing experiments that didn’t work. In the United States, the federal government officially expects researchers to post results of all clinical trials to a database within a year of conducting them, but the requirement is minimally enforced.

What to do? There are a number of proposals circulating that would address this problem. The latest is AllTrials, a petition launched last month by a group of organizations including the British Medical Journal and the Centre for Evidence Based Medicine, calling for all clinical trials to be registered in a central database before they begin. This would ensure that methods don’t change midstream, and that all findings — negative or positive — are made public.

Kevin Hartnett is a writer who lives in Ann Arbor, Mich. He can be reached at