Lying To The Lab Coats

We’ve all read reports on medical studies that have reached significant conclusions about the consequences of certain behavior or the causes of physical or mental conditions. One question about those studies always lingers: if one of the elements of the study is self-reporting by participants, how do we know that the participants are really being truthful in what they are reporting — or, whether they are lying to the lab coats instead?

A recent discovery of misreporting by participants in a genetic study of the effects of alcohol consumption highlights the concern. Researchers determined that participants in the UK Biobank, which provided the data for the study, often underreported their use of alcohol and did not provide accurate information about their consumption over time. (The UK Biobank includes data from 500,000 volunteers who have, since 2006, agreed to be periodically questioned and tested about various activities and conditions.)

Even worse, the false information caused the researchers in the genetic study to reach inaccurate conclusions about alcohol use and its association with certain health conditions. When statistical analysis techniques were used to scrub the Biobank data of false information, for example, negative correlations between alcohol consumption and diseases like anemia, hypertension, and type II diabetes were significantly reduced — in some cases to near zero.

It’s not clear from the article linked above precisely how the researchers discovered the underreporting, but the fact that study participants lied to the lab coats about their use of alcohol shouldn’t surprise anyone. Human nature tells us to be dubious of the scrupulous accuracy of self-reported information on any potentially embarrassing topic — whether it’s smoking, drinking, daily exercise, amount of TV viewing, or consumption of ice cream and potato chips. The next time you read about a study that reached startling conclusions about something, take a look at how the data was generated, and if self-reporting was involved, consider whether the nature of the study might have tempted participants to fudge a bit in their reporting. And let’s hope the lab coats do likewise.

“Strong” Password Follies

You may have seen this already, but if not, brace yourself:  everything you’ve been told about creating “strong” passwords has turned out to be wrong — or at least misguided.

Meet Bill Burr.  In 2003, he was a mid-level manager at the National Institute of Standards and Technology who was asked to create guidance on the development of computer passwords.  Burr then authored an eight-page manifesto, enticingly called “NIST Special Publication 800-63, Appendix A,” that articulated two by-now familiar rules that have frustrated computer users ever since.  First, you’re supposed to come up with passwords that feature capital and lower case letters, numerals, and special characters — so instead of a password like “Tubesteak” you’d have a “strong” password like “TubesTeak$123.”  And second, you need to change your password every 90 days.  Special Publication 800-63 quickly became a kind of bible for the IT geeks and was widely adopted by the large companies and organizations that employ us, forcing us to dream up new, creatively configured passwords on a regular basis.
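For the curious, those old rules boil down to a check like this one, a minimal sketch in Python.  (The exact character classes are my assumption; the appendix left the details to the IT departments that implemented it.)

```python
import re

def meets_old_rules(password: str) -> bool:
    """Check a password against the old SP 800-63-style complexity
    rules described above: at least one capital letter, one lower
    case letter, one numeral, and one special character.  The
    character classes below are illustrative, not quoted from NIST."""
    return all(re.search(pattern, password) for pattern in (
        r"[A-Z]",         # capital letter
        r"[a-z]",         # lower case letter
        r"\d",            # numeral
        r"[^A-Za-z0-9]",  # special character
    ))

print(meets_old_rules("Tubesteak"))      # False: the plain word fails
print(meets_old_rules("TubesTeak$123"))  # True: the "strong" variant passes
```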

So, what’s the problem?  Isn’t computer data security worth the hassle of occasional password changes that use the 800-63 rules to strengthen our defense against soulless computer hackers?

Perhaps it would be . . . except that Mr. Burr really didn’t figure human nature into his “strong” password analysis.  It turns out that people are pretty unimaginative when it comes to password development, so they end up using predictable approaches to their SP 800-63 compliant passwords, by substituting numbers or characters for the letters they resemble.  And people are forgetful, don’t easily remember their passwords, and don’t want to be locked out of their systems due to a memory failure, so they write their passwords down, which just increases the security risk.  And, finally, hackers are clever, and can come up with software that anticipates the predictable rules people use to create those “strong” passwords.  All of which means that the annoying NIST 800-63 rules lead people to create passwords that really aren’t that “strong” after all.
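A toy version of that rule-based guessing shows why the substitutions don’t help much.  The substitution table and suffix list below are illustrative (not taken from any particular cracking tool), and I’ve ignored capitalization tricks, which would only multiply the count by a modest factor:

```python
from itertools import product

# Look-alike substitutions that people (and cracking tools) both know.
SUBS = {"a": "a@", "e": "e3", "i": "i1", "o": "o0", "s": "s$"}
SUFFIXES = ["", "1", "123", "!"]

def mangle(word: str):
    """Yield every variant of `word` built from the predictable
    substitutions above plus a few common suffixes -- a toy version
    of the rule-based guessing described in the post."""
    choices = [SUBS.get(ch.lower(), ch) for ch in word]
    for combo in product(*choices):
        for suffix in SUFFIXES:
            yield "".join(combo) + suffix

variants = list(mangle("tubesteak"))
print(len(variants))  # 64 -- the whole "strong" family falls to 64 guesses
```

Sixty-four guesses is nothing to a machine that tries millions per second, which is the point: the “strength” the old rules added was mostly an illusion.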

Mr. Burr concedes defeat, and says:  “Much of what I did I now regret.”  And NIST has come out with new guidance that encourages users to pick a string of random words and only change them in the event of a data breach.  (Of course, IT departments being what they are, it may take a while for the new rules to supplant the old.)
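A minimal sketch of the newer random-words approach might look like this in Python.  (The word list here is a tiny stand-in of my own; a real tool would draw from a large dictionary, and NIST’s guidance doesn’t mandate any particular one.)

```python
import secrets

# Tiny illustrative word list; a real implementation would use a
# dictionary of thousands of words so the phrase is hard to guess.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "velvet",
         "cactus", "lantern", "pickle", "summit", "walrus", "ember"]

def passphrase(n_words: int = 4, sep: str = " ") -> str:
    """Pick a string of random words, per the newer NIST guidance,
    using the cryptographically secure `secrets` module."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())  # prints four random words, e.g. "walrus ember orbit cactus"
```

With a real dictionary behind it, a phrase like that is both easier to remember and harder to brute-force than “TubesTeak$123.”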

Who knows?  Maybe people will decide to use curious conventions, like the process you’re supposed to use to develop your “porn actor” name, to create passwords.  That naming convention says you combine the name of your first pet with the name of the street you lived on in grade school.

That would make me “GeorgeOrlando,” which wouldn’t be a bad password.  I can almost hear Allen Ludden whispering it now.