Earliest Memories

The other day I was thinking about what I believe is my earliest memory. It’s a difficult thing to do, because typically human memories don’t quite work that way; it’s not as if they are kept in a chronological filing cabinet. Instead, memories seem to be stored in the brain in a way that causes them to be triggered by external phenomena: a song, perhaps, or a situation, or a physical setting might provoke an avalanche of recollection. It’s therefore possible that I have an earliest memory that just hasn’t been triggered yet.

That said, the earliest recollection I can muster involved sitting in a big leather swivel chair, next to my brother Jim, at our Dad’s office when he worked as a bookkeeper for a construction company. I remember sitting on the chair as we swiveled around, looking at a safe with a big combination lock and a handle that was kept in Dad’s office to store the cash receipts. We liked rotating the chair like a merry-go-round and messing with the big lock on the safe. I’m not quite sure why I have this memory–perhaps it was because we had never been to Dad’s office before, and it was interesting to see it–but it is definitely an old one. I’m not sure exactly when Dad worked at the construction company, but the time period would have been in the pre-kindergarten years, perhaps when I was three or four.

A recent study suggests that many people can identify memories dating back to the age of two-and-a-half, and that people also tend to misdate their earliest memories and assign them to later points in their lives. It isn’t clear why two-and-a-half seems to be the cutoff point–perhaps the brain just isn’t ready to begin significant storage before then, or perhaps the things that are happening before that age aren’t specifically memorable–but the authors of the study suggest that if you want to try to remember your earliest memories, you just need to work at it, because summoning up early memories often has a kind of cascading effect. But be careful: studies also suggest that what many people think is their earliest memory is fictional, particularly if it goes back beyond the age of two or so. Those “memories” often aren’t true memories, but instead are descriptions of family photographs or ingrained family stories that have been implanted in the brain over the years.

I’m pretty sure my swivel chair memory is a true memory, and not a later implant, but of course there is no way to know for sure. The “earliest memory” issue does make you realize that your brain is kind of like your grandmother’s attic, with all kinds of weird stuff stored up there, and you’re not quite sure why some memories got stashed and others didn’t.

The Scientific Pursuit Of Happiness

Scientists have been analyzing happiness for a long time–probably for as long as “science” has existed as a discipline separate from philosophy or religion. The basic questions being explored are straightforward: Why do some people seem to be happier than others? How much personal happiness is genetic, and how much is the product of environment or intentional activity? These age-old questions have taken on added urgency recently, with so many people in the modern world struggling with depression, stress, and anxiety–and COVID isn’t exactly helping, either.

A recent article summarized the current scientific landscape on the analysis of happiness. It notes that the modern framework for the analysis was set by a 2005 article in the Review of General Psychology called “Pursuing Happiness: The Architecture of Sustainable Change.” The summary of that article describes its analysis as follows: “surprisingly little scientific research has focused on the question of how happiness can be increased and then sustained, probably because of pessimism engendered by the concepts of genetic determinism and hedonic adaptation. Nevertheless, emerging sources of optimism exist regarding the possibility of permanent increases in happiness. Drawing on the past well-being literature, the authors propose that a person’s chronic happiness level is governed by 3 major factors: a genetically determined set point for happiness, happiness-relevant circumstantial factors, and happiness-relevant activities and practices.”

Only scientists would use a phrase like “chronic happiness level.” But stripped of the scientific verbiage, the article posited that some element of individual happiness is determined by genetics and therefore beyond your control, another element is based on your environment, and yet another element is based on activities and practices that affect your happiness–activities and practices that you can control. The 2005 article even attributed percentages to each of the three elements, with 50 percent of the variance in happiness attributed to genetics, 10 percent to environment, and 40 percent to activities and practices. This 50-10-40 hypothesis was seen by some as a “happiness pie.”

As with any scientific hypothesis, the “happiness pie” analysis has been criticized, primarily on the ground that it is pretty hard to distinguish genetic factors from environmental factors. One 2019 article in the Journal of Happiness Studies (yes, there evidently is such a publication) noted: “We conclude that there is little empirical evidence for the variance decomposition suggested by the ‘happiness pie,’ and that even if it were valid, it is not necessarily informative with respect to the question of whether individuals can truly exert substantial influence over their own chronic happiness level.”

It’s the scientific equivalent of the theological argument about how many angels can dance on the head of a pin. But there does seem to be consensus on three basic propositions: (1) genetics play a role, and some people are genetically disposed to be in a happier frame of mind than others; (2) your environment has an impact on happiness; and (3) what you are doing at a particular point in time–such as running through a sprinkler on a hot summer day, like the happy kid in the photo above–can affect your happiness.

In view of that, what’s the point of arguing about what percentage of happiness should be assigned to each of those three factors? You can’t control your genes, and you can’t control how your environment shaped you when you were growing up. But you can identify what you enjoy–whether it is exercising, listening to your favorite music, spending time with friends and loved ones, volunteering, or some other activity–and try to work those activities into your day. And, in big-picture terms, you might be able to change your environment going forward to a place or setting that is more likely to make you happy, too. And part of changing your environment is identifying what makes you unhappy–like jerky behavior on social media, for example–and trying to change or avoid it.

So why debate percentages? If trying to structure your day to maximize the conduct and activities that you really like can make you happier–even if it is only an incremental increase–why not do it? What have you got to lose?

Selfie Psychosis

We are learning more and more about people who have a “selfie” obsession.  We know that people taking selfies are at greater risk of having serious, and even fatal, accidents because they are oblivious to their surroundings while they are taking pictures of themselves on streets or, say, at the edge of the Grand Canyon.  We’ve also seen evidence that people who take selfies are so self-absorbed that they don’t show the decency and sensitivity you typically would expect from a fellow human being.

Now new research is indicating what seems like a pretty obvious conclusion:  people who take selfies are more likely to undergo plastic surgery.  The connection is even stronger if the selfies are taken with filters, or if the posters regularly take down selfie postings that they later conclude aren’t very flattering.  Cosmetic surgeons are reporting that members of the selfie crowd are coming to their offices with selfies in which the features have been digitally altered and asking the doctor to change their appearance to match the altered image.

It shouldn’t come as a surprise, I suppose, that people who take selfies are narcissistic and are interested in changing their appearance to try to reach their own definition of personal perfection.  After all, if you spend your time constantly looking at your own pouting face, you’re bound to notice a few imperfections to be cleaned up.  The selfie-obsessed also tend to compare their selfies with the countless other selfies that appear on social media feeds and find their looks wanting.

As one of the plastic surgeons quoted in the article linked above notes, that’s not healthy behavior.  It’s the kind of behavior that those of us who don’t take selfies, and indeed don’t particularly like to have our photos taken at all, just can’t understand.

But we’ll have to, because the selfie epidemic seems to be getting worse, not better.  Researchers estimate that 650 million selfies are posted every day on social media.  That’s a lot of potential plastic surgery.

The Changing Focus On Fathers

For much of its history, psychology has been no big friend of fathers.  The focus was on the importance of the mother, and fathers were lurking there somewhere in the background as one of the many other influences that could shape a person.

Several decades ago, however, the perception began to change, and psychologists began to reassess the significance of fathers.  Now, research indicates that fathers play a key role in creating an atmosphere of personal security in which children can gain confidence, in helping children to develop through creative and unstructured play — this means running around, making up games, and doing silly stuff, in non-psychologist speak — and in demonstrating, through their involvement, the importance of education and proper adult relations with others in the world at large.  In one recent study, for example, fathers were found to have an even greater impact on child language development than mothers.

It’s kind of weird to think that psychologists ever diminished the role of fathers; it seems obvious that children would be shaped by observing and interacting with the other parent in the household.  It’s interesting, too, that the shift in perception of fathers has occurred as the number of households without fathers has increased, and statistics are showing that the absence of a father as a permanent member of the family can have lasting negative social and economic effects.  Reality finally is trumping early psychological theory.

None of these studies and discoveries come as a surprise, I’m sure, to actual people.  Kids who grew up in traditional households understand the importance and influence (good and bad) of both mothers and fathers.  Every father I know thinks that role is an important one — although they may wonder whether their judgments are sound and wish there was an instructional manual that provided guidance on how to deal with some of the situations that arise.  The bottom line is, we just do the best we can and hope.

Happy Father’s Day!

The Psychology Of The Two-Urinal Rule

Every guy knows this basic rule about the use of a public bathroom: if someone else is using one of a bank of urinals, you need to choose a spot that leaves at least one urinal between you and the other user. It’s one of those social conventions that is so widely accepted that you really notice a breach.

This week The Atlantic has a fascinating article about the psychology of the two-urinal rule and other phobias and taboos about the use of public bathrooms. I was unaware, for example, that there is a formal name for the condition that causes people to have anxiety about using a public bathroom to do “number one” — it’s called paruresis — and that it affects about 20 million Americans to one extent or another. (The analogous condition about “number two,” called parcopresis, is far less common.)

Interestingly, men seem to be more troubled about use of public bathrooms than are women, and the free-standing, out-in-the-open urinal apparently is a significant part of the problem. Studies show that men worry that they are being watched while they are standing there doing their business, whereas women — safely seated in a flimsy yet shielded stall as they answer the imperative — tend to worry more about cleanliness and comfort. Some men’s rooms are now being designed with partitions between individual urinals to try to address the perceived privacy problem.

The article notes that, even in our wide-open culture, there are still many taboos and rigid behavioral norms about using a public bathroom — even though the notion of privacy while excreting is a fairly recent development in the long history of humans. We tend not to talk to anyone when we are inside. We don’t make eye contact with other users, and in fact strive to maintain a state of studied indifference to their very existence. And, of course, we do our best to ignore the sights, smells, and physical conditions in the bathroom and the fact that the facilities are being used by complete strangers for unpleasant but essential bodily functions.

If you use public bathrooms all the time, you incorporate these norms and obey them, accept the fact of bodily imperatives, and forget about it. For some people, that’s harder than for others. So if the guy ahead of you in the line for a urinal at the next Browns game seems to be taking a while, give him a break — he’s probably doing his best while dealing with the weight of some deep-seated psychological issues.

When The Nutdar Kicks In

For the most part, we live our lives in little spheres of sameness — where we live, where we work, where we go to school, where we go out to eat. Occasionally, though, we have to move outside of those spheres, and when we do, it helps to have the nutdar in good working order.

This afternoon I needed to go to the Kroger pharmacy near our house. When I got there, the prescription wasn’t ready, but the pharmacist told me it would only be a few minutes. There was a little waiting area with three chairs next to each other in a row, two of which were already occupied by a young guy with close-cropped hair and an old guy wearing a leather motorcycle jacket who just sat down. My choices were to sit between them, or stand. When I considered the options my nutdar kicked in and told me to stay clear, so I moved a distance away and checked my email without making any eye contact with my fellow pharmacy customers.

Sure enough, a few moments later the young guy and the old guy started an unnervingly loud conversation about drugs and their health problems. The young guy spoke in rapid-fire cadence and seemed wired to the hilt, like a character in a Hunter S. Thompson book. In just a few minutes his booming voice covered why he didn’t trust generic drugs, his fear that he’s had multiple heart attacks, a rumor he heard that the DEA had shut down a local pharmacy for violations of federal drug laws, and some kind of mechanical problem he was having with a motorcycle that he had bought on Craigslist in a “rip-off” deal. The old guy, not to be outdone, chipped in with ringing declarations about his various ailments, suggestions on drugs that the young guy could take to deal with those apparent heart attacks, and a diagnosis of the motorcycle issues. I tried not to listen, but it was impossible to avoid.

I don’t know if these guys were dangerous or harmless, but it was the end of the day and I didn’t want to find out. I’m glad my subconscious kept me away from them. It’s nice to know that my nutdar is still in prime working condition.

Looking In The Mirror, And Hearing Your Own Voice

I have a weakness for learning about human psychology.  How do humans think?  What approaches are more and less likely to cause the listener (or reader, for that matter) to have the intended reaction?  I think it is fascinating stuff.

One reason the results of psychological studies and experiments are so interesting is that it’s easy to translate the information to your own experience.  It’s like looking in a mirror.  It’s impossible not to consider how you match up with the results.  It’s nice when they indicate that your modus operandi is sound — but it’s hard to take when the data reveals that your approach is hopelessly wrong.

We all look in the mirror countless times a day, but often we don’t really recognize how we are perceived by others.  It’s like the shock you felt when you first heard your own recorded voice and realized it didn’t sound to others like it sounds in your own head.

How do you react when you see someone unintentionally do something that is completely off-putting, counterproductive, or inflammatory?  I always wonder how the person could be so clueless — and I find it unnerving because I realize that I also could be blundering through life, deeply offending people I’m actually trying to impress or persuade.

We’d all be better off if we spent more time studying the human condition.

Facebook Giveth, And Facebook Taketh Away (II)

Facebook often seems like a double-edged sword, and a sharp one at that.

There are some people you wish you hadn’t lost touch with, but — due to laziness or disorganization or the demands of your current life — you did.  Friday night Kish and I got together with an old friend we hadn’t seen in years and had a wonderful time.  (Thanks, Action!)  It would not have happened without Facebook; that’s where we reconnected and communicated about getting together.

But there are negatives, too.  Sometimes Facebook causes you to learn more about people than you really want to know.  Perhaps their posted political, religious, or social views deeply offend you, and then you have to decide whether the situation merits “de-friending” the person.  People really seem to struggle with that decision — and when you think about it, it’s really a new kind of social decision.

In the past you might never have learned that your co-worker or second cousin harbored beliefs that you find upsetting.  Your interactions may never have gotten beyond superficial talk about sports or TV shows.  Ignorance was bliss!  But now, thanks to their airing of views on Facebook, you know.

To be sure, in days of yore people obviously made decisions not to pursue certain friendships.  That process typically involved just avoiding the offending person and letting time and distance work their magic.  With Facebook, that approach no longer works, because exposure to those offensive views is unaffected by physical distance.

The “de-friending” process also has a formality and finality to it that old-fashioned avoidance did not.  If you were the unlucky object of an avoidance campaign, you could always rationalize that you lost touch with someone purely by happenstance and not because they can’t bear the sight of you.  With “de-friending,” however, you know for certain.  Once you were a “friend,” now you’re not — and if the list of the de-friender’s remaining friends is long, getting cut from the roster has a special sting.

People who announce de-friending decisions seem to treat the decisions as momentous ones.  I don’t blame them.  In the old days, you typically had to make public breaks only with unsuccessful boyfriends and girlfriends, and you had to cope with the hurt feelings only from those people.  Now, the “de-friended” person may be a co-worker or family member, and you’ve got to deal with the fallout from your decision in a totally different context.

Manners and etiquette developed to help people deal in an appropriate way with standardized social situations.  I won’t be surprised if the Facebook generation’s version of Emily Post comes up with the proper etiquette for handling a “de-friending” incident.

There’s a lot of social change rolled up into that one website.

Facebook Giveth, And Facebook Taketh Away

I’m sure that sociologists and psychologists are studying the impact of Facebook and will do so for years to come.  There are big effects — like the stories about so-called “Facebook divorces” — but I think the website also has altered our interactions with family, friends, and acquaintances in less noticeable, but perhaps more profound, ways.

Never before have so many people stayed in regular touch with so many other people.  Isn’t it great to have so many friends, and in such a quantifiable way!

From the perspective of those of us who grew up well before Facebook was developed, however, the website seems to have produced a curious phenomenon.  We went to high school and college, moved on, and lost touch with high school and college friends.  We took initial jobs, went to grad school, or lived in a particular place, moved on, and lost touch with people we knew in those contexts.  In short, we have a past, with past friends.

If you grew up with Facebook, you may never have a past in the same sense.  Instead, you’ll just have one long present, with a constantly accumulating list of present friends.  You’ll always be in touch with that kid from eighth grade, or the woman who was on the high school newspaper with you, or that odd guy you worked with at your first job.

There is value in having a past, and leaving behind the people who remember all too well what a jerk you were in high school.  The members of the Facebook generation may never really know the relief of seeing those awkward or embarrassing past incidents recede into life’s rear view mirror.  What does it mean to always be in touch with people whose main connection is that they shared goofy behavior with you when you were a kid?  Are you less likely to really grow up, or will you at some point feel hopelessly weighted down by your long roster of friends and want to sweep the slate clean?  What will that constant, ongoing connectedness mean for the Facebook generation?

Please Don’t Record My Dreams

The BBC reports that scientists now believe they can develop a system to record people’s dreams.  Their plan is to electronically visualize brain activity and identify dream themes by mapping activity in individual brain neurons that purportedly are associated with particular individuals, objects, or concepts.  The idea seems far-fetched, and the scientists concede they are a long way from actually being able to capture dreams.  I really wish they wouldn’t try.  We’re all better off, I think, if our dreams splinter into hazy fragments and vanish from our consciousness the moment we awake.

I almost never remember my dreams; I only recall those that are so deeply disturbing that they startle me into wakefulness and survive the forgetting process that accompanies the first instant of awareness.  And when you remember your “bad dreams,” you realize that it is not only the topics of the dreams that are troubling, such as being chased by a menacing dark figure or realizing that you are late for a final exam in a class that you have blown off since the semester began months ago.  Usually the physical context is equally unsettling, like suddenly finding yourself buck naked and running down a street in some creepy part of town or sitting with a long-dead relative in a cold, dark house where the walls ooze blood and there is a screaming face visible through every dusty window.  If every dream is so weird, wouldn’t remembering them all just be psychologically traumatic?  And, in a perverse way, wouldn’t it be an embarrassing letdown if the vast majority of your dreams instead turned out to be boring downloads of what you did during the day?  Who would want to relive a humdrum workday?  Maybe we instantly forget our dreams because they are so dull.

I don’t know whether dreams are attempts to communicate with us from the Great Beyond, or extrasensory perceptions of future events, or just the products of random electrical discharges in an exhausted brain that needs to wind down after a tough day — and I don’t need to know.  Just let me get some shut-eye, and leave my dream life alone.