Behavioral science > design or Behavioral science + Design?

This was originally posted on LinkedIn 7 June 2022… it struck a bit of a chord, so seemed worth reposting here.

There have been a few recent articles comparing behavioral design to human-centered design that position HCD as play-acting at problem-solving, and qualitative research as akin to adopting home remedies in an era of modern medicine.

So, first, a few quick things about design/qualitative research:

🌟 Characterizing it as a naïve exercise in taking what people say at face value is just inaccurate. No design researcher I know goes straight from user quote to solution, like, ever.

🌟 Rejecting it as bad science misunderstands its purpose. It’s not intended to uncover first principles or The Truth. Its value lies not in dis/proving hypotheses but in illuminating the mental models and language people use to make sense of their world, in ways surveys can't (spoiler: you can learn *a lot* from questions you didn’t think to ask!).

🌟 We know stories are powerful. Good design research plays a critical role in humanizing abstract 'segments' and 'populations' and 'consumers' for clients, converting data and Excel rows into real, live human beings whose lived experience matters.

But more importantly, I get squirrelly when pieces angle so hard to position disciplines as better-than. #HumanCenteredDesign has its flaws, without question, but #BehavioralDesign (and science more broadly) has its own limitations. And I’d argue that given our need to design for diverse people and contexts in a complex world that’s constantly evolving, design’s strengths in strategically speculating about what *could* be and system design’s ability to design at infrastructural levels promise to extend the scale and value of #BehavioralScience beyond what it's already really good at.

In short: integrating #Design and #BehaviouralScience has scads of potential, but doing this requires thinking of these fields as collaborators rather than rivals. A while back I wrote about how we might integrate #BehaviouralDesign's depth of knowledge about how people make decisions with strategic design’s strengths in problem framing and deep contextual inquiry. Sharing it here not because it’s perfect — both the field and my thinking have advanced in the few years since I wrote this! — but in the hope it might prompt more productive discussion about how to effectively cross-pollinate (and less about who’s better :)

Behavioral Scientist: Imagining the Next Decade of Behavioral Science

The Behavioral Scientist recently solicited perspectives on what the next decade will bring for the field. The entire post is worth a read, ranging from hopes for the future, important ethical considerations that will only become more pressing, reflections on how specific domains might be shaped—and shape the field in turn—by taking a behavioral slant, and questions about what constitutes “behavioral science” or “behavioral design” in a world where the discipline itself is still emergent. This was published as one of those perspectives:

The field of behavioral design currently resides in the hands of experts steeped in empirical behavioral studies and RCT analysis, but in the next decade it is likely to become increasingly commoditized. Much like “design thinking” is now ubiquitous, practitioners with more enthusiasm than formal training will increasingly start to practice behavioral design. This is not necessarily bad. Democratization promises to embed behavioral perspectives more broadly and organically into how we envision and develop offerings, organizations, and policy. More accessible entry paths to expertise may also reduce the perception of academic exclusivity or that a Ph.D. is a requirement for practice. 

But this also means a shift in who owns the definition of what “good” looks like, and just as with design thinking, we’ll likely see a new proliferation of get-smart-quick programs that reduce nuance and precision to more formulaic processes and the promise of instant expertise. Defining standards, like LEED certification for architecture, and codifying methods may be one way to maintain a level of consistency and quality. But when everyone’s a “behavioral thinker,” there’s a high chance it will become increasingly necessary—and important—to communicate the value of true proficiency.

What’s the future of behavioral design? A scenario planning approach

Transitions tend to prompt reflections on where we’ve been, and in looking back behavioral design has much to be proud of given that the field barely existed ten years ago! But these moments also tempt us to project what lies ahead. Behavioral design has already taken multiple forms—intriguing curiosity, presumed silver bullet, flavor of the month, competitive advantage—but it’s also a maturing discipline that is still in the process of being codified. So it’s worth pondering: what does the future hold?

Until we can predict the future, foresight methods used in design to help us explore what could be might be the next best thing. The well-known strategic tool called scenario planning, for example, pits two dimensions of uncertainty at perpendiculars to generate four speculative futures. It’s no crystal ball; rather, its value comes in forcing us to concretely imagine and play out the collision of extreme forces far in advance, to surface potential implications and provoke thinking about how we might address or adjust to them. Think of it as a low-fidelity prototype or simulation that allows us to try on different future conditions before facing the real deal.

As with any method, scenario planning is only as good as its inputs, and selecting the right dimensions of uncertainty makes insightful results far more likely. When used for strategic planning purposes, external forces—like the availability of resources or market changes—are often the biggest unknowns that need to be understood. But given the nature of our query, we might instead select vectors more attuned to emergent uncertainties about the practice of behavioral design itself.

One common source of tension in maturing disciplines is the degree to which they become increasingly democratized as they become better known and accepted. The emergence of design thinking is a classic example, where in gaining advocacy and buy-in it also became more commoditized (for proof, just search for “design thinking certificate”). We might call this dimension of uncertainty centralization of expertise: that is, whether know-how remains rooted in formal training or traditional academia, or whether proficiency is broadly distributed and accessible through a wide range of sources.

If that dimension explores where expertise resides, a complementary axis might consider how expertise is applied to problems, or the generalizability of application. Behavioral design prides itself on delivering high-efficacy, context-specific interventions informed by well-documented experiments from the literature, but the precision of these bespoke interventions can also lead to a “see-one/solve-one” mindset and reluctance to apply lessons from those successes to other challenges or contexts. Our second useful dimension of uncertainty, then, might be the degree to which the field sticks with specialized application or finds a way to generalize and employ findings more confidently across settings.

Using these two intersecting dimensions—centralized to democratized expertise, against specific to generalizable applications of knowledge—we create four quadrants, each of which yields a glimpse into potential future versions of the profession.
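To make the mechanics of the 2x2 concrete, here is a minimal, purely illustrative sketch in Python. The axis labels are just shorthand for the two dimensions named above; crossing them enumerates the four speculative futures discussed next.

```python
from itertools import product

# Two dimensions of uncertainty (labels are illustrative shorthand)
expertise = ["centralized", "democratized"]     # where expertise resides
application = ["specific", "generalizable"]     # how findings are applied

# Crossing the two axes yields the four speculative futures (quadrants)
for e, a in product(expertise, application):
    print(f"{e} expertise x {a} application")
```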

So, what might this mean? In its current form, behavioral design is probably already closest to centralized specificity. But if this center of gravity became even more extreme, the field might grow analogous to medicine or law, solving specialized problems with high, but narrow, accuracy and requiring advanced degrees, even licensure, to practice. This would suggest a potential rise in credentialing to demonstrate credibility and a doubling-down on RCTs and precisely targeted solutions; on the downside, this might also lead to a certain elitism or insularity that inhibits broader scaling.

Democratized specificity, alternately, might mean the field splinters, embedding behavioral expertise into other disciplines that apply behavioral insights to domain-relevant problems in radically different ways. This could affect how practitioners pursue mastery in a world where, for example, career paths increasingly expect them to specialize in “behavioral wellness” or become “vehicular behavioralists” rather than remain behavioral generalists.

Democratized generalizability projects a path where broad familiarity with principles from behavioral science is basically taken for granted and easily accessible, akin to how—for better or worse—WebMD has allowed laypeople to self-diagnose. For behavioral design, this might mean that experts relinquish some of their authority in favor of widely available tools that support DIY behavioral problem-solving… with the potential tradeoff that these less nuanced applications may read as hopelessly misguided or lightweight to those with deep expertise.

Finally, a world of centralized generalizability might retain a more traditional path to mastery but also harness pattern identification to generate potential hypotheses more effectively and efficiently. This might not only allow practitioners to more confidently apply lessons from contextually specific successes and learn from what didn’t work, but also potentially deliver on the promise of bringing behavioral design to new problem areas.

Of course, these paths are not truly mutually exclusive; medicine, for example, is founded on credentialed specialization, yet also has generalists and easy-access self-diagnostic content and tools online. But each of these potential scenarios has unique implications for how behavioral design might demonstrate its impact and staying power in the future. After all, “impact” can be interpreted and measured in multiple ways, from improving metrics for individual interventions, to expanding the footprint of behavioral practitioners, to embedding behavioral perspectives further upstream to address root causes, to scaling prior successes across new contexts. How we, and the field, choose to make a dent is largely up to us.

Research assessment as a human-centered design problem

The term “design” often conjures images of tangible stuff, such as iPhones, interiors, fashion, or kitchenware. The discipline of “human-centered design,” however, typically focuses on more intangible problems related to how people live and work, using a design lens and tools to develop new or better experiences, services, and even entire systems.

Many of these intangible problems are “ABC” challenges—ambiguous, behavioral, and embedded in complex systems. Viewed through this lens, one can make a compelling case that research assessment is, in fact, a human-centered design issue:

It’s ambiguous — Journal Impact Factors (JIF) and citation counts are intended to serve as proxy measures for the quality of research, but the attributes of what good looks like are diverse and sometimes indirect. From one perspective, adherence to research methodology may signal quality; from another, it may be the significance of results; from still others, it may be the novelty of looking at a known problem in a new way. Quality is also a moving target, where true impact may take years or even decades to manifest, and may be hidden in plain sight: the list of Nobel Prize winners whose seminal work was initially rejected by journals (often multiple times) is lengthy.

It’s behavioral — The research process is full of metrics, rewards, and incentives, where what’s measured and rewarded has outsized impact on our choices and actions. Offering compensation for high-JIF publications or forming citation circles explicitly taps into these biases, but more tacit knowledge about what counts toward tenure and promotion decisions is also highly likely to sway our actions or constrain the options we even consider pursuing. This can be especially pernicious when biases reinforce or amplify inequities that exist within systems, or when prioritizing scholarly metrics limits research’s ability to productively contribute to the public good.

It’s a complex system — The world of research and research assessment—what gets funded, who collaborates with whom, and how value is generated and captured—is an international, interconnected, and tangled web of entities and individuals, all of whom have their own interests and expectations. This complex system may strive for agnostic objectivity, but too often can act as a conduit to channel existing forces and assets. When control is unevenly distributed, the tendency to “feed the beast” may emerge in the form of assessment mechanisms that reward established players at the expense of more junior members.


*

As a former design strategist, I have seen firsthand how human-centered design strategies helped organizations become more innovative by directly addressing the reality that innovation activities tend to conflict with common organizational conditions—such as prioritizing easily captured, quantifiable, and short-term metrics like ROI—that support and reward the status quo. The challenge of rethinking research assessment is, in some significant ways, strikingly similar, suggesting that human-centered design strategies such as framing, mapping system flows, and designing futures might help us get purchase on this challenge as well:

1. Framing forces specificity and provides a North Star set of principles to align around through the concrete articulation of three complementary sets of perspectives:

Institutional framing focuses on top-down systems or organizational-level goals, often aiming for well-established and quantified metrics. JIF and citation scores, in fact, are exemplars—if also cautionary tales—of this type of frame. As we have seen, relying exclusively on institutional frames can be problematic when they are used as a proxy for quality at the expense of more meaningful measures of value, or when adherence to institutional-level goals neglects important nuances and needs of those on the ground.

User framing centers the problem to be solved on latent needs of people who participate within a system. In contrast with more pragmatic institutional frames, the intent of user framing is to surface more oblique thinking about what we’re even solving for, and is typically informed by insights derived from ethnographic-style engagements with users. In a research assessment context, this might touch on more qualitative and flexible ways to gauge research quality, or prioritizing the pursuit of research that positively impacts the public good to encourage researchers to tackle personally compelling and societally meaningful problems over chasing citations.

Behavioral framing situates us in the specific behaviors that we want to shift or cultivate, with the goal of more effectively defining conditions that take the abundance of cognitive biases impacting human judgment, decision-making, and action into account. In the case of research assessment, overcoming challenges such as quantification fallacy, social proof, or anchoring requires applying a behavioral lens to a variety of participants—researchers, assessors, funders—to understand what current contextual conditions are contributing to certain behaviors and to suggest behaviorally-informed solutions.

2. Flows: Where framing forces us to define intent, mapping flows and exchanges of capital between different entities can help us understand more concretely how those systems function. Researchers, universities, funding bodies, and journals exchange both tangible and intangible assets such as money, intellectual property, prestige, and faculty appointments, yet each entity only controls or has access to a finite view into the overall network of nodes. Taking a systemic approach and tracing flows between entities can indicate where value is created but not captured, for example, and also surface critical leverage points where making small adjustments has the potential to yield outsize benefits.
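As a purely hypothetical sketch of what such a mapping might look like in miniature (the entities and exchanges below are placeholders, not an actual analysis), value flows can be treated as a small directed graph and tallied to see what each node gives and receives:

```python
from collections import defaultdict

# Hypothetical entities and exchanges of tangible/intangible capital
flows = [
    ("funder", "university", "grant money"),
    ("university", "researcher", "salary and appointment"),
    ("researcher", "journal", "manuscript"),
    ("journal", "researcher", "prestige via publication"),
    ("researcher", "university", "reputation and overhead"),
]

# Tally what each entity gives and receives; asymmetries hint at places
# where value is created but not captured (candidate leverage points).
given, received = defaultdict(list), defaultdict(list)
for source, target, asset in flows:
    given[source].append(asset)
    received[target].append(asset)

for entity in sorted(set(given) | set(received)):
    print(f"{entity} gives {given[entity]} and receives {received[entity]}")
```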

3. Futures: In the same way that organizations trying to be innovative must shift from a focus on short-term metrics to longer-arc measures of progress, taking a more longitudinal view can also inform new ways of evaluating or rewarding research activities and outcomes. Twenty years ago, social media in its current form didn’t exist, let alone as a factor in research assessment or impact, but ignoring the role of open access and non-academic channels for dissemination today risks being willfully naive. Applying a futures lens also mitigates the potential of unintended consequences, in which well-meaning solutions accidentally create perverse incentives or repercussions beyond their intended scope or terrain. Design’s “futuring” tools can help us actively cultivate forward-facing perspectives on technologies and emergent social norms to help ensure that solutions have meaningful staying power and relevance.


*

Change often requires a leap of faith in addition to an investment of effort, and is difficult enough in individual organizations where performance reviews, hierarchy, and social norms can be used to incent or set conditions for behavior. It’s far more challenging in the context of a loose network or community of practice like academic science, and will require a multi-faceted approach. Human-centered design is well-positioned to supplement the ongoing activity of sharing best practices and specific, successful examples of new research assessment strategies, contributing a deep understanding of what matters to individuals and entities, and a perspective on realigning incentives, social norms, and points of leverage where we might redefine and reward what’s valued in the future.

Behaviorally-led, Design-informed

For years, the “opt-out” approach to increasing organ donation has been used as a classic—perhaps even tired—example of the power of nudges. The combination of defaults and status quo bias, compared with requiring people to actively opt in to become donors, resulted in dramatically higher organ donor rates. Case closed!

Not so fast.

When recent headlines declared that a research paper[1] had thrown this accepted truism into question, a flurry of Internet chatter ensued. Opt-out for organ donation doesn’t work? Defaults are suspect? Against the already fraught context of the field’s replicability crisis, this new finding had the potential to rock basic assumptions of behavioral economics and nudging to its core.

On further inspection, however, the hand-wringing was unwarranted. The “nudge” to increase the number of potential organ donors through an opt-out default had actually worked just fine. But the context of going from organ donor checkbox to viable organs is more complex in reality. It turned out that organs from “opt-out” donors were far less likely to go to waiting recipients when families had the final say, or in situations where organ donation is less culturally accepted. In cases when emotionally distraught families didn’t feel that the presumed consent of a default option truly reflected their loved one’s active choice, it was easy to override: the default choice simply didn’t feel like enough of a decision.

In a September 2018 article, Sarah Reid and I introduced a model intended to define a new landscape for strategic design and behavioral science. In this model, we posited that behavioral and design problem-solving methodologies occupy a shared terrain, with each leading or supporting depending on the nature of the challenge. In situations where the solution centers on behavior change, behavioral science can take the lead while design lends a hand. In other settings, especially when creating the new or developing systemic solutions, strategic design can lead the charge, informed by behavioral insights.

Behavioral science-led, design-informed interventions often center on modifications to “choice architecture,” rethinking the user’s environment to address common behavioral tendencies and help individuals make better choices. This can take the form of adjusting how people perceive inputs and what counts as viable options, or by integrating behavioral guidance into processes to gently nudge us into “good” behavior.

Some use the metaphor of plumbing to illustrate the nature of behavioral solutions: correcting or optimizing infrastructure to help reroute convoluted flows and cut out unnecessary complexity. But installing plumbing in a house with an uneven foundation or optimizing a bathroom that's in the wrong location means only solving part of a problem, and may in fact invite other issues.

The organ donation scenario is a perfect illustration of why behavioral science benefits from design’s more personal contextual lens. Increasing the pool of potential donors is a behavioral challenge well-suited to nudges, but we don’t just need more donors: we need to increase the number of donated organs. In other words, the family’s role in agreeing to donate or withhold organs wasn’t some kind of extra detail, but a critical component of the problem to be solved. Solving for this requires an understanding of the situational context to ensure we’re addressing the right problem and to increase our chances of implementing a successful solution.

But that’s just one instance. With a jump into our time machine back to 2008—the year of Nudge’s original release—let’s look at another example where behaviorally-led, design-informed approaches played a part.

The 2008 financial downturn evoked a widespread sense of consumer insecurity, which in turn fed a general aversion to making major purchases. Car manufacturers were particularly hard-hit: the option of putting down thousands of dollars for a new car that would immediately begin to depreciate was off the table for many consumers.

Knowing that financial incentives can help overcome this hesitation, many car companies offered 0% down or deferred payments to attract potential buyers. While they may not have been explicitly informed by behavioral insights, instinctively these car companies’ offers were grounded in well-known behavioral strategies related to time discounting. No money down meant no immediate loss, and deferred payments relied on our optimism about the future. Yet sales remained flat, or even decreased… no one wanted to buy a car in such uncertain times.

Hyundai was in a worse spot than most. They had no financing arm to negotiate or provide better terms, and thus lacked the economic levers used by other companies. In spite of this deficit they hit upon an alternative: the Hyundai Assurance program.

This program recognized that the uncertain economic conditions were an important underlying cause of consumer reticence to buy, but also that the actual, concrete manifestation of that uncertainty was the vivid fear of losing one’s job. At the time, newspapers were full of stories about major layoffs and more cuts to come; given the availability heuristic and our generally poor perception of statistical probability, it’s hardly surprising that many people assumed they might be next on the chopping block, and were wary of buying a car today only to find themselves unemployed and saddled with car payments tomorrow.

Hyundai’s offer—"certainty in uncertain times"—brilliantly reset the terms of this contextual dilemma. Purchasers who made at least two payments and suffered a significant life disruption, such as involuntary job loss or personal bankruptcy if self-employed, could return the car for up to a year for a full refund and no impact on their credit score. The results were nothing short of miraculous. Hyundai sold 460,000 cars in 2009 while other manufacturers’ sales continued to stagnate, with only 350 people—less than 1 percent—ultimately using the Assurance program.

For the other 99 percent of customers, their lives played out exactly as they might have had the Assurance program never existed. But their perception of the decision was radically different due to Hyundai’s framing of the problem, which re-centered consumers’ perception of what counted as a valid choice by addressing their internal, identity-based choice architecture rather than just an external, environmentally-based choice architecture. Whereas most companies understood loss aversion, Hyundai recognized the importance of understanding the human context in this situation.

The addition of this personal and contextual lens provides a way to get at the why (and, almost as importantly, the why not) behind people’s decisions to engage. While some nudges are aimed squarely at changing behavior in a user’s best interest more transparently—increasing 401k savings is hard to argue with—many behavioral challenges require some degree of user agency or acceptance of what choices are even valid. A greater understanding of the personal and situational context—What’s my story? Who am I? With whom do I feel a sense of kinship?—can help us not only understand people’s motivations with greater clarity, but also grasp to what extent their options make sense in the narrative they tell themselves. The power of nudges comes from their ability to side-step human decision-making weaknesses; imbuing them with insight into people’s self-perceptions and what they value can make solutions more effective, and even help inform what is likely to work in the first place.

The implications of identity- and value-based framing show up in other types of decisions as well. Ask your average person whether they’d rather die or get a haircut and you’ll get peculiar looks: the value of one’s life will override the value of one’s hair every time. Yet for patients battling cancer with chemotherapy, this equation is different. Even when they objectively recognize that treatment is necessary to save or prolong their lives, the decision to embark on a chemotherapy regimen does not come lightly. The intensity of this decision-making can be baffling to oncologists, who might see their patients’ hesitation to lose their locks as vain or strange in contrast with the rational path of starting appropriate treatment.

But is this really so surprising? Hair loss looms large for patients not just as an abstract form of loss aversion, but as a very concrete signal of illness, a marker that one has crossed over from being a person to a patient. It is tangible and vivid in the form of the image that stares back from the bathroom mirror, but also in the ways that others often see and treat people with significant illnesses as less than fully human.

Truly solving for loss aversion in this case requires looking beyond the theory that loss hurts more than gain feels good, or the simple equation of life > hair, to understand what real people value—and thus fear losing—in their actual context: for chemotherapy patients, hair is a proxy for much larger constructs of identity, control, and a sense of self-determination. In the same way that defaults increase the number of organ donors but only indirectly lead to more viable organs available for transplant, behavioral interventions that address medication adherence without also taking a person’s larger context, personal values, and sense of identity into account are powerful, but not sufficient. Seen through this lens, it’s easier to diagnose why some smart-on-paper interventions have gone wrong.

The 1970s introduced us to Scared Straight, a program created to re-direct teens heading toward a life of crime. The scenario went like this: a group of at-risk teens spent the afternoon at a prison, where they came face to face with inmates. Over the course of the visit, captured on film, the prisoners—hardened criminals and legitimately scary dudes—related their stories about where their lives of crime had led them, alternately scolding, threatening, and generally scaring the bejeesus out of these young people.

In some ways this program was ahead of its time, retrospectively applying behavioral insight at a time when the field of behavioral economics was but a glint in Kahneman and Tversky’s eye: vivid stories in place of abstract data to make a compelling case; the threat of loss aversion, in the form of a very concrete loss of freedom in prison; social proof, through direct connections between the youths’ current situations and the former lives of the prisoners. By all indications, the experience was a brutally effective cautionary tale. Interviewed on the way out, the teens were viscerally shaken, fully convinced by the need to course-correct their life trajectories. The footage was turned into an Academy Award–winning film, and variations on the program still exist.

But this wasn’t the success story it initially seemed. Followed over time, the teens involved in the program and others like it actually had worse outcomes than those with more traditional interventions. Why? Hindsight provides some clues. While the short-term effect was certainly powerful, it ignored the broader real-life context that the teens returned to. We know that present-tense experiences have outsize influence on our emotional and rational states, but they fade over time. In this case, as the afternoon dissolved into memory it was replaced by the temptations and social norms of petty crime that the teens were re-surrounded by on a daily basis. However terrifying the experience had been in the moment, it became abstract over time, and knowing you’d lived through it—That’s as bad as it gets? He wasn’t that tough—fed a tough-guy narrative that bolstered the youths’ identities as bad-asses rather than serving as a deterrent.

Let’s end on a success story: solving the stigma of free or subsidized lunches. For middle- and high-schoolers, qualifying for a free lunch can be social kryptonite. Given the choice to receive a free lunch or skip it altogether, the answer is obvious: better to go without food than to suffer the stigma of needing it in the first place. Some schools made this choice easier in the wrong way, exacerbating the problem by creating different lines and using alternate payment processes for subsidized lunches that only called attention to the disparity.

From an economic point of view, the choice is still an easy one, but with a different answer: Who turns down free food? Even from a behavioral standpoint, adjustments to choice architecture can only do so much. Smarter placement and framing of options might help move the needle slightly—some New York schools went so far as to invite suited-up professional athletes from local teams to eat the subsidized lunches in an effort to make the option cooler—but the super-power strength of teenage in- and out-groups is far too strong. It’s a behavioral problem, but not one that can be solved through behavioral interventions alone.

Many large cities such as Chicago, Boston, and Dallas solved the problem not by adjusting the choice architecture of one half of a two-tiered system but by going after the bigger issue of stigma, leveraging federal programs to offer free school lunch for everyone. Not only does this benefit more kids, it also solves the problem at the root of the issue—the uneven foundation—rather than just fixing the broken plumbing.

In all these examples, the challenge was behavioral at its core, but required an understanding of the complexities of contextual uncertainty, identity and perception, or “construal,” to solve successfully. Where behavioral science contributes a critically important perspective on common cognitive processing errors, fields like design and sociology can help to uncover what feeds and flavors people’s perceptions of choice, contributing a “here and now” sense of user insight that helps to clarify what people value and, thus, what they fear losing most. As these stories indicate, we ignore this at our own peril.

[1] Yiling Lin, Magda Osman, Adam J. L. Harris, and Daniel Read, “Underlying wishes and nudged choices,” Journal of Experimental Psychology: Applied, 2018.

How Behavioral Design Can Help De-Risk Innovation

From Innovation Excellence, August 2017

Everyone who works in innovation will regularly hear clients ask how to “de-risk” innovation. In a world where consumers have more choice than ever before, how can we provide an increased sense of confidence about the solutions we deliver?

While no solution is ever a sure thing, behavioral economics can help. It is increasingly used by companies across industries—from health care and financial services, to the public sector and consumer goods—to fortify solutions based on an understanding of consumers’ cognitive and behavioral biases. In a nutshell, these insights help us understand what people are likely to actually do, not what they say they’ll do.

In the user-centered design world, combining principles from behavioral economics with user research and a client’s business context allows us to more systematically de-risk innovation solutions. At Doblin, we call this Behavioral Design, and it’s founded on the following key principles:

1. Insights into humans’ “irrational” decision-making can strengthen the design of smart-on-paper solutions.

How often have you bailed from a website simply because you don’t want to create and remember yet another username and password? Or purchased an expensive item that sits in a closet unused because the idea of throwing it out feels wasteful? Or taken a walk around the block just to push that step-counter from 9,798 to 10,000?

Behavioral design allows us to incorporate our knowledge of these kinds of “irrational” behaviors into solutions, designing for real people and their behavioral tendencies rather than perfect versions of ourselves.

2. Behavioral insights are grounded in quantitative and experimentally tested findings.

User-centered design embraces surprising insights, generative open-ended research, and synthetic thinking… yet the very attributes that make design thinking such a powerful tool for innovation are often hard to quantify. As a result, businesses often hunger for quantitative validation to assure them of an innovation concept’s viability.

In contrast, the roots of behavioral economics grew from a discipline of research experiments and quantifiable results. While behavioral tendencies are exactly that—not a rule of law, but an increased likelihood—we can leverage this quantitative foundation to foster a greater degree of assurance and confidence about the value of interventions early on.

3. Knowledge of behavioral biases can supplement—not replace—“here and now” user research.

Piyush Tantia’s article in SSIR thoughtfully suggested that Behavioral Design is the new Human-Centered Design. But what if it’s not an either/or equation? What if, instead, we borrow the best of both worlds: using qualitative end-user research to understand people’s latent needs in the context of their lives, and behavioral design to inform solutions grounded in human behavioral biases? Exclusively focusing on user needs runs the risk of ignoring important cognitive biases, but only considering behavioral interventions neglects important inputs like someone’s sense of identity, what they value, and their incoming experiences… all of which inform behavior.

This combination is powerful because our behaviors and choices are shaped by both our inherent human tendencies and the world we live in. Each supplies a part of the picture. Here in 2017, many of us have smartphones, subscribe to Netflix, and use Venmo. These things did not exist in 1967, and the activities they support will likely take radically different forms in the future, but our human tendencies to dislike loss more than gain (loss aversion) or over-invest in things we own (endowment effect) are forever.

4. Innovative solutions almost always demand behavior change, whether initial uptake or ongoing adoption.

History is littered with examples of “can’t lose” solutions that lost out to alternatives or just never got traction. Why is this the case?

It can partly be explained by the fact that designing for behavioral change is often inherent in innovation, but is a notoriously difficult problem to solve. We rarely make choices in a vacuum: New solutions have uncertain value, and must often overcome our investment in older technologies or solutions (as anyone who replaced a record collection with CDs might recall). Intentionally building on known behaviors or habits can only help. For example, car sharing companies have extended and expanded the existing mental model of car rental, while Warby Parker shifts the familiar activity of eyeglass frame shopping from store to home.

To a certain degree, innovation remains a test of faith—smart solutions are no guarantee of success. But applying behavioral design thoughtfully can go a long way toward de-risking those good ideas.

Why behavioral design?

One day, when I was a kid, I had a philosophical argument with my brother. I had a math test coming up, and I was grumbling about studying for it – nothing too controversial there. His stance, however, was that not only would I study, go to school, and take the test as planned, but I would do so because in my heart of hearts that’s what I wanted to do. He outrageously extrapolated this to make a general prediction: everyone always did what they really wanted to do, even if the choices they made often seemed, frankly, fundamentally unpleasant.

In my mind, this was the height of stupidity. Obviously I didn’t really want to go to school for an exam more than I wanted to hang out with my friends, or stay at home and watch MTV (this was the 80s, after all). Surely my parents didn’t want to go to work more than they wanted a day off, either—and they were grown-ups, far more rational than I. Examples that refuted his point abounded.

Or did they?

When forced to follow my choices through to their logical conclusion, I was somewhat dismayed to admit that the consequences of not studying or taking the test would, in the long run, cause more trouble than they were worth. I was too rule-based even as a kid to consider skipping out altogether and getting a zero, so if I tried to postpone the inevitable I would have to come up with some credible story for my math teacher in order to schedule a make-up exam. On to option 2: if I didn’t study, but still showed up for the exam, the likely outcome of getting a low score would not only be a horrible blow to my ego – I was good at math! – but it would also bring my grade down significantly and could end up landing me in a different class than the one my friends were in: this was a social risk I was not willing to take. Speaking of friends… actually telling them that I had bailed while they had all stepped up to the plate was, on its own, more than I wanted to face. And perhaps worst of all, informing my science-minded parents that I had decided not to avail myself of the joys of trigonometry was mortifying.

Clearly the deck was stacked: I would study, and I would go to school to take the exam, if for no other reason than that I either feared the consequences of not doing so, or found the alternative to be such a hassle that it wasn’t worth it. As much as I was loath to admit it, was there some truth to my brother’s theory?

*

Economic theory has traditionally assumed that individuals make wholly rational financial decisions, using all the necessary information at their disposal to achieve ends in their own best interest. The field of behavioral economics recognized that this was not typically the case—and, in fact, rarely was—and grew out of a desire to provide additional cognitive and psychological insight into the process of decision-making to better understand both how people interpret their options and how those perceptions affect the choices they make.

In recent years, behavioral economics has gained a fair amount of publicity and traction as a strategy for developing policy solutions in which the unpredictable nature of human behavior plays a significant role. When we begin to think of ‘economics’ as a more complex system of exchange that extends beyond finance, we can not only start to see how and why people make less rational—but perhaps more ‘human’—choices, but also why it may be valuable to apply theories derived from behavioral economics to design.

The concept of 'value’ itself casts a long shadow in economics history as a monetary measure, but in everyday use we realize that, in fact, it has much broader applications: consider the very real value of time, convenience, social currency, or experience. We pay people to clean our houses because we value tidy homes and clean laundry… but we value saving the time and effort we would have spent doing it ourselves even more, and are willing to exchange money to avoid the chore and spend that time doing other things. Abstract ideas of 'gain', 'loss', ‘risk’, and 'investment' have similar wiggle room; while these terms make perfectly fine economic sense, they can also apply more broadly in more human, non-quantitative ways.

Consider my brother’s philosophical position, for example. Clearly there was little financial risk at stake in my decision to take the exam or face the consequences, but my perceptions of risk, loss aversion, and a weighing of current and future needs were embedded in nearly every aspect of my decision-making even if I didn’t realize it at the time. Fundamentally, the decisions we make are often inextricable from how we see ourselves; my hesitance to blow off that exam was tied as much to my perceived identity as someone who did well in school, and to strong kinship ties to my peer group, as to any traditional financial benefit. This sense of identity is deeply tied to the things we value, both tangible and intangible.

Current applications of behavioral design point the way to exciting new strategies and tools that can help behavioral scientists and designers understand the choices users make and create better approaches to those challenges. Through a deeper understanding of the ways in which people problem-solve and make decisions, and focusing on key known tendencies—such as the fact that humans respond to vivid, anecdotal, and easily available information, have a difficult time balancing present-tense needs with future ones, and have deep-seated needs to moderate their behavior based on their sense of relevant personal identities and peer influences—we can design solutions that leverage and/or counter common cognitive biases more effectively. We can start to incorporate behavioral economics into research, analysis, and ideation by considering factors such as:

  • Consider how concepts of gain, loss, risk, and uncertainty apply to decision-making

  • Recognize ways in which people exchange specific kinds of value, like time and money

  • Understand how emotional and social pressures can act as strong behavioral drivers

  • Provide feedback from the right source, at the right time and level, for maximum effect

  • Explore how self-control strategies work—or don't—in order to improve them

  • Decouple aspects of value, cost, and use to create new kinds of business models

But extending the design field by folding behavioral economics strategies into existing methodologies is just a start. Much in the same way that ethnographic research reached across domains to inform user-centered research, eventually becoming an integral component of the discipline, behavioral economics has the potential to radically shift how we think about and solve for humanity-based challenges. By incorporating generative, user-centered research into behavioral science we can contribute a critical lens of user context while retaining the sense of quantitative legitimacy that empirical studies excel at. Establishing leading and lagging metrics for measuring success can help design lead when behavioral solutions are too future-oriented or systems-driven to test with randomized controlled trials. And creating diagnostic and analytic behavioral-economics-informed tools for problem-solving can serve as the basis for more systematic and satisfying applications of design thinking to situations that are ambiguous or otherwise difficult to wrangle. Clearly, there is no shortage of directions to pursue; the problem is not in identifying how behavioral economics might inform strategic design thinking and vice versa, but figuring out which approaches to tackle first.