Shared reflections for the Laboratorio de Diseño Integral de Sistemas de la Información, UAM Santa Fe
Some time ago I came across Ellen Lupton and Julia Lupton's book Design Your Life, where I first encountered the question "What does it allow you to do?" Through an amusing read, I took in the first descriptions of our interactions with everyday objects from that new point of view. It's not about using an object the way it was designed to be used, but about seeing the object as a means that lets me do so many other things.
In my case, my encounters with the possibilities of design have long been tied to commercial needs, and that has taught me that selling is full of possibilities for design. Fortunately, the world is turning at great speed, and you now have other resources, more experiences than you will be able to capitalize on.
With that in mind, I'm sharing a selection of episodes from a program I've listened to frequently for years now. It's called 99% Invisible. Visit it and, above all, I recommend you listen to the selection I've made for you. I'm sure you'll find ideas, principles, and points of view on topics you already know and others you never imagined. I added a couple of extra links, but the truth is that each podcast and its links already contain enough references for a good immersion.
Three episodes struck me as the most relevant.
This manifesto was first published in 1999 in Emigre 51.
We, the undersigned, are graphic designers, art directors and visual communicators who have been raised in a world in which the techniques and apparatus of advertising have persistently been presented to us as the most lucrative, effective and desirable use of our talents. Many design teachers and mentors promote this belief; the market rewards it; a tide of books and publications reinforces it.
Encouraged in this direction, designers then apply their skill and imagination to sell dog biscuits, designer coffee, diamonds, detergents, hair gel, cigarettes, credit cards, sneakers, butt toners, light beer and heavy-duty recreational vehicles. Commercial work has always paid the bills, but many graphic designers have now let it become, in large measure, what graphic designers do. This, in turn, is how the world perceives design. The profession’s time and energy is used up manufacturing demand for things that are inessential at best.
Many of us have grown increasingly uncomfortable with this view of design. Designers who devote their efforts primarily to advertising, marketing and brand development are supporting, and implicitly endorsing, a mental environment so saturated with commercial messages that it is changing the very way citizen-consumers speak, think, feel, respond and interact. To some extent we are all helping draft a reductive and immeasurably harmful code of public discourse.
There are pursuits more worthy of our problem-solving skills. Unprecedented environmental, social and cultural crises demand our attention. Many cultural interventions, social marketing campaigns, books, magazines, exhibitions, educational tools, television programs, films, charitable causes and other information design projects urgently require our expertise and help.
We propose a reversal of priorities in favor of more useful, lasting and democratic forms of communication – a mindshift away from product marketing and toward the exploration and production of a new kind of meaning. The scope of debate is shrinking; it must expand. Consumerism is running uncontested; it must be challenged by other perspectives expressed, in part, through the visual languages and resources of design.
In 1964, 22 visual communicators signed the original call for our skills to be put to worthwhile use. With the explosive growth of global commercial culture, their message has only grown more urgent. Today, we renew their manifesto in expectation that no more decades will pass before it is taken to heart.
Sheila Levrant de Bretteville
Linda van Deursen
J. Abbott Miller
Jan van Toorn
I may add myself to this. Federico Hernandez-Ruiz
Here’s the link to the original post: http://www.emigre.com/Editorial.php?sect=1&id=14
And a copy of the 1964 manifesto written by Ken Garland along with 20 other artists.
Since 2005, startup accelerators have provided cohorts of startups with mentoring, pitch practice and product focus. However, accelerator Demo Days are a combination of a graduation ceremony and a pitch contest, with the uncomfortable feel of a swimsuit competition. Other than "I'll know it when I see it," there's no formal way for an investor attending Demo Day to assess project maturity or quantify risks, and other than measuring engineering progress, there's no standard language to communicate progress.
Corporations running internal incubators face many of the same selection issues as startup investors, plus they must grapple with the issues of integrating new ideas into existing P&L-driven functions or business units.
What’s been missing for everyone is:
a common language for investors to communicate objectives to startups
a language corporate innovation groups can use to communicate to business units and finance
data that investors, accelerators and incubators can use to inform selection
While it doesn't replace great investor judgment, pattern-recognition skills, or mentoring, we've developed an Investment Readiness Level tool that fills in these missing pieces.
Investment Readiness Level (IRL) for Corporations and Investors
The startups in our Lean LaunchPad classes and the NSF I-Corps incubator use LaunchPad Central to collect a continuous stream of data across all the teams. Over 10 weeks, each team gets out of the building and talks to 100 customers to test their hypotheses across all 9 boxes of the business model canvas.
We track each team's progress as they test their business model hypotheses. We collect the complete narrative of what they discovered talking to customers, as well as aggregate interviews, hypotheses to test, invalidated hypotheses, and mentor and instructor engagements. This data gives innovation managers and investors a feel for the evidence and trajectory of the cohort as a whole and a top-level view of each team's progress. The software rolls all the data into an Investment Readiness Level score.
(Take a quick read of the post on the Investment Readiness Level – it’s short. Or watch the video here.)
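The roll-up described above can be sketched in a few lines of Python. To be clear, this is a hypothetical illustration: the post doesn't publish the actual scoring formula, so the evidence fields, weights, and 0-10 scale below are invented assumptions.

```python
from dataclasses import dataclass

# Hypothetical sketch only: fields, weights, and scale are assumptions,
# not the real Investment Readiness Level formula.
@dataclass
class TeamEvidence:
    interviews: int               # customer interviews completed
    hypotheses_tested: int        # business model hypotheses tested
    hypotheses_invalidated: int   # hypotheses rejected by evidence
    canvas_boxes_validated: int   # of the 9 business model canvas boxes

def irl_score(t: TeamEvidence, target_interviews: int = 100) -> float:
    """Roll a team's evidence stream up into a single 0-10 readiness score."""
    interview_progress = min(t.interviews / target_interviews, 1.0)
    canvas_progress = t.canvas_boxes_validated / 9
    # Invalidated hypotheses still count: they are evidence of learning.
    learning = min((t.hypotheses_tested + t.hypotheses_invalidated) / 50, 1.0)
    return round(10 * (0.4 * interview_progress
                       + 0.4 * canvas_progress
                       + 0.2 * learning), 1)

team = TeamEvidence(interviews=80, hypotheses_tested=30,
                    hypotheses_invalidated=10, canvas_boxes_validated=6)
print(irl_score(team))  # 7.5
```

The point is only the shape of the tool: many noisy signals per team in, one comparable score per team out.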
The Power of the Investment Readiness Level: Different Metrics for Different Industry Segments
Recently we ran a Lean LaunchPad for Life Sciences class with 26 teams of clinicians and researchers at UCSF. The teams developed businesses in 4 different areas – therapeutics, diagnostics, medical devices and digital health. To understand the power of this tool, look at how the VC overseeing each market segment modified the Investment Readiness Level so that it reflected metrics relevant to their particular industry.
Medical Devices
Allan May of Life Science Angels modified the standard Investment Readiness Level to include metrics specific to medical device startups. These included: identification of a compelling clinical need, a large enough market, intellectual property, regulatory issues, reimbursement, and whether there was a plausible exit.
In the pictures below, note that all the thermometers are visual proxies for the more detailed evaluation criteria that lie behind them.
Therapeutics
Karl Handelsman of CMEA Capital modified the standard Investment Readiness Level (IRL) for teams developing therapeutics to include identifying clinical problems, agreeing on a timeline to pre-clinical and clinical data, the cost and value of data points, what quality of data to deliver to a company, and building a Key Opinion Leader (KOL) network. The heart of the therapeutics IRL also required "proof of relevance": was there a fully articulated path to revenues and a defined operational plan? Finally, did the team understand the key therapeutic liabilities, and did they have data proving on-target activity and evidence of a therapeutic effect?
Digital Health
For teams developing Digital Health solutions, Abhas Gupta of MDV noted that the Investment Readiness Level was closest to the standard web/mobile/cloud model, with the addition of reimbursement and technical validation.
Diagnostics
Todd Morrill wanted teams developing Diagnostics to have a reimbursement strategy fully documented, the necessary IP in place, the regulatory and technical validation (clinical trial) regime understood and described, and the cost structure and financing needs well documented.
For their final presentations, each team explained how they tested and validated their business model (value proposition, customer segments, channels, customer relationships, revenue, costs, activities, resources and partners). But they also scored themselves using the Investment Readiness Level criteria for their market. After the teams reported the results of their self-evaluation, the VCs then told them how they had actually scored. We were fascinated to see that the team scores and the VC scores were almost the same.
The Investment Readiness Level provides a “how are we doing” set of metrics
It also creates a common language and metrics that investors, corporate innovation groups and entrepreneurs can share
It’s flexible enough to be modified for industry-specific business models
It’s part of a much larger suite of tools for those who manage corporate innovation, accelerators and incubators
P.S. If you want to learn more about the IRL and other tools, we teach a 2-day class for corporate innovation, accelerators and incubators. Info here
We know that consumer purchase decisions are often made quickly and subconsciously, but there are opportunities to influence a consumer's perception of a brand. People often make buying decisions using all five of their senses, and once product designers discover what each of these sensory influencers is, they can develop packaging that strategically speaks to consumers at each stage of the decision-making process. It's ultimately about designing a complete experience–one that supports the brand every step of the way.
At my company, we developed the 4sight Sensory Lab, pictured above, to uncover these answers. Here, for example, cold beverage drinkers known to prefer their drinks not simply cold, but chilled to the perfect temperature, are taken through a progression of exercises that mimics the various points of contact that consumers have with a product.
We identify which bottle shape, size, color, material, and texture promises that sense of cold refreshment at first glance. As the test subjects move closer, details such as condensation and frost become evident and when they are handed several bottles, each chilled to the exact same temperature–but made of different materials, textures, shapes and finishes–they provide feedback on which one feels like just the right cold.
In the Sensory Lab, our process helps us ensure that at each stage of interaction with a brand, consumers receive the right information, enabling them to see, feel, hear, smell, and taste the value of the product. Here, we’ve identified the six stages that lead to a first purchase or a repeat purchase:
THE FIRST GLANCE
This is the first impression at a distance, seeing the product in someone else's hand, on the shelf, or across the room. It's the first visual promise of what a product will do for your senses. For Pom 100% Pomegranate Juice, the distinctive profile of the bottle, featuring those fully rounded spheres, allows the distinct dark red color of the juice to catch a shopper's attention. It promises a bold, robust taste. A new entry into the tequila segment, SX Tequila chose a distinctive, curvaceous bottle with smooth lines and a frosted texture to communicate the sense of a smooth-tasting, chilled beverage.
THE CLOSER LOOK
Here, consumers take a closer look, and this is where details begin to hint at tactile sensations. Flowing details etched into the structure of the Aquafina water bottle strongly suggest the refreshment the product provides.
Orangina, meanwhile, promises its fresh orange flavor through a dimpled finish on the bottle that suggests you are consuming straight from an actual orange.
THE PHYSICAL INTERACTION
Next, consumers make that first physical contact and combine the visual with the tactile experience. When grasped, the gentle curvature of the Febreze bottle and the angled spray head convey the soft and pleasant aroma that will fill the air. The smooth, diagonal neck on the new Miller Lite Bottle promises a refreshing flow of beer while the bold taper from the neck to the body provides a strong and confident grip for the hand. Adding the texture of the hops etched in the glass provides further engagement.
THE OPENING
When the consumer takes a physical step toward consumption or use of the product, there's another opportunity to solidify your brand's perception. When the foil cover is peeled off of a can of San Pellegrino, it offers the sensation of actually peeling fruit. It also incorporates a crinkling sound, which adds to the sensory experience at opening.
CONSUMPTION OR USAGE
This is the point at which the product is consumed or used, and here all five senses can be at play.
A smooth metal tip on Clinique’s Even Better Eyes product provides a refreshing and reviving cold sensation on the skin. For Gerber Good Start, the designated scoop holder on the side of the container provides for a clean usage experience and preserves the product for future consumption, as fingers do not contaminate the powder.
DISPOSAL OR STORAGE
There's another opportunity to create a pleasant user experience when the product is disposed of or put away for later use. Wrigley 5 Gum incorporates a lock feature and embossed details to convey a secure and clean resealable pack. The Oreo cookie package also utilizes the sense of sight with a resealable film to promise lasting freshness. Once the film is replaced after each usage, it recreates the look of a fresh, unopened package.
In the Sensory Lab, we've gleaned significant insight into how the five senses influence consumer decision-making at six pivotal points. Incorporating a similar approach in your design process will help ensure your package effectively communicates key brand attributes at each and every point of influence.
[Image: Shopping via Shutterstock]
Here is the link to the original article: http://www.fastcodesign.com/3024657/6-tips-for-making-a-powerful-first-impression?partner=newsletter
Know When to Use Top-Down and When to Use Bottom-Up Approaches
Market research is grounded in the branch of philosophy known as logic. Two logical reasoning approaches are basic to research design. These approaches are known as deduction and induction.
Deductive reasoning is a top-down approach that works from the general to the specific. In empirical research, that means a market researcher begins a study by considering theories that have been developed in conjunction with a topic of interest. This approach lets a market researcher think about research that has already been conducted and develop an idea about extending or adding to that theoretical foundation. From the topical idea, the market researcher works to develop a hypothesis. This new hypothesis will be tested by the market researcher in the process of conducting a new study. Specific data collected and analyzed in the new study will form the basis of the test of the hypothesis. The specific data will either confirm the hypothesis, or it will not. [It is important to note that a hypothesis that is not confirmed has not been proven false.]
Deductive Research Steps
GENERAL – Literature Search & Theories
Topic of Interest
Hypothesis Development
Data Collection
Data Analysis & Hypothesis Testing
SPECIFIC – Confirm the Hypothesis or Not
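The last two steps, testing a hypothesis against data and then confirming it (or not), can be sketched in a few lines of Python. This is illustrative only: the article gives no worked example, so the sample data, the hypothesized mean, and the hard-coded one-sided critical value (t ≈ 1.833 for n = 10, α = 0.05) are all invented assumptions.

```python
import statistics

def confirms_hypothesis(sample, hypothesized_mean=5.0, t_critical=1.833):
    """Deductive step: test the hypothesis 'true mean > hypothesized_mean'."""
    n = len(sample)
    std_err = statistics.stdev(sample) / n ** 0.5
    t = (statistics.mean(sample) - hypothesized_mean) / std_err
    # Confirmed if t clears the critical value; otherwise "not confirmed",
    # which, as noted above, is not the same as proven false.
    return t > t_critical

# Invented sample: observed session times (minutes) for 10 users.
sample = [5.8, 6.1, 5.5, 6.4, 5.9, 6.0, 5.7, 6.2, 5.6, 6.3]
print(confirms_hypothesis(sample))  # True
```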
Inductive reasoning is a bottom-up approach that moves from the specific to the general. In this case, specific refers to an observation made by the market researcher that eventually leads to broad generalization and theory. [It might be important to note – for discussions with colleagues or in public – that the term is bottom-up and not bottoms-up. Bottoms-up is a sort of toast for drinking, something that may seem entirely appropriate once the research study is completed.]
An inductive research approach begins with specific observations made by a market researcher who begins a study with an idea or a topic of interest, just as in a deductive approach. However, in an inductive approach, the researcher does not consider related theories until much further along in the research. From the observations or measures the market researcher conducts – generally in the field and not in a laboratory setting – clusters of data or patterns begin to emerge. From these regularities or patterns, the market researcher generates themes that come from analysis of the data.
Inductive Research Steps
SPECIFIC – Observations & Measures
Topic of Interest
Data Clusters or Patterns
GENERAL – Emerging Themes & Theories
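The bottom-up move from raw observations to data clusters or patterns can be sketched in a few lines. The field-note codes below are invented for illustration, and the two-occurrence threshold for calling something a "pattern" is an arbitrary assumption:

```python
from collections import Counter

# Invented coded field notes from hypothetical customer interviews.
field_notes = [
    "price", "confusing checkout", "price", "delivery speed",
    "price", "packaging", "confusing checkout", "delivery speed", "price",
]

# Bottom-up: let recurring patterns emerge from the data themselves,
# rather than starting from a prior theory.
counts = Counter(field_notes)
themes = [code for code, n in counts.most_common() if n >= 2]
print(themes)  # ['price', 'confusing checkout', 'delivery speed']
```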
Quantitative Research and the Hypothesis
If the market researcher is conducting quantitative research, theories can be considered at this point. However, if the market researcher is conducting qualitative research, then formal hypothesis testing does not take place. Rather, the market researcher may form generalizations based on the strength of the data and the themes that have emerged.
Data collection and data analysis in qualitative research are iterative. That is to say, data collection doesn't happen all at once and then – as though the market researcher has thrown a switch – data analysis begins. Rather, some data is collected and considered by the researcher, then some more data is collected and considered, and so on. At a certain point, when sufficient data clusters or patterns have emerged, the market researcher will decide that the data collection can slow, stop, or change direction.
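That iterative collect-then-analyze cycle can be sketched as a loop that stops at saturation. The theme codes, the batch size, and the stopping rule (a round of fieldwork that surfaces no new codes) are illustrative assumptions, not a formal method:

```python
import random

random.seed(1)  # deterministic for the sketch

# Hypothetical universe of theme codes that fieldwork might surface.
codes = ["price", "trust", "speed", "design"]

def collect_batch(n=5):
    """Stand-in for one round of fieldwork (interviews, observations)."""
    return [random.choice(codes) for _ in range(n)]

seen = set()
rounds = 0
while rounds < 20:                 # safety cap on fieldwork rounds
    batch = collect_batch()
    new_codes = set(batch) - seen  # analyze as we go, not at the end
    seen |= set(batch)
    rounds += 1
    if not new_codes:
        break                      # saturation: stop collecting
print(rounds, sorted(seen))
```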
Data collection and data analysis in quantitative research are distinct stages. To mingle data collection and data analysis in the manner of qualitative research would compromise the integrity of the data. Some scientists would say that a lack of boundaries between the data collection and data analysis processes causes the data to become contaminated and the research to lack rigor. Findings from such compromised research would not be viewed as robust.
Causal Inquiry, Exploratory Inquiry, and Everything In-Between
Bottom-up research methods feel more unstructured, but they are no less scientific than structured top-down research methods. Because each type of research approach has its own advantages and disadvantages, it is not uncommon for a study to employ mixed methods. A market researcher who uses mixed methods applies a deductive research approach to the components of the study that show strong theoretical ties. Alternately, an inductive research approach is applied to the components of the study that seem to require a more exploratory inquiry.
It's a misrepresentation to picture deductive approaches and inductive approaches as two sides of the same coin. In practice, they are two ends of a continuum. Deductive research is associated with linearity and a search for causal relationships. Inductive research is associated with depth of inquiry and descriptions of phenomena. Mixed methods can be placed at about the mid-point of that continuum, with an emphasis on research breadth.
This article contains a much simplified explanation about the different types of deduction and inquiry. There are many layers to market research. The content in this article just begins to scratch the surface. For instance, if we consider the philosophical grounding of deductive and inductive reasoning, we might refer to the approaches as positivistic and naturalistic.
In science, there are two ways of arriving at a conclusion: deductive reasoning and inductive reasoning.
Deductive reasoning happens when a researcher works from the more general information to the more specific. Sometimes this is called the "top-down" approach because the researcher starts at the top with a very broad spectrum of information and works down to a specific conclusion. For instance, a researcher might begin with a theory about his or her topic of interest. From there, he or she would narrow that down into more specific hypotheses that can be tested. The hypotheses are then narrowed down even further when observations are collected to test them. This ultimately leads the researcher to test the hypotheses with specific data, confirming (or not) the original theory and arriving at a conclusion.
An example of deductive reasoning can be seen in this set of statements: Every day, I leave for work in my car at eight o'clock. Every day, the drive to work takes 45 minutes, and I arrive at work on time. Therefore, if I leave for work at eight o'clock today, I will be on time.
The deductive statement above is a perfectly logical statement, but it relies on the initial premises being correct. Perhaps today there is construction on the way to work and you will end up being late. This is why a hypothesis can never be completely proven: there is always the possibility that the initial premise is wrong.
Inductive reasoning works the opposite way, moving from specific observations to broader generalizations and theories. This is sometimes called a “bottom up” approach. The researcher begins with specific observations and measures, begins to then detect patterns and regularities, formulate some tentative hypotheses to explore, and finally ends up developing some general conclusions or theories.
An example of inductive reasoning can be seen in this set of statements: Today, I left for work at eight o'clock and I arrived on time. Therefore, every day that I leave the house at eight o'clock, I will arrive at work on time.
While inductive reasoning is commonly used in science, it is not always logically valid because it is not always accurate to assume that a general principle is correct. In the example above, perhaps ‘today’ is a weekend with less traffic, so if you left the house at eight o’clock on a Monday, it would take longer and you would be late for work. It is illogical to assume an entire premise just because one specific data set seems to suggest it.
By nature, inductive reasoning is more open-ended and exploratory, especially during the early stages. Deductive reasoning is more narrow and is generally used to test or confirm hypotheses. Most social research, however, involves both inductive and deductive reasoning throughout the research process. The scientific norm of logical reasoning provides a two-way bridge between theory and research. In practice, this typically involves alternating between deduction and induction.
A good example of this is the classic work of Emile Durkheim on suicide. When Durkheim pored over tables of official statistics on suicide rates in different areas, he noticed that Protestant countries consistently had higher suicide rates than Catholic ones. His initial observations led him to inductively create a theory of religion, social integration, anomie, and suicide. His theoretical interpretations in turn led him to deductively create more hypotheses and collect more observations.
Babbie, E. (2001). The Practice of Social Research: 9th Edition. Belmont, CA: Wadsworth Thomson.
If you were forced to rely on only two target audiences to guide all your future design work, I'd strongly recommend astronauts and toddlers. Fortunately, the connection between them goes beyond the design of their underwear to the nature of perception and expertise, to what we treat as valid data, and to what we choose to ignore as "noise"–the extraneous details, the out-of-category input, the anecdotal tidbits. As it turns out, noise is much more valuable for useful design insights than you might think.
First, the astronauts. One little-known quirk of the Apollo moon landings was the difficulty the astronauts had judging distances on the Moon. The most dramatic example of this problem occurred in 1971 during Apollo 14, when Alan Shepard and Edgar Mitchell were tasked with examining the 1,000-foot-wide Cone Crater after landing their spacecraft less than a mile away. After a long, exhausting uphill walk in their awkward space suits, they just couldn't identify the rim of the crater. Finally, perplexed, frustrated, and with the oxygen levels in their suits running low, they were forced to turn back. Forty years later, high-resolution images from new lunar satellites showed they had indeed come close–the trail of their footprints, still perfectly preserved in the soil, stops less than 100 feet from the rim of the crater. A huge, 1,000-foot-wide crater, and they couldn't tell they were practically right on top of it. Why?
It should have been easy for them, right? These guys were trained as Navy test pilots; landing jets on aircraft carriers requires some expertise in distance judgment. They also had detailed plans and maps for their mission and had the support of an entire team of engineers on Earth. But their expertise was actually part of the core problem. The data their minds were trying to process was too good. All of the “noise” essential to creating the patterns their minds needed to process the data accurately was missing. And patterns are the key to human perception, especially for experts.
Consider everything that was missing up there. First, there’s no air on the Moon, so there’s no atmospheric haze, either. Eyes that grew up on Earth expect more distant objects to appear lighter in color and have softer edges than closer things. Yet everything on the Moon looks tack-sharp, regardless of distance. Second, the lack of trees, telephone poles, and other familiar objects left no reference points for comparison. Third, since the Moon is much smaller than the Earth, the horizon is closer, thus ruining another reliable benchmark. Finally, the odd combination of harsh, brilliant sunshine with a pitch-black sky created cognitive dissonance, causing the brain to doubt the validity of everything it saw.
Ironically, that kind of truthful, distortion-free data is usually what experience designers want to have as input for their decision-making, no matter what they’re trying to do. We tend to believe that complex systems are the tidy, linear sum of the individual variables that create them. But despite the pristine environment of the Moon, the Apollo astronauts were repeatedly baffled when it came to simple distance and size perceptions, even after each team came back from the Moon and told the next team to be aware of it.
Meanwhile, the toddlers I mentioned earlier provide a corresponding example of the power of patterns in perception. When my first child was about 4, we came across a wonderful series of picture books called Look-Alikes, created by the late Joan Steiner. Each book has a collection of staged photographs of miniature everyday scenes like railway stations, city skylines, and amusement parks created entirely from common, found objects (see some examples here). Without any special adornment, a drink thermos masquerades as a locomotive, scissors become a ferris wheel, and even a hand grenade makes for a very convincing pot-belly stove. The entire game is to un-see the familiarity of the scene, and identify all the common objects ludicrously pretending to be something other than what they are. There’s no trick photography involved, but you can look at each picture for hours and not “see” everything that’s right there in front of you. You know it’s a trick, but you keep falling for it over and over.
The really amazing part is that the toddler, a true novice with only a few years’ experience in seeing, completely understands the scenes she’s looking at, even though every individual piece of “data” she’s looking at is a deliberate lie. Yet the pattern of data that creates the scene is “perfect.” We already know what those scenes are supposed to look like before we even see the book’s version of them, so we unconsciously project that pattern onto what we’re looking at, even to the point of constantly rejecting the contrary data our eyes are showing us. There is in fact no amusement park in the photograph I called an amusement park. But I see it anyway.
In data-processing parlance, the signal-to-noise ratio of the moonscape was perfect (actually, infinitely high), and zero for Look-Alikes pages (the whole joke is that there really was no signal there in the first place). Yet a toddler can read the noisy scene perfectly, and the seasoned test pilots were baffled by the noiseless scene. How can this be?
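For readers who want the term pinned down, signal-to-noise ratio is conventionally expressed in decibels; the example powers below are invented, but the two extremes mirror the moonscape and the Look-Alikes page:

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels."""
    if noise_power == 0:
        return math.inf       # the moonscape: all signal, no noise
    if signal_power == 0:
        return -math.inf      # a Look-Alikes page: no "real" signal at all
    return 10 * math.log10(signal_power / noise_power)

print(snr_db(100, 0))    # inf
print(snr_db(0, 100))    # -inf
print(snr_db(100, 10))   # 10.0
```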
The lesson is that patterns drive perception more so than the integrity of the data that create the patterns. We perceive our way through life; we don’t think our way through it. Thinking is what we do after we realize that our perception has failed us somehow. But because pattern recognition is so powerfully efficient, it’s our default state. The thinking part? Not so much.
This just might be why online grocery shopping has yet to really take off. The average large U.S. supermarket offers about 50,000 SKUs, yet a weekly grocery shopper can easily get a complete trip done in about 30 minutes. We certainly don't feel like we're making 50,000 yes/no decisions to make that trip, but in effect we actually do. Put that same huge selection online, and all of those decisions are indeed conscious. Even though grocery shopping is a repetitive, list-based task, the in-store noise of all those products that aren't on your list gives you essential cues for finding the ones that are, and reminds you of the ones that weren't on your list but that you still need. That's even before you get to the detail level, where all the other sensory cues tell you which bunch of bananas is just right for you. So despite all the extra effort and hassle involved in going to the store in person, it still works better because of, not in spite of, the patterns of extraneous noise you have to process to get the job done.
To account for the role of noise within the essential skill of pattern recognition, we need to remind ourselves how complex seemingly simple tasks really are. Visually reading a scene, whether it’s a moonscape, a children’s book illustration, a grocery store, or a redesigned website, is an inherently complex task. Whenever people are faced with complexity (i.e., all day, every day), they use pattern recognition to identify, decipher, and understand what’s going on instantly, instead of examining each component individually. The catch is that all of the valuable consumer thought processes we want to address–understanding, passion, persuasion, the decision to act–are complex.
However, the research we use to help us design for these situations usually tries to dismantle this complexity. It also assumes a user who is actually paying attention, undistracted, in a clean and quiet environment (such as a market research facility), and cares deeply about the topic. Then we “clean” the data we collect, in an attempt to remove the noise. And getting rid of noise destroys the patterns that enable people to navigate those complex functions. So we wind up relying on an approach that does a poor job of modeling the system we’re trying to influence.
The challenge is to overcome the seemingly paradoxical notion that paying attention to factors completely outside our topic of interest actually improves our understanding of that topic. Doing so requires acknowledging that our target audience may not care as much about something as we do, even if that topic represents our entire livelihood. It requires a broader definition of the boundaries of what that topic is, and including the often chaotic context that surrounds it in the real world. It also requires a more than casual comfort level with ambiguity: Truly understanding complex systems involves recognizing how unpredictable, and often counterintuitive, they really are.
This is why ethnographic research is so popular with all kinds of designers. The rich context ethnographies offer is full of useful noise: the improvising people do to actually use a product, the ancillary details that surround it, and the unexpected motivations a consumer might bring to its use. These are all easier to access via a qualitative, on-location approach than via a set of quantitative crosstabs or from behind a mirror watching a focus group. It’s also a powerful human-to-human interface, in which the designer uses their innate pattern-recognition capability to analyze patterns in user behavior.
What often gets overlooked is the role noise can and should play in quantitative research. Most designers avoid quantitative research because of the clinically dry nature of the charts it produces, and the often false sense of authority that statistically projectable data can wield. However, only quantitative research can reveal the kinds of perceptual patterns that are invisible to qualitative methods, and the results needn’t be dry at all. The solution is to appropriately introduce the right kind of noise to quantitative research, to deliberately drop in the necessary telephone poles, trees, and haze that allow those higher-level perceptual patterns to be seen and interpreted.
How audio dithering works.
Fortunately, there’s already a model for this. When analog music is digitally recorded, some of the higher highs and lower lows are lost in the conversion. Through a process called dithering, audio engineers add randomized audio noise to the digital signal. Strangely enough, even though the added noise has nothing to do with the original music, adding it actually improves the perceived quality of the digital audio file. The noise fills in the gaps left by the analog-to-digital conversion, essentially tricking your ear into hearing a more natural result. The dithered audio isn’t actually more accurate; it just sounds better, and that matters more than accuracy. Returning to our opening examples, the moonscape was in dire need of dithering, while the Look-Alikes scenes were already heavily dithered. And the real world in general is heavily dithered.
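The mechanism is easy to see in a few lines of code. Below is a minimal, illustrative sketch (plain Python with made-up numbers, not real audio engineering): a signal detail smaller than one quantization step is erased by a bare quantizer, but survives on average once triangular (TPDF) dither noise is added before quantization.

```python
import random

STEP = 0.25  # quantizer step size: one "LSB" of the digital encoding (illustrative value)

def quantize(x):
    """Snap a sample to the nearest quantization level."""
    return round(x / STEP) * STEP

def average_quantized(level, dither, n=20000, seed=42):
    """Quantize a constant signal n times and return the mean output."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        # TPDF (triangular) dither spanning +/- one step, or no dither at all
        d = (rng.random() - rng.random()) * STEP if dither else 0.0
        total += quantize(level + d)
    return total / n

# A detail smaller than one quantization step (0.1 < 0.25):
print(average_quantized(0.1, dither=False))  # 0.0: the sub-step detail is erased
print(average_quantized(0.1, dither=True))   # ~0.1: averaging over the noise recovers it
```

Without dither, the quantization error is a deterministic function of the signal, which your ear hears as structured distortion; the random noise decorrelates that error, which is why the dithered recording sounds more natural even though each individual sample is noisier.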
So, for quantitative research aimed at guiding the design process, the trick is to value meaning above accuracy. Meaning can be gleaned from the noise you add to the quantitative research process by including metrics outside the direct realm of your topic area. That means considering what else is adjacent to that topic area, acknowledging the importance of respondents’ indifference as well as their preferences, and recognizing what kinds of potentially irrational motivations lie behind respondents’ approach to the topic, or to the research itself.
At Method, we’ve developed a technique for observing these perceptual patterns in quantitative data by using perceptions of brands far afield of the category we’re designing for. Essentially, it’s a dithering technique for brand perceptions. This technique often displays an uncanny knack for generating those hiding-in-plain-sight aha moments that drive really useful insights. There are doubtless many other approaches you can employ once you make the leap that acknowledges the usefulness of noise in your analysis.
But no matter what format of research you use in your design development process (including no formal research at all), there are some guidelines you can follow to allow the right amount of useful noise to seep into your field of view, so that your final product does not wind up being missed on the moonscape of the marketplace:
• A LITTLE HUMILITY WORKS WONDERS.
Recognizing that you’re not the center of your target audience’s universe allows you to understand how you fit in. Be sure to take honest stock of just where your target audience places your topic area on their list of priorities.
• STEP BACK FAR ENOUGH TO ALLOW PATTERNS TO EMERGE.
No matter what metrics you’re using, consider looking several levels above them–or next to them–to identify patterns that are impossible to see when you’re too close to the subject.
• GAUGE THE LEVEL OF EXPERTISE OF YOUR TARGET AUDIENCE.
How familiar is your target audience with your subject? Are they experts or novices, and how are you defining that? Generally, the higher the level of expertise, the higher the dependence on pattern recognition. Novices carefully and slowly compare details; experts read patterns quickly and act decisively.
• CHECK THE DATA DUMPSTER BEFORE EMPTYING.
No matter where your data comes from, think about what has been omitted. Was that distracting noise that was tossed, or crucial context?
By taking in the entire picture–instead of isolating a single data point–you open up opportunities for understanding the motivations, reasons, and outlying factors that shape the data. Contrary to the popular practice of stripping out noise, noise is in fact critical to generating the deep insights that allow us to design better and more effective brands, products, and services.