This is a working draft of some thoughts concerning design I’ve been throwing around in my head for a while now. In my undergraduate curriculum, we constantly talk about how design will change the world but we rarely critically examine our practice. Special thanks to Dan Lockton for all his help and support.
Opacity in Interfaces and Relationships
Visual interfaces mediate between users and their goals. Interacting with an interface triggers another action; clicking an icon opens a file, for example. The activities occurring beneath the surface, however, are not always immediately obvious to the user. As such, interfaces are, for the most part, opaque. They are inherently a representation or abstraction of some other process. Clicking the “Place your order” button on the Amazon checkout page triggers a flurry of activities: money is exchanged, a warehouse employee has a new order to fulfill, a shipping label is printed, your suggestions algorithm is updated. All the consumer notices is an email notification, a line on their bank statement, and a package a few days later. By choosing to expose or conceal certain processes, designers wield considerable influence and control over a user’s perception. There is often an information asymmetry between user and service provider, one that current design practice arguably does too little to address.
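The asymmetry is easy to see in code. Below is a hypothetical sketch of a checkout handler (every name is invented for illustration, not Amazon’s actual system) in which a single click fans out into several side effects, only one of which is ever surfaced back to the user:

```python
# Hypothetical sketch: one visible action, many concealed side effects.
def place_order(cart, user):
    # Everything the click sets in motion behind the interface:
    hidden = [
        ("charge_payment", user["payment_method"]),    # money is exchanged
        ("queue_fulfillment", cart["items"]),          # warehouse order
        ("print_shipping_label", user["address"]),     # logistics
        ("update_recommendations", cart["items"]),     # profiling
    ]
    # The only process the interface chooses to expose:
    visible = [("send_confirmation_email", user["email"])]
    return visible, hidden

visible, hidden = place_order(
    {"items": ["book"]},
    {"payment_method": "visa", "address": "123 Example St", "email": "a@example.com"},
)
print(len(visible), len(hidden))  # the user sees 1 of 5 processes
```

The design decision is precisely which entries end up in `visible`; nothing about the interface forces the other four to stay hidden.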
Digital products have immense power in their efficiency and flexibility. Instead of walking out to a busy street to hail a cab, one can be guaranteed a ride without stepping out of their home. The ease and convenience of digital often belies the complex underlying systems and infrastructures that support it. The immediate convenience of tapping a button to get a ride is hard to deny, but the impact on local transit systems and working wages is significantly less obvious. Everything that exists and happens in the digital world has some effect in the physical world. That isn’t to say that on-demand rides shouldn’t exist or that getting a ride ought to be a difficult and inconvenient task; rather, designers should be asking not only how to create a pleasant and streamlined experience but also how to encourage users to critically evaluate their actions and choices. How can critical thought be integrated into a seamless experience? It is necessary to understand and question what is communicated to the end user. Admittedly, asking users to contemplate systems and understand their own role within them is a tall order, but if individuals are never given the opportunity to see and understand underlying processes, it is nearly impossible for them to critically evaluate those processes. Lou Downe writes, “When we don’t know how a thing works we make it up,” and uninformed assumptions can only lead to unexpected misunderstandings and mistakes.
Usability and efficiency are generally the standards used to evaluate digital products and experiences. Different methods exist for measuring and quantifying usability, but none begin to address transparency. Transparency is generally understood to be about openness: the ability to see all decisions being made. While designing, it may be helpful to think of transparency as the clarity of communication between user and provider. What does the provider know that the consumer does not? What does the consumer understand about the provider? If there is an information imbalance, can the relationship become more balanced? Matt Wade explains that design should, by default, be questioning relationships. Why does a particular consumer/provider relationship exist in its current form? Can it be modified? As Anthony Dunne and Fiona Raby point out, if the system a problem is born out of is not challenged, the solution will very likely only continue contributing to that system. Transparency can begin to encourage change in the system.
Absolute transparency is not necessarily the answer, but designers need to consider what is or is not exposed to the end user and the justification behind such decisions. A button does not just lead to another screen. Data is captured, information is exchanged, a series of other actions are set into motion, often unbeknownst to the user. What actions can users take on a particular screen? What happens when they interact with said screen? What do they see or not see?
There are plenty of visualization techniques, such as service blueprints and user flows, that begin to answer these questions surrounding visible touchpoints, but they mostly enable the designer to analyze and optimize each interaction. They do not adequately address the transparency of interactions and relationships. In order to design for transparency, it may be helpful to use the association map pictured below. Focusing on invisible versus visible and abstract versus literal orients the product and gives the designer a space to begin questioning. Whether used by the designer or the consumer, visualizing where the product sits on the coordinate plane allows for insights into the transparency of a design.
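The map itself is conceptual, but its two axes can be treated as a simple coordinate plane. Here is a minimal sketch, assuming an x-axis running from invisible (−1) to visible (+1) and a y-axis from abstract (−1) to literal (+1); the placements themselves are placeholders invented for illustration:

```python
# Association map as coordinates: x = invisible (-1) .. visible (+1),
# y = abstract (-1) .. literal (+1). Placements are illustrative only.
association_map = {
    "surveillance": (-0.6, 0.2),       # hard to see, fairly literal (a camera)
    "machine learning": (-0.8, -0.7),  # invisible code, abstract "learning"
}

def quadrant(term):
    """Name the region of the map a term falls in."""
    x, y = association_map[term]
    side = "visible" if x >= 0 else "invisible"
    mode = "literal" if y >= 0 else "abstract"
    return f"{side} / {mode}"

print(quadrant("machine learning"))  # invisible / abstract
```

Plotting several products or terms this way makes disagreements between participants, or between designer and consumer, immediately visible as distance on the plane.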
The value of considering whether a product is invisible or visible to evaluate transparency is fairly straightforward; if something can be seen by consumers, they can react accordingly. Depending on the context, the designer may want some processes to be visible, others to be invisible. Seeing hotel staff deliver fresh dry cleaning is indicative of a luxurious experience, but guests might not be as enchanted by the actual dry cleaning process done somewhere in a factory.
One way to think about creating visibility and transparency is through intentionally designing glitches. Jonathan Hanahan writes, “The designer should no longer be required to hide and destroy glitches, but design them outright, crafting intentional disruptions to experiences as means to reveal and understand their ramifications.” The glitch is a moment of hypervisibility where the previously invisible is pushed to the forefront, encouraging the participant to reevaluate their experience. Take online shopping: in the U.S., Amazon is the go-to e-commerce site for many because of convenience, speed, and price. Third-party sellers are able to sell their wares on the platform, but the buyer never leaves the Amazon ecosystem. Even though the order is fulfilled by the third party, from the buyer’s point of view, it’s a seamless Amazon experience. However, one particular purchase from a third-party seller stands out because of a glitch in an otherwise seamless experience. Housing Works is a nonprofit based in New York City whose model includes selling used books on Amazon to fund services for those living with and affected by HIV and AIDS. Slipped into each book sold is a bookmark explaining the organization’s mission. The bookmark disrupts the typical Amazon purchasing experience while leaving the core experience intact; the consumer is reminded that they are not purchasing a book from Amazon but from Housing Works. Knowing that their dollars are funding a charitable cause, consumers may shift their buying practices. By increasing their visibility, Housing Works has unintentionally practiced seamful design.
However, visibility alone is an insufficient parameter for evaluating a design’s transparency, as visibility and legibility are not necessarily equivalent. Abraham Moles makes the distinction that legibility, unlike visibility, grants one agency, the ability to take action. Terms of service agreements, for example, are visible to the user. However, they are not always legible. Asking whether a design is literal or abstract begins to provide insights into one’s perception of the product: How well does a website’s terms of service represent the relationship between a user and service provider? What does each party understand from it? What does each party have to gain or lose from it? Understanding the user’s perception of a design or product is inextricably linked to understanding the relationship formed between stakeholders. It doesn’t take technical knowledge to use a digital product, but without fully understanding the mechanisms behind a product and the relationships it creates or reinforces, it is difficult to properly and thoroughly evaluate decisions. If users thought of the cloud as someone else’s computer or a distant remote server and not some floating omnipresent mass, how would their interactions with the cloud change? Would users be more careful about what they choose to leave on cloud storage? Would they weigh the trade-offs between convenience and security more? By understanding how consumers perceive the cloud, designers can better wrestle with the implications of designing “for the cloud.”
Visibility is not necessarily better than invisibility, and literal is not better than abstract by default. These binaries exist simply to assist in evaluating a design. Which end is preferable is left to the designer’s discretion. In the examples below, I presented a series of words related to technology to students at CMU from various disciplines, including art, computer science, and design, and asked them to place each word on the map based on their understanding of it. Their explanations, shaped by their unique experiences and knowledge, shed light on how technology is perceived and suggest potential design interventions.
A described surveillance as “murky” and placed it towards invisible. Though one may be able to see the surveillance camera, who is behind the camera is not always obvious. It is unclear exactly what data is being collected and how it is being used. The surveilled are left only with a sense of paranoia. The singular camera becomes the visible touchpoint that represents the systems of ownership, capitalism, and control, among others, that enable and support surveillance as a common practice.
B also noted surveillance as invisible, but closer towards neutral, as she felt that surveillance is becoming even more invisible. In a traditional sense, surveillance through a CCTV camera is quite literal and visible, but surveillance is now less about capturing one’s likeness on camera in a certain location. She described surveillance as “how accessible you are to whoever’s trying to find you.” Photo metadata, social media check-ins, search engine histories, and more can all be used to paint a fairly robust profile of an individual’s identity, activities, and habits. However, as demonstrated by burglaries and drone strikes, the accuracy of these assumptions varies.
Designers have an opportunity to increase the visibility of surveillance to properly express the relationship between the parties involved. Depending on the context, designers could reveal the types of surveillance that occur (facial recognition technology, wiretapping, biometrics), who is performing the surveillance (corporations, governments, schools), and the implications of using such technologies. Solutions in this space might not necessarily come from those designing surveillance systems, but from designers taking on a more activist role.
A is a computer science major and placed “machine learning” towards “invisible,” as, in her opinion, machine learning is code. Code is essentially invisible to the end user; people just see the program they use. B placed it just above invisible because we often interact with products utilizing machine learning, whereas C placed it at invisible because she felt it is not always obvious when we are interacting with the products of machine learning.
A felt that machine learning was “abstract” because at the end of the day, “literally it’s an algorithm.” She also noted that machine learning is a bit of a buzzword, acting as a keyword or stand-in for various technologies. B and C felt that it was quite literal; the machine is learning. This discrepancy is rather telling; “algorithms” and “learning” have very different meanings. “Algorithms” are a set of rules, while “learning” tends to suggest growth and change.
By focusing on the machine “learning,” the individual or team that wrote the algorithm has essentially been rendered invisible and is abstracted by the machine; the human element is erased. We can select which datasets to feed an algorithm and what parameters to train it on, but how the algorithm made a decision is concealed. Many aspects of our lives, from things as seemingly trivial as the order of content on social media newsfeeds to something as important as a loan application, are decided by machine learning outcomes. When election outcomes can be swayed by machine learning, shouldn’t the general public be aware of how their realities are being constructed and curated by such technologies?
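The asymmetry can be made concrete with a toy model. In the hypothetical sketch below (a minimal perceptron invented purely for illustration, not any product’s real algorithm), the designer chooses the dataset and the training parameters, yet the learned weights that actually drive each decision are just a list of numbers with no self-evident meaning to the end user:

```python
# Toy perceptron: the chosen inputs are visible, the learned "reasoning" is not.
def train_perceptron(data, labels, epochs=20):
    """Perceptron learning rule with integer weights; last weight is the bias."""
    w = [0] * (len(data[0]) + 1)
    for _ in range(epochs):
        for x, y in zip(data, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x + [1])) > 0 else 0
            err = y - pred
            w = [wi + err * xi for wi, xi in zip(w, x + [1])]
    return w

def predict(w, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x + [1])) > 0 else 0

# What the designer chooses: the dataset (logical AND) and the parameters.
data = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 0, 0, 1]
w = train_perceptron(data, labels)

# What the user sees: a decision. What stays hidden: w itself.
print(predict(w, [1, 1]))  # the decision is visible; the reasoning is not
```

Even in this four-example case, nothing in the output explains why the model decided as it did; in production systems with millions of weights, that opacity only deepens.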
There is plenty of technical literature on evaluating machine learning algorithms, but how can the general public begin to understand how machine learning decisions are made? If the focus is placed on “learning”, we could use Bloom’s Taxonomy, a classification of learning objectives, as a starting point. It might not seem particularly appropriate to use a human model to evaluate machine behavior, but machine learning is often performing tasks previously completed by humans. Machine learning fulfills some aspects of Bloom’s Taxonomy but outright fails others. By focusing on improving and emphasizing these aspects, designers can potentially bring more transparency to machine learning, partially by abstracting it into a more human model. Metaphors are necessary to aid in explaining and understanding, but we need to consider what these reductions inadvertently assume or omit.
David Cole notes that interface design has more or less reached maturity. Naturally, interfaces for various products begin to resemble one another, which is not necessarily bad. After all, utilizing common UI patterns does allow for greater ease of use, but if design is going to claim to be human centered, designers need to go beyond human factors and usability and be sensitive to how a product will impact its users, the relationships being created and perpetuated, and the agency of the user. It is necessary to consider the transparency of the ties between the user and the service provider, ensuring users have the agency to critically understand and evaluate said relationship. As described by Dan Lockton, through their practice, designers are able to:
- understand the world
- understand people’s understandings of the world
- help people understand the world
- help people understand their agency in the world
- help people use that agency in the world.
Part of helping people understand their agency in the world is to integrate transparency into designs. Transparent relationships and seamful design are not always synonymous, but by highlighting seams in a service or experience, designers can encourage users to assess and better understand their worlds.