How to Balance Human and Machine Agency in AI-augmented Experiences

Making the case for collaboration with intelligent systems.

Silvia Podestà
13 min read · Sep 24, 2023

Throughout its history, the digital economy has flagged up the notion of 'user empowerment' to back up innovative value propositions and novel experience formats.

This has fundamentally gone in two ways.

One sets out to simplify the user journey, making it swift, linear and smooth. From the convenience of text autocomplete to e-shopping experiences where the time between the thought in your mind and the parcel dropped at your doorstep nears the millisecond, this approach is all about providing users with ways to cut down on the steps, to shorten the journey. It sticks up for the archetypical, overstressed Netizen who cries: "My attention is finite, don't make me think!"

Just as important, the other approach is premised on the opposite concept, as it empowers users by adding more responsibilities, more means of exerting control over their experiences. This is certainly the case with financial apps, which opened up a whole new world of pecuniary autonomy in an area previously exclusive to financial advisory and banking services, by and large less accessible to the general public. Travel apps empower users to plan their vacations, and in doing so they allow for big savings, precisely because they take intermediaries out of the equation. For travellers, though, this means more things to take care of directly.

The common denominator between these two approaches is user agency — devolved and economised in the first case, amplified and overstretched in the second. Value is achieved through a process that can be either subtractive or additive, though it very often involves both, a dynamic not dissimilar to that of social constructs like management or leadership, where power is usually expressed through varying degrees of delegation and active control. In the digital world, delegation rhymes with automation, while active agency usually gets dubbed "user augmentation".

I've recently spoken about the interplay of user and machine agency in my talk at WIAD Café last July, when I made the case for some changes of perspective in the design of AI-augmented user journeys, geared towards the notions of human-machine collaboration and co-creation. These notions underpin IBM's approach to building AI experiences, so I believe it's extremely valuable to share these thoughts, while also acknowledging that, given the frenzied development of this new field, some of them could be tweaked or further evolved in the near future.

But let's begin with the elephant in the room.

What agency means and why it matters

The notion of agency has always been integral to the UX discourse. Personally, I'm reminded of it every time my phone lowers the volume when my favourite song is on — it's for my health, it says.

In 2020, the California Privacy Rights Act came up with a definition of "dark pattern": "a user interface designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision making, or choice". At least 4 out of the 10 usability heuristics are centred around the idea of granting users control over their experience and supporting their decision-making process through appropriate content and interactions.

And the importance of user agency has also been referenced in studies on online communication effectiveness. One such study investigated the interplay between user engagement, content and algorithmic power on social media, noting that agency-enhancing affordances on social media — things like sharing, commenting, or basic customisation options — can de facto increase the persuasiveness of messages on these platforms.

All of this makes it perfectly understandable why worries today are spiking around users' ability to retain control over their experiences when AI systems are involved. It's fair to say that part of these concerns stems from our limited understanding and the shroud of mystery that, for many of us, still surrounds these technologies, as David Beer bluntly puts it in BBC's column, Machine Minds.

For technologists who work with AI, the perspective is a more pragmatic one. AI is primarily a tool with incredible potential to augment human capabilities and improve business processes, eliminating inefficiencies and providing value in many forms, from the generation of key knowledge and insights to critical cost savings and better enterprise UX(1).

But the question of whether, and to what extent, more intelligent and autonomous digital systems are going to erode our decision-making is a legitimate and loaded one, intertwined as it is with ethical implications and issues around regulation and compliance.

Another way to look at the concept of agency is from an anthropological perspective. Artificially intelligent systems bring non-human actors onto the scene. How humans will relate to these entities, and how user journeys will unfold throughout the interplay of human and artificial agents, is a matter of debate and possibly one of design's biggest challenges today.

Kim Bartkowski — Service Blueprint for the Cognitive Enterprise. This tool is used by design teams at IBM to effectively account for the interplay between human and artificial agents.

From an enterprise and customer experience perspective, this relationship is crucially a matter of trust and trust-building. Studies on voice assistants, for instance, suggest that consumers' trusting beliefs in the assistant influence their satisfaction with their shopping decisions(1).

The problems in designing for AI

AI is set to complicate things for designers in many different ways. In the last decade, intelligent systems have basically kept their heads down while quietly revolutionising our economy from behind the scenes. Agentive technologies have taken on enterprise processes, streamlined workflows and made activities easier and more efficient, accelerated the retrieval of information in knowledge and research fields, anticipated disruptions (with use cases including predictive maintenance, risk mitigation and fraud detection) and, obviously, jazzed up shopping experiences through recommendations. In many of these instances we are worlds away from a ChatGPT, as the agentive technology is practically invisible, often missing an end-user interface in the canonical sense of the notion.

Making AI an "agentive actor" is problematic from a design perspective because it really seems to "throw off many of the assumptions that we've taken for granted in the last 40 years of designing software(2)".

First of all, AI is probabilistic. This is a problem because it produces what we call generative variability: the same prompt, or input, can produce a different outcome each time. This goes against the idea of high-level invariance, which has always been considered intrinsic to good information architecture because it provides consistency.
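To make generative variability concrete, here is a toy sketch in which a weighted random choice stands in for a real generative model (a pure illustration, not how any production system works): the same prompt can yield a different completion on each call, which is exactly what breaks the invariance designers used to rely on.

```python
import random

def generate(prompt, temperature=1.0):
    # Toy stand-in for a generative model: it samples the next word
    # from a weighted vocabulary instead of always picking the top one.
    vocab = {"blue": 0.5, "azure": 0.3, "grey": 0.2}
    words = list(vocab)
    weights = [vocab[w] ** (1.0 / temperature) for w in words]
    return f"{prompt} {random.choices(words, weights=weights)[0]}"

# Identical input, repeated calls: the set of distinct outputs will
# typically contain more than one completion.
outputs = {generate("The sky is") for _ in range(20)}
print(outputs)
```

Lowering the (hypothetical) `temperature` parameter concentrates the weights and pushes the sketch back toward deterministic behaviour, which is roughly the trade-off real systems expose.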

Additionally, AI is time-bound — one common analogy likens AI to a toddler, as artificial systems learn from their environment over time. As the model and its knowledge of the world change, outputs can change as well, as I'll explain later.

Another problem lies in the potential biases of the data and the model, which brings in the issue of accounting for transparency and trustworthiness in the design process of AI-augmented experiences.
And, as already mentioned, ensuring human agency at key points of the journey is another huge matter.

Changing our approach to AI-augmented experiences: collaboration and balance

The first thing worth considering is that AI is a tool: a tool meant to potentiate human capabilities rather than displace humans, a cornerstone of human-centric AI.

A second, crucial consideration is the balance of agency. Throughout my work, I've seen how user autonomy and control can prove to be a double-edged sword when it comes to maximising a system's performance. That's what experience designers most need to get a grip on.

Let's say you were designing a recommendation system to help an internal team optimise decisions with a direct impact on inventory management. Said system would work out the best possible combinations from a huge pool of products to create bespoke bundles to send to customers, optimising suggestions for the best price point, product expiration dates and other key parameters. Crucially, were the system's suggestions to be excessively tweaked and edited, the tactical significance and accuracy of the algorithm would be impaired. To do its best job, the solution would need to be taken at face value, with users' personalisation kept to an acceptable minimum.

To me this real-case example perfectly shows how human and machine agency are tightly woven into the fabric of AI-augmented experiences. The purpose of design is to foster a mutual collaboration and trace a journey where users can leverage the abilities of the technology as if it were their best collaborator.
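For illustration, the core of such a bundling system could be sketched as below. The scoring criteria and weights are hypothetical, not the algorithm from the actual project: bundles are scored on closeness to a target price point, penalised for products near expiry, and the top-scoring combination wins.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Product:
    name: str
    price: float
    days_to_expiry: int

def bundle_score(bundle, target_price):
    # Hypothetical scoring: prefer bundles whose total price is close
    # to the target price point, and penalise near-expiry products.
    total = sum(p.price for p in bundle)
    price_gap = abs(total - target_price)
    urgency = sum(1.0 / max(p.days_to_expiry, 1) for p in bundle)
    return -(price_gap + urgency)  # higher is better

def best_bundle(products, size, target_price):
    # Exhaustively score every combination of the given size.
    return max(combinations(products, size),
               key=lambda b: bundle_score(b, target_price))
```

Every manual override a user applies on top of `best_bundle`'s output moves the result away from this optimum, which is the double-edged-sword dynamic described above: more user control, less algorithmic accuracy.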

Focal points to design for a healthy co-agency

If the idea of digital experiences as collaborative spaces, rather than simple places made of information(2), is to gain currency, the goal of designers will be to ensure that the AI system can perform its intended tasks effectively and efficiently while being easy for humans to use and understand.

The following are my essential quickies for starting to intentionally approach this exciting new field. Before diving straight in, to fully understand the spectrum of interventions that designers and information architects are expected to bring to bear, it's important to get a sense of what the information architecture of an AI system looks like. In this case I'm using "information architecture" in a slightly different sense than is intended in the design realm; here the term has a more technological connotation.

In the IT context, information architecture is a conceptual framework containing the building blocks of an information layer that supports business applications and end-user experiences.

It's essential that designers and information architects get their heads around the behind-the-scenes that powers everything that translates into a user interaction. This implies an understanding of what IT systems and data flows look like.

Now, to the quickies.

1. Reinforce human agency in search experiences

One of the things AI is best known for is its power to support users' retrieval of information and research, particularly over large collections of documents, data and information. A typical use case: call-center agents rely on a knowledge base to help them quickly answer common customer questions, but often struggle to sort through the extensive resources available. A solution is to deploy generative AI, foundation models and machine-learning capabilities that can generate accurate and useful responses for frequently asked questions.
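As a rough sketch of that call-center use case, the retrieval step could look like the following, where a plain keyword-overlap ranker stands in for the embedding and foundation-model machinery a real deployment would use (the knowledge-base entries are invented):

```python
def tokenize(text):
    # Naive tokenizer: lowercase words as a set.
    return set(text.lower().split())

def top_answers(question, knowledge_base, k=3):
    # Rank FAQ entries by word overlap with the agent's question and
    # return only the k best ones, not the whole collection.
    q = tokenize(question)
    ranked = sorted(knowledge_base,
                    key=lambda entry: len(q & tokenize(entry["question"])),
                    reverse=True)
    return ranked[:k]

kb = [
    {"question": "how do I reset my password", "answer": "Use the reset link."},
    {"question": "how do I change my billing plan", "answer": "Go to Billing."},
    {"question": "where can I download invoices", "answer": "Account > Invoices."},
]
print(top_answers("customer wants to reset a password", kb, k=1))
```

The design question discussed below, namely how narrow the retrieval should be, maps directly onto the `k` parameter here: `k=1` gives the single best answer, a larger `k` preserves more of the exhaustive-research pattern.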

As this example illustrates, the value of AI lies in supporting a particular kind of information need: finding the right answer, or a few good answers. As AI accelerates information retrieval, it by nature lends itself less to supporting exhaustive research patterns (searching for everything) — that would simply not be the use case for which you would want to implement the AI in the first place. However, one good question you need to ask yourself as a designer is: how narrow should the retrieval be? How many information-seeking patterns should my design support? The answer can lead to different UX and layout propositions.

In some cases, you might opt for a soft transition into a new kind of experience, because you don't want to disrupt entrenched user habits too abruptly. Consider that your users may be familiar with a certain way of getting jobs done. In one recent project for a media organisation, my team tried to improve journalists' newsroom workflows by helping them retrieve and select the most relevant documents to craft compelling articles. The idea was that the AI would help scout a huge public archive, thus replacing the journalists' taxing labour of checking each and every document to detect potentially juicy content.

Having an artificial intelligence crunch the mighty archives and summarise the long source documents (using IBM's new platform Watsonx) tremendously sped up their creative process and gave users time to work on the idea behind the piece. In terms of design and UX decisions, we were facing a dilemma: should the interface display only a selection of what the system deems to be the most relevant documents? Would users trust the system enough to serve them the best content, or would they rather carry on with their habits — to not leave any leaf unturned?

When discussed directly with future users, questions like the one above can help frame a course of action. In the media newsroom case, users were accustomed to perusing a table — a UI component supportive of quick comparisons and scanning — and to hastily glancing through the rows to make sure they wouldn't leave important documents unchecked. In the new situation, ideally we would want to spare them that laborious and error-prone process. But while a selection of relevant search results could be better accommodated in rich-media cards, we considered a transitional layout, where highlighted results co-existed with the more familiar table.

While being inclusive of all types of information-seeking behaviours, this in-between setup may work better while users' trust in the new system is still developing — the less we trust someone, the harder it is for us to delegate. Another thing to consider is that, as the system gets better and better, its results will likely change in terms of content, accuracy, granularity of insights, etc. Your layout may evolve accordingly, to express and account for the increased capabilities of the AI.

2. Provide users with means to give feedback and fine-tune results

Let's focus on the outcomes of the AI. One thing that balances human and machine agency is providing users with the means to act upon results. This is particularly relevant in the context of generative AI. You should consider two different modalities of user agency:

a) Explorative mode: this helps users explore a space of possibilities, a variety of outcomes, helping them try out and play around with different things. DALL·E's filter gallery and its "surprise me" feature both go in this direction of playful experimentation.

b) Fine-tuning mode: this is when users get more into the detail of the outcome, or the artefact that has been generated by the AI, with the purpose of improving it or making it definitive. GitHub's Copilot, or the GPT-powered editor that helps fix scripts on the video platform ElAI, are examples of tools supporting this very modality.

(Source: UX for AI, IBM).

Feedback, too, is really important. For instance, what better way for users to exert their agency than giving them the opportunity to influence the development of the AI model itself?

Proposed cards for AI-generated summaries through Watsonx. Users can vote on the relevance of the content through a thumbs-up/thumbs-down mechanism, thus educating the evolving system.
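A minimal sketch of such a feedback loop might look like this (the field names and the aggregation are illustrative assumptions, not Watsonx's actual API): votes are logged per summary and later aggregated into a relevance signal the team can use to retrain or re-rank.

```python
import time

def record_feedback(summary_id, user_id, vote, store):
    # Log a single thumbs-up / thumbs-down event (append-only store).
    if vote not in ("up", "down"):
        raise ValueError("vote must be 'up' or 'down'")
    store.append({"summary_id": summary_id, "user_id": user_id,
                  "vote": vote, "timestamp": time.time()})

def relevance_signal(store, summary_id):
    # Aggregate the votes for one summary into a 0..1 training signal;
    # None means no feedback has been collected yet.
    votes = [e["vote"] for e in store if e["summary_id"] == summary_id]
    return votes.count("up") / len(votes) if votes else None
```

Keeping raw events rather than a single counter is a deliberate choice in this sketch: it lets the team later weigh votes by recency or by user, as the evolving system's needs change.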

3. Include AI explainability in your design

Explainability is a paramount notion in AI design, pivotal for creating trustworthy, ethical and human-centric systems, and possibly the biggest human empowerment of all.

Essentially, it is about letting users comprehend why a machine has taken a certain decision or produced an outcome, providing them with the knowledge and autonomy they need to make better decisions about whether to trust an AI model, and eventually the option to challenge or refuse a particular outcome. People are entitled to understand how an AI arrived at a conclusion, especially when those conclusions impact decisions about their employability, their creditworthiness, or their potential. Needless to say, the explanations provided need to be easy to understand.

From an organisation's perspective, the bottom line of explainability rests on its power to build trust and confidence when putting AI models into production, and to mitigate compliance, legal, security and reputational risks.

So the job for designers is to translate the model's explainability into the user experience and incorporate agency-enhancing affordances into the design that make clear why a decision has been taken and what users can do about it.

In The Titanic, an interesting research project by the IBM Academy of Technology, researchers explored the various facets that must be considered when thinking about rendering an AI model explainable — not just from a tooling perspective, but also from that of the user experience that ultimately empowers users. In this thought experiment, they imagined a fictitious scenario and an extreme use case, where an algorithm could determine the likelihood of Titanic passengers getting a life raft on the sinking liner.

One key insight was that different personas have different needs for explainability. One effective way to look at this is the onion analogy: explainability is a multi-layered information system in its own right, where different personas can access the different content, data and information necessary for them to make informed decisions around the AI outputs, following a sort of progressive disclosure pattern.

Consider the difference between a data scientist, a regulator and a private consumer who's been affected by a decision: there are different degrees of understanding and detail that you may want to include for each of these personas.
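The onion analogy could be modelled, very schematically, as ordered explanation layers with progressive disclosure. The personas and layer contents below are purely illustrative, not the ones from the Titanic project:

```python
# Layers ordered from shallowest (outermost) to deepest (innermost).
EXPLANATION_LAYERS = [
    {"audience": "consumer",
     "content": "plain-language reason for the decision"},
    {"audience": "regulator",
     "content": "policy mapping, audit trail, fairness metrics"},
    {"audience": "data scientist",
     "content": "feature attributions, model card, data lineage"},
]

def explanation_for(audience):
    # Progressive disclosure: a persona sees its own layer plus every
    # shallower one, and is never forced through deeper detail.
    idx = next(i for i, layer in enumerate(EXPLANATION_LAYERS)
               if layer["audience"] == audience)
    return [layer["content"] for layer in EXPLANATION_LAYERS[: idx + 1]]
```

The consumer gets only the plain-language layer, while the data scientist can peel the onion all the way down, which is the persona-dependent disclosure the research points at.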

Another important thing to consider when planning for explainability is that different business and knowledge domains have different needs — and the content of the explainability assets may vary from text only, to a combination of text and visual elements, or even audio/rich media.

Hybrid explainability of the AI output (visual + text annotations) was users' preferred choice in healthcare settings — Source: The explainability paradox: Challenges for xAI in digital pathology

Different domains (e.g. healthcare, finance) require different explainability assets (visual, text, hybrid, audio…).

One thing that struck me is the compelling evidence around the effectiveness of conversational elements when pursuing explainability online. I find an interesting short-circuit between explainability as a means to balance human-machine agency, and the fact that more interactivity (through conversational features) seems to be by far the most formidable tactic for making an AI system explainable (Source: ALLURE: A Multi-Modal Guided Environment for Helping Children Learn to Solve a Rubik's Cube with Automatic Solving and Interactive Explanations). It confirms — if there still was any need — the natural meridian that runs between human and machine arbitration in valuable AI-augmented experiences.

References:

  1. The Role of Trusting Beliefs in Voice Assistants during Voice Shopping (DOI: 10.24251/HICSS.2021.495)
  2. UX for AI (IBM design 2023)
