Type

Consumer Application

Format

Desktop

Area

UX Research; UX Design

Employer

Lexmark

Tools

Figma, Microsoft Copilot

Team

Product Owner, Business Technology, Data Scientists

Assigning Personality to Generative AI


Background

In the fall of 2023, Microsoft began integrating artificial intelligence (AI) tools into its Microsoft 365 products. Lexmark was quick to begin experimenting with Copilot, a generative AI chatbot that can be used securely with a company’s own data.

We were fortunate to try its beta Agent features, which go beyond answering questions about the data to actually completing tasks for users. This was an amazing opportunity to experiment with some of the most promising AI helper tools coming out.


My Role

I was the sole designer on the project, working with a stakeholder group that consisted of a product manager, the business technology team that would implement the design, a pilot group of users from the data science team, and a consultant from Microsoft who could guide us on Copilot’s capabilities.


[Diagram: the intersection of user, business, and technology needs]

Our Approach

As part of the Connected Technology team, I was asked to explore what these features could look like in our internal chatbot. I was immediately inspired by articles I had read over the years about users’ perceptions of computational effort. In one study, researchers found that when completing complicated tasks, such as searching for a flight to purchase, people felt they received more value when they could see the work the algorithm was doing.
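To make that idea concrete, here is a minimal sketch of how a chat UI might stream an agent’s intermediate work instead of showing a silent spinner. The event shape and the searchFlights helper are invented for illustration; none of this is Copilot’s actual API.

```typescript
// Hypothetical sketch: surface the "work" an agent is doing instead of a silent spinner.
// The AgentEvent shape and searchFlights() helper are illustrative, not a real Copilot API.
type AgentEvent =
  | { kind: "progress"; message: string } // e.g. "Comparing 130 fares…"
  | { kind: "result"; payload: string };  // the final answer

async function* searchFlights(query: string): AsyncGenerator<AgentEvent> {
  yield { kind: "progress", message: `Searching carriers for "${query}"…` };
  yield { kind: "progress", message: "Comparing 130 fares across 6 airlines…" };
  yield { kind: "result", payload: "Cheapest nonstop: $312, departing 9:40 AM." };
}

async function run() {
  for await (const event of searchFlights("LEX to SEA")) {
    if (event.kind === "progress") {
      console.log(`[working] ${event.message}`); // rendered as transient status lines in the chat
    } else {
      console.log(`[done] ${event.payload}`);
    }
  }
}

run();
```

The point of the pattern is simply that each intermediate yield gives the user visible evidence of effort before the answer arrives.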


The one-week sprint plan:

Day 1: Problem definition and scoping; choose what to focus on
Day 2: Competitive review; brainstorm solutions; sketch ideas
Day 3: Refine idea; negotiate trade-offs; resolve conflicts
Day 4: Design and build; create artifacts
Day 5: Review and gather feedback; revise design; present resulting solution

Design Sprint
Maximum Value In Minimal Time

As a small project with a short timeline, I chose to work in a design sprint: a short, focused period of one week. This allowed me to follow a modified user-centered design process. As the only full-time resource on the project, I could devote all my time to it, connecting with the other stakeholder groups on an ad hoc basis to present my progress and gather feedback.


How Are Others Solving This Challenge?

Without the time or budget to conduct user research, I began with a competitive review to see what features users would expect when interacting with generative AI. It became clear that current products use a familiar chat-based interface, letting users interact as they would in a typical conversation. This insight allowed me to focus on designs that framed AI as a conversation partner rather than a new technology.

Should Agents Mimic People?
Pulling Clippy Out Of Retirement

Humans tend to anthropomorphize everything. We give our cars names, make movies about emotional robots, and we see faces everywhere when walking down the street. This need to apply human characteristics to inanimate objects could play an interesting role in how we design generative AI in the near future.


So how will we interact with this new technology? Will we be partners achieving goals hand-in-hand? Or will we relegate AI to being subservient helper bots? Would we even want AI to play a leading role in our lives? Although these questions were beyond the scope of my assignment, they helped me create a framework for how the Copilot features could be presented to users.

Exploration

I began exploring concepts by asking: how would it affect me if AI were an all-knowing, all-powerful helper? What if it were just a group of capable yet dim-witted assistants? Should it present itself as separate entities with varying skills? Ultimately I settled on a familiar metaphor: the odd couple, two very different personalities that work together to play off each other’s strengths, like C-3PO and R2-D2.



This was a way to represent the different types of problems AI would solve. One can answer any question or calculate the probabilities of multiple scenarios, while the other doesn’t speak, yet you trust it to fly your starfighter or deliver your most important messages.

This dual approach shaped the way I began to lay out the interface, requiring two distinct components that could work together or stand alone.


These explorations looked at how the two separate helper tools would work together to solve the user’s issue. As the conversation builds, the agents can be progressively disclosed based on topic complexity, as sketched below.
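As a rough sketch of that progressive disclosure, the snippet below surfaces the support agent only once a conversation starts to look task-heavy. The complexity heuristic, threshold, and agent names are all invented for illustration; a real system would rely on the model rather than keyword matching.

```typescript
// Hypothetical sketch of progressive disclosure: the support agent joins
// the conversation only once the topic looks complex enough to involve a task.
// The heuristic and threshold below are invented for illustration.
interface Turn {
  text: string;
}

const TASK_HINTS = ["schedule", "generate", "file", "order", "update", "create"];

function topicComplexity(history: Turn[]): number {
  // Naive heuristic: longer conversations and task-like verbs raise complexity.
  const taskTurns = history.filter((t) =>
    TASK_HINTS.some((hint) => t.text.toLowerCase().includes(hint))
  ).length;
  return history.length + taskTurns * 2;
}

function visibleAgents(history: Turn[]): string[] {
  const agents = ["main"]; // the conversational agent is always present
  if (topicComplexity(history) >= 4) {
    agents.push("support"); // the task-runner surfaces only when likely useful
  }
  return agents;
}

// Example: the support agent appears once the user starts asking for work to be done.
const history: Turn[] = [
  { text: "What were Q3 toner sales in EMEA?" },
  { text: "Can you generate a report and schedule a review?" },
];
console.log(visibleAgents(history)); // ["main", "support"]
```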

Result

The final design divided the Copilot feature into two separate agents a user would interact with. The main agent would provide answers and guidance for all your questions, while the support agent followed up, asking if it could complete associated tasks for you. After extended use, the AI would learn to anticipate common questions you and your colleagues have, and take the initiative to handle more tasks once you’ve become comfortable with its work.
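A minimal sketch of that handoff, assuming made-up message shapes and a canned rule standing in for the model (nothing here reflects Copilot’s real agent API):

```typescript
// Hypothetical sketch of the dual-agent handoff: the main agent answers,
// then the support agent offers to complete a related task.
// Message shapes and the suggestTask() rule are invented for illustration.
interface AgentMessage {
  agent: "main" | "support";
  text: string;
  requiresConfirmation?: boolean;
}

function answer(question: string): AgentMessage {
  return { agent: "main", text: `Here's what I found about "${question}"…` };
}

function suggestTask(question: string): AgentMessage | null {
  // In a real system this would come from the model; here, a canned rule.
  if (question.toLowerCase().includes("report")) {
    return {
      agent: "support",
      text: "Want me to draft that report and share it with your team?",
      requiresConfirmation: true, // the user stays in control of actions
    };
  }
  return null; // stay quiet when no task applies
}

function handleTurn(question: string): AgentMessage[] {
  const messages = [answer(question)];
  const followUp = suggestTask(question);
  if (followUp) messages.push(followUp);
  return messages;
}

console.log(handleTurn("Where is the Q3 sales report?"));
```

The confirmation flag captures the trust-building idea above: the support agent proposes, but the user approves anything it actually executes.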


My next step would be to design a set of prototype tests to get quick feedback on the concepts. It would be fairly easy and economical to set up an unmoderated study on a remote testing site like UserZoom or Userlytics to gauge people’s attitudes about trust, perception of competence, and ultimately their preferred experience. Over time the study could be updated to test new Copilot skills and users’ attitudes.
