ATARC AIDA Guidebook - FINAL 62
Once the initial rounds of interviews have been conducted, the project team will gather to
analyze the findings from the interviews. Guided by their User-Centered Engineer, the team will
then prepare for a Design Thinking Workshop. This workshop will:
Once the team has analyzed the research, they converge on their approaches to the solutions.
This means synthesizing all that was learned from the stakeholders and the others impacted by
the design of the AI and bringing it all together.
Artificial Intelligence and Data Analytics (AIDA) Guidebook
• The front-end interface that allows the users to interact with the system
• The back-end system, which houses the database, the algorithms that power the system,
and other supporting technology
Once the team has an idea of the problem, they can begin the solutioning process, moving into
the second diamond of the Double Diamond model.
For the interface, the User-Centered Engineer can walk the stakeholder group (including at
least two potential users of the system and the rest of the user experience team) through a
Design Studio Workshop. These workshops typically run two to three hours, with additional
time if new participants need to be brought up to speed on the decisions made during the
initial workshop. (These should not be confused with Design Thinking workshops; Design
Studios are focused on creating the interface the users will see.) The User-Centered Engineer
will then create several versions of the interface based on the outcomes of the workshop.
Human-Centered Metrics
Beyond typical usability testing procedures, we recommend the project team take advantage of
the latest developments in human-machine teaming research to identify additional metrics for
success. These metrics can act as a “north star” and drive discussions early in the process.
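The classic usability metrics referenced above (effectiveness, efficiency, and satisfaction) can be computed directly from test-session data. The sketch below is a minimal illustration, not anything prescribed by the Guidebook; the function name, session fields, and the use of the System Usability Scale (SUS) for satisfaction are our own assumptions.

```python
from statistics import mean

def usability_summary(sessions):
    """Summarize classic usability metrics from test sessions.

    Each session is a dict with (illustrative field names):
      - completed: bool  -> effectiveness: did the user finish the task?
      - seconds:   float -> efficiency: time on task
      - sus:       float -> satisfaction: SUS score, 0-100
    """
    return {
        "effectiveness": mean(1.0 if s["completed"] else 0.0 for s in sessions),
        "efficiency_s": mean(s["seconds"] for s in sessions),
        "satisfaction_sus": mean(s["sus"] for s in sessions),
    }

# Three hypothetical usability-test sessions:
sessions = [
    {"completed": True, "seconds": 95.0, "sus": 82.5},
    {"completed": True, "seconds": 120.0, "sus": 70.0},
    {"completed": False, "seconds": 300.0, "sus": 45.0},
]
print(usability_summary(sessions))
```

Tracking these three numbers across design iterations gives the team an early, quantitative signal alongside the human-machine teaming metrics discussed below.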
As the designers and builders of the system communicate with the stakeholders, they need to
include the ongoing HCAI attributes. By addressing these considerations upfront, stakeholders
who may be resistant to change can be assured that the team is following an approach that
builds trust. By demonstrating the AI system as it is built and including these considerations in
the demonstrations, stakeholders can be confident that the team is covering all the bases in
building a trustworthy AI system.
For further consideration, MITRE’s HMT Systems Engineering Guide identifies the following
HMT leverage points that can inform success (see the link below for definitions):31
• Observability
• Predictability
• Directing Attention
• Exploring the Solution Space
• Adaptability
• Directability
• Calibrated Trust
Example: Computer Vision. Models are designed to translate visual data based on features and
contextual information identified during training. This enables models to interpret images and
video and apply those interpretations to predictive or decision-making tasks.
Tools are available for evaluating some of these human-centered metrics. MITRE’s Calibrated
Trust Evaluation Toolkit (https://comm.mitre.org/calibrated-trust-toolkit/), for example, helps
ensure that users’ expectations match the system’s actual capabilities. Value cards can be used
to estimate whether models would be accepted or preferred by various stakeholders (Shen et
al. 2021). There are also a variety of tools for exploring the fairness of machine learning models.
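One simple way to frame calibrated trust is as the gap between how much users trust the system and how capable it actually is. The sketch below illustrates that idea only; it is not the method used by MITRE’s Calibrated Trust Evaluation Toolkit, and the function name and 0–1 trust scale are our own assumptions.

```python
def trust_calibration_gap(trust_ratings, outcomes):
    """Gap between users' average trust in the system (rated 0-1)
    and the system's observed accuracy (fraction of correct outputs).

    Near zero -> trust is calibrated to actual capability.
    Positive  -> over-trust; negative -> under-trust.
    """
    avg_trust = sum(trust_ratings) / len(trust_ratings)
    accuracy = sum(outcomes) / len(outcomes)
    return avg_trust - accuracy

# Hypothetical data: users report high trust (0.9 average),
# but the system is correct only 60% of the time (1 = correct):
gap = trust_calibration_gap([0.95, 0.9, 0.85], [1, 0, 1, 0, 1])
print(round(gap, 2))  # 0.3 -> users over-trust the system
```

A persistent positive gap is a signal to temper user expectations (through training or interface cues); a negative gap suggests the team should demonstrate the system’s actual capabilities to build warranted trust.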
31
MITRE, Human-Machine Teaming Systems Engineering Guide,
https://www.mitre.org/sites/default/files/publications/pr-17-4208-human-machine-teaming-systems-engineering-guide.pdf, pp. 1-3.
and effort in this definition of the end state, in communication with staff, and in training staff
for new roles and responsibilities.
The high-level approach for AI-centric organizational change management is to build on initial
successes, enable AI system deployment owners to expand their footprint, and create a
general awareness and understanding of AI systems’ success across the enterprise. Awareness
expansion will be driven by governance, communication, and training. The following steps are
typical for organizational change management, although every implementation will require
solutions tailored to a specific focus:
1. Clearly define the change and align it to business goals
• Understand the ‘Why’
• Understand the ‘How’ and assess feasibility and preparedness
2. Create a roadmap from the current state to the desired future state after
implementation of the change
3. Identify the leadership and implementation teams to begin assigning roles and
responsibilities
4. Identify the impacts and all individuals to be affected by implementation
5. Develop a communication structure and plans for training and onboarding for the
desired changes
6. Move forward with implementation – ensure consistent communication with the
implementation team throughout the process to work through roadblocks as they
emerge
7. Set up a structure to measure success, identify challenges, and assess the change
management process overall. Utilize this structure during and after implementation
to measure performance and identify key areas for improvement or future growth