
Modeling and Understanding Human Routine Behavior

Nikola Banovic, Tofi Buzali, Fanny Chevalier, Jennifer Mankoff, Anind Dey

To cite this version:


Nikola Banovic, Tofi Buzali, Fanny Chevalier, Jennifer Mankoff, Anind Dey. Modeling and Understanding Human Routine Behavior. ACM CHI Conference on Human Factors in Computing Systems 2016, ACM, May 2016, Santa Clara, California, United States. pp. 248-260, 10.1145/2858036.2858557. hal-01416119

HAL Id: hal-01416119


https://hal.inria.fr/hal-01416119
Submitted on 14 Dec 2016

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Modeling and Understanding Human Routine Behavior

Nikola Banovic1, Tofi Buzali1, Fanny Chevalier2, Jennifer Mankoff1, and Anind K. Dey1

1 Human-Computer Interaction Institute, CMU, Pittsburgh, PA, 15213, USA
{nbanovic, tofi, jmankoff, anind}@cs.cmu.edu
2 INRIA, Lille, France
[email protected]
ABSTRACT
Human routines are blueprints of behavior, which allow people to accomplish purposeful repetitive tasks at many levels, ranging from the structure of their day to how they drive through an intersection. People express their routines through actions that they perform in the particular situations that triggered those actions. An ability to model routines and understand the situations in which they are likely to occur could allow technology to help people improve their bad habits, inexpert behavior, and other suboptimal routines. However, existing routine models do not capture the causal relationships between situations and actions that describe routines. Our main contribution is the insight that byproducts of an existing activity prediction algorithm can be used to model those causal relationships in routines. We apply this algorithm on two example datasets, and show that the modeled routines are meaningful—that they are predictive of people's actions and that the modeled causal relationships provide insights about the routines that match findings from previous research. Our approach offers a generalizable solution to model and reason about routines.

Author Keywords
Inverse Reinforcement Learning; Markov Decision Process.

ACM Classification Keywords
H.5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous.

INTRODUCTION
Routine behavior defines the structure of and influences almost every aspect of people's lives. Routines are defined by frequent actions people perform in different situations that are the cause of those actions [17]. Routines are a type of purposeful behavior made up of goal-directed actions [37], which people acquire, learn and develop through repeated practice [27, 34]. As such, good routines enable predictable and efficient completion of frequent and repetitive tasks and activities [30]. Routines can describe people's daily commute, their sleeping and exercising patterns, or even low-level tasks, such as how they operate their vehicle through an intersection. Routines, like most other kinds of human behaviors, are not fixed, but instead may vary and adapt based on feedback and preference [16].

A key aspect of being able to understand and reason about routines is being able to model the causal relationships between people's situations and the actions that describe the routines. We refer to frequent departures from established routines as routine variations, which are different from deviations and other uncharacteristic behavior that do not contribute to the routines. An ability to model routines and their variations, within and across people, could help researchers better understand routine behavior, and inform technology that influences routine behavior and helps people improve the quality of their lives.

Studies of routines often characterize routines in terms of a series of actions, typically derived from large activity data sets. Data mining algorithms may automatically extract patterns from such data (e.g., [7, 14, 15, 26, 35]), while visualization can help researchers to interrogate the data (e.g., [2, 31, 40]). However, these existing approaches do not model the causal relationship between situations and actions. This makes it difficult to study and explain which situations or contexts, defined as the environmental information relevant to an individual's current activity [13], trigger which routine actions. A different approach is to detect [8], classify [18] and predict [5, 22] activities from large data sets. Although such approaches imply the causality between the contexts and actions, it is difficult to extract meaning about routines from models learned using those algorithms and understand the reasons why they make certain classifications and predictions [25].

To address those issues, we present a novel approach to automatically extract and model routines and routine variations from human behavior logs. Our approach supports both individual and population models of routines, providing the ability to identify the differences in routine behavior across different people and populations. Our main contribution to modeling routines is our insight that the byproducts of MaxCausalEnt [46], a decision-theoretic algorithm typically used to predict people's activity, actually encode the causal relationship between routine actions and context in which people perform those actions. These causal relationships allow for reasoning about and understanding of the extracted routines.

Using two different existing human activity data sets, we evaluate the ability of our approach to extract different
types of routines from diverse types of behavior: people's daily schedules and commutes [11] and activities that describe how people operate a vehicle [18]. We show that the extracted routine patterns are at least as predictive of behaviors in the two behavior logs as the baseline we establish with existing algorithms. Next, we recruited researchers that work with human activity and routine data to verify that patterns extracted using our approach are meaningful and match the ground truth reported in previous work [11, 18]. For the purposes of this task, we developed a tool that enabled the researchers to visually explore and compare the routines extracted using our approach.

Our first set of results show that our approach enables extraction of a reasonable set of human readable patterns of routine behavior from behavior logs. Our second set of results demonstrate that the models of causal relationships provided by our approach can help researchers explore, understand, and form new insights about human routines from behavior logs without having to manually search for those patterns in raw data. Another benefit of our approach compared to existing systems is that it promises efficient automated prediction and reasoning about routines even under uncertainty that is inherent in human behavior. We discuss how such models and knowledge about routines can inform the design of novel systems that help people improve their behavior.

UNDERSTANDING HUMAN ROUTINE BEHAVIOR
Many different stakeholders may care about understanding routine behavior. For example, a clear picture of the many aspects of routine behavior can help researchers to generate theories and models of human routine behavior. Designers may build on such theories to design technologies that help people to improve their routines [11, 12]. Models of human routines may be used for prediction and automation [43]. Individuals may wish to reflect about their own routines for better understanding and supporting behavior change [23].

Although routine behaviors result from low-level cognitive plans, which can be modeled using existing cognitive architectures (e.g., [4]), such models are difficult to apply in understanding high-level routine behaviors. Thus our focus in this work is on how routines are expressed in actions people perform in different contexts.

Through an analysis of stakeholder use cases, and a review of the literature, we derive three research questions that these stakeholders, and in particular researchers, who analyze routines may wish to answer in order to meet their goals. We review existing approaches to understanding routines based on how well they answer those questions.

Current Approaches to Understanding Routine Behavior
Visualizing data from behavior logs is a common way for researchers to identify routines. Logged behavior data is often visualized on a timeline as a sequence of events. The simplicity of this approach makes it applicable to a variety of domains, such as schedule planning to show uncertainty of duration of different events [2], visualizing family schedules [11], and representing events related to patient treatments [31]. More advanced timelines enable the user to specify properties of the timeline for easier viewing. For example, Spiral Graph [39] aligns sequential events on a spiral timeline using a user-defined period. However, due to the complexity and size of behavior logs, simply visualizing raw event data does not guarantee that the user will be able to find patterns of behaviors that form routines.

Other visualization approaches enable users to find patterns in time-series data by manually querying and highlighting different parts of behavior event sequences [9, 41, 42] or manually aggregating common sequences of events based on features in the data [20, 28] until meaningful patterns emerge. The user is then able to judge the quality and saliency of the patterns by visual inspection. However, during the early exploratory stages, users might not always know what features are important and contribute to routine behavior patterns, making manual exploration challenging.

One major limitation of existing visualizations lies in their lack of support for both context and actions. Rather, they focus on isolated events, or temporal evolution of a particular state (e.g., sleep vs. awake) or variable (e.g., the amount of steps walked per day). Visualizing both context and actions is challenging partially because even advanced interactive visualizations have difficulty in visualizing routine patterns that depend on multiple heterogeneous variables, especially as the number of variables grows.

Automated routine extraction and summarization is another option for exploring routines in behavior logs. However, patterns extracted using the existing methods often do not include important aspects of routine behavior. For example, T-patterns [7, 26] can automatically find recurrences of events in behavior logs, but do not explicitly capture the contexts and actions. Other methods, such as Topic Models [15], include features that describe both context and actions, but without modeling the structure of possible variations from those routines. Methods based on Hierarchical Task Networks [24] and Eigen decomposition [14, 35] capture the structural components of the contexts and actions, but do not explicitly model the causal relationship between the two that defines the routine [17]. While these algorithms are helpful for extracting routine, they are not sufficient for helping to understand routine behaviors.

This leads to our first research question that researchers want to answer: RQPat: How can we expose the full complexity of routine behavior patterns? To extract and understand routines, it is critical to discern which features of the context influence routine actions and which features of the actions people have a demonstrated preference for.

Where Routines End and Variations Begin
How people respond to routine variations is also an important part of routine behavior. Human routine behavior is not static, which makes variations from the routines
inevitable [16]. Variations often result from new contexts or unforeseen circumstances for which people have not yet found a routine [11], or occasions when people want to explore alternative ways to accomplish their tasks [16]. For example, parents may vary their routine when their child has a new scheduled activity that is not part of their current routines or when they unexpectedly have to pickup their child from an existing activity [11]. Understanding variations is also important to understand the tradeoffs between different behaviors that people have. For example, in designing systems that help people comply with their gym routine, it is important to understand what other activities cause people to depart from their gym routine. However, existing machine learning algorithms (e.g., [5, 8, 18, 22]) purposefully disregard variations in human behavior to focus on classifying and predicting only the most frequent human activity. Also, some variations may happen infrequently in data and are difficult to detect using those existing algorithms. Some specific infrequent variations may be detectable (such as detecting when parents are going to be late to pickup their children [12]). However, this requires a case-by-case approach to address each kind of variation, which can be difficult to apply if all possible variations are not known a priori.

This leads to our second research question researchers want to answer: RQVar: How can we expose variations of routines and enable the discovery of the cause of those behaviors? Similar to our first question, routine-modeling approaches must identify aspects of context that influence how people behave in the face of variations.

Routines Across Individuals and Populations
The third issue to consider is how to explore the differences in routines within and across individuals and populations. Different people often develop their own routines to deal with their individual contexts. For example, differences in family daily routines can explain the responsibilities of different family members [11].

Also, to help people improve their routines it is important to understand the difference between peoples' desired routines and their actual routines. For example, to design systems that help people better organize their schedules to avoid being late to pickup their children requires understanding differences between routines for days when they are late and days when they are on time.

Such comparisons can be performed between routines of an individual, but also between routines of different people or populations. For example, to create interventions that help aggressive drivers improve their driving style requires an understanding of the differences between aggressive and non-aggressive driving routines [18]. Based on how close drivers are to aggressive or non-aggressive routines, they can be classified into aggressive and non-aggressive groups. Researchers can then identify routine behaviors that need to change to make aggressive drivers less aggressive overall.

Existing algorithms can, of course, be applied to individual or population models of routines. For example, a standard strategy is to build a model for each population and then compare them to establish the differences and similarities between the populations. Population models can then be visualized using existing approaches: Hierarchical Task Networks [24] can be visualized using existing Probabilistic Context-Free Grammar-based tools (e.g., [10]), and Arc Diagrams [38] can visualize temporal patterns extracted using T-patterns [26]. Those population models can then be coordinated and displayed across multiple views [33]. However, such visualization techniques are not yet widely adopted. Finding differences in models is still often based on intuition and expert domain knowledge, and limited to visually comparing the distributions of feature values.

This leads to our third question researchers want to answer: RQComp: How can we support comparison of routines across individuals and populations? Therefore, we need to ensure that different aspects of routines and routine variations that support comparison between different people and populations are included in the models.

MODELING HUMAN ROUTINE BEHAVIOR
In this section we present our approach to modeling human routine behavior that applies an existing decision-theoretic algorithm to the domain of routine modeling. To identify routine patterns (RQPat), we explicitly model the causal relationship between contexts in which different routines occur and the actions that people perform in those contexts. Unlike models that extract only the most frequent routines, our approach also models possible variations from those routines (RQVar), even in infrequent contexts. Our approach does this by modeling probability distributions over different possible behaviors, which allows the researcher to make sense about which of those behaviors form routines and which form variations. Our approach can model both individual and population routine behavior, and thus allows comparisons between those models (RQComp).

Data Modeling
Human behavior data is often collected using different sensors and stored into behavior logs. In our approach we first convert the behavior logs into sequences of events representing people's current context and the actions they perform in that context. We then use those sequences of events to model human routine behavior.

Model of Human Routine Behavior
We model demonstrated routine behavior using a Markov Decision Processes (MDP) framework [32]. MDP is particularly well suited for modeling human routine behavior because it explicitly models the user's context, the actions that can be performed in that context, and the preferences people have for different actions in different contexts. A Markov decision process is a tuple:

MDP = ⟨S, A, P(s′|s, a), R(s, a)⟩   (1)
It consists of a set of states S (s ∈ S) representing context, and actions A (a ∈ A) that a person can take. In addition, the model includes an action-dependent probability distribution for each state transition P(s′|s, a), which specifies the probability of the next state s′ when the person performs action a in state s. This state transition probability distribution P(s′|s, a) models how the environment responds to the actions that people perform in different states. When modeling human behavior, the transitions are often stochastic (each pair (s, a) can transition to many transition states s′ with different probabilities). However, if the person has full control over the environment, they can also be deterministic (i.e., for each pair (s, a) there is exactly one transition state s′ with probability 1). Finally, there is a reward function R(s, a) → ℝ that the person incurs when performing action a in state s, which represents the utility that people get from performing different actions in different contexts.
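As a minimal sketch (our own illustration, not the authors' implementation), the elements of such an MDP can be held in a few simple mappings; the states and actions below are invented commute examples.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple, List

State = str
Action = str

@dataclass
class RoutineMDP:
    states: List[State]
    actions: List[Action]
    # P(s'|s, a): for each (state, action) pair, a distribution over next states.
    transitions: Dict[Tuple[State, Action], Dict[State, float]]
    # R(s, a): utility the person derives from taking action a in state s.
    rewards: Dict[Tuple[State, Action], float] = field(default_factory=dict)

# A toy fragment of a commute MDP (hypothetical states and actions).
mdp = RoutineMDP(
    states=["Mon-8-Home", "Mon-9-Home", "Mon-9-Work", "Mon-9-Commuting"],
    actions=["STAY_AT_Home", "TRAVEL_TO_Work"],
    transitions={
        ("Mon-8-Home", "STAY_AT_Home"): {"Mon-9-Home": 1.0},    # full control: deterministic
        ("Mon-8-Home", "TRAVEL_TO_Work"): {"Mon-9-Work": 0.8,   # environment adds uncertainty
                                           "Mon-9-Commuting": 0.2},
    },
)
```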
People's behavior is then defined by sequences of actions they perform as they go from state to state until reaching some goal state. In an MDP framework, such behavior is defined by a deterministic policy (π: S → A), which specifies actions people take in different states. Traditionally, the MDP is "solved" using algorithms, such as value iteration [6], to find an optimal policy (with the highest expected cumulative reward). However, our goal is to find the expected frequencies of different states and the probability distribution of actions given states P(a|s) instead—information necessary to identify people's routines and variations.

Learning Routine Patterns from Demonstrated Behavior
In this section we explore how the MaxCausalEnt algorithm [46], an algorithm typically used to predict human behavior [43, 44], can be applied in a novel way to extract routine behavior patterns from observed data. MaxCausalEnt algorithm makes its predictions by computing a policy (π: S → A) that best predicts the action people take in different states. Our main contribution is our insight that in the process of computing this policy, MaxCausalEnt algorithm computes two other functions that express how likely it is that a state and action are part of a routine: 1) the expected frequency of states (D_s), and 2) probability distribution of actions given states P(a|s). We now describe how we compute these two functions and how they relate to routines.

Inverse Reinforcement Learning (IRL) [29] approaches, which MaxCausalEnt is based on, assume that people assign a utility function (modeled as the reward functions R(s, a)), which they use to decide which action to perform in different demonstrated states. Each state and action combination in our MDP model is expressed by a feature vector f(s, a). For example, in an MDP that models daily commute routines, states can have features that describe all possible locations that a person can be at, and actions can have features that describe if the person is staying at or leaving the current location. As is common for IRL algorithms [29, 45], we assume a parametric reward function that is linear in f(s, a), given unknown weight parameters θ:

R(s_t, a_t) = θᵀ · f(s_t, a_t)   (2)

We begin the process of recovering the expected state frequencies (D_s) and probability distribution of actions given states (P(a|s)) by trying to learn the person's reward functions R(s, a) from demonstrated behavior. This problem reduces to matching the model feature function expectations (E_P(S,A)[f(S, A)]) with demonstrated feature expectations (E_P̃(S,A)[f(S, A)]) [1]. To match the expected counts of different features, we use MaxCausalEnt IRL [45], which learns the parameters of the MDP model to match the actual behavior of the person. Unlike other approaches described earlier, MaxCausalEnt explicitly models the causal relationships between context and actions, and keeps track of the probability distribution of different actions that people can perform in those contexts.

To compute the unknown parameters θ, MaxCausalEnt considers the causal relationships between all the different features of the states and the actions. The Markovian property of MDP, which assumes that the actions a person performs only depend on the information encoded by the previous state, makes computing the causal relationships between the states and actions computationally feasible. MaxCausalEnt extends the Principle of Maximum Entropy [19] to cases where information about probability distribution is sequentially revealed, as is the case with behavior logs. This principle ensures that the estimated probability distribution of actions given states P(a|s) is the one that best fits the state and action combinations from the sequences in the behavior logs.

MaxCausalEnt IRL maximizes the causal entropy H(A^T ∥ S^T) of the probability distribution of actions given states P(A_t|S_t):

argmax_P(A_t|S_t) H(A^T ∥ S^T)   (3)

such that:

E_P(S,A)[f(S, A)] = E_P̃(S,A)[f(S, A)]
P(a_t|s_t) ≥ 0 for all s_t, a_t
Σ_a_t P(a_t|s_t) = 1 for all s_t

The first constraint in the above equation ensures that the feature counts calculated using the estimated probability distribution of actions given states (P(A_t|S_t)) matches the observed counts of features in the data, and the other two ensure that P(A_t|S_t) is an actual probability distribution. Using the action-based cost-to-go (Q), which represents the expected value of performing action a_t in state s_t, and state-based value (V) notation, which represents the
expected value of being in state s_t, the procedure for MaxCausalEnt IRL reduces to [45]:

Q^soft_θ(a_t, s_t) = Σ_s_t+1 P(s_t+1 | s_t, a_t) · V^soft_θ(s_t+1)   (4)

V^soft_θ(s_t) = softmax_a_t [ Q^soft_θ(a_t, s_t) + θᵀ · f(s_t, a_t) ]

Note that this is similar, but not the same as stochastic value iteration [6], which would model optimal and not observed behavior. The probability distribution of actions given the states is then given by:

P(a_t | s_t) = e^(Q^soft_θ(a_t, s_t) − V^soft_θ(s_t))   (5)

The probability distribution of actions given states P(a|s) and the state transition probability distribution P(s′|s, a) are used in a forward pass to calculate the expected state frequencies (D_s). This optimization problem can then be solved using a gradient ascent algorithm. Ziebart [46] provides proofs of these claims and detailed pseudocode for the algorithm above.
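To make the procedure concrete, the following is a compact sketch of the backward pass (Equations 4-5) and the forward pass that yields D_s, written in the spirit of that pseudocode. It assumes small, enumerable state and action sets, a fixed horizon, and a fixed θ (in practice θ is learned by gradient ascent on the feature-matching objective); it is an illustration under those assumptions, not the authors' implementation.

```python
import numpy as np

def soft_backward_pass(P, features, theta, horizon):
    """Backward pass (Eq. 4-5): returns policy[t][s, a] = P(a_t | s_t).
    P[s, a, s'] holds transition probabilities; features[s, a] is the feature vector f(s, a)."""
    n_s, n_a, _ = P.shape
    reward = features @ theta                     # R(s, a) = theta^T f(s, a), shape (n_s, n_a)
    V = np.zeros(n_s)                             # V^soft at the end of the horizon
    policy = []
    for _ in range(horizon):
        Q = np.einsum("sap,p->sa", P, V)          # Q^soft(a, s) = sum_s' P(s'|s, a) V^soft(s')
        Q += reward
        V = np.logaddexp.reduce(Q, axis=1)        # softmax (log-sum-exp) over actions
        policy.insert(0, np.exp(Q - V[:, None]))  # P(a|s) = exp(Q - V)
    return policy

def forward_pass(P, policy, initial, horizon):
    """Forward pass: expected state visitation frequencies D_s under the learned policy."""
    D = initial.copy()                            # distribution over initial states
    total = initial.copy()
    for t in range(horizon - 1):
        joint = D[:, None] * policy[t]            # probability of being in s and taking a
        D = np.einsum("sa,sap->p", joint, P)      # push forward through the dynamics
        total += D
    return total

# Tiny hypothetical example: 2 states, 2 actions, 2 features, horizon of 3 steps.
P = np.array([[[1.0, 0.0], [0.2, 0.8]],
              [[0.0, 1.0], [0.9, 0.1]]])
features = np.array([[[1.0, 0.0], [0.0, 1.0]],
                     [[0.5, 0.5], [1.0, 0.0]]])
theta = np.array([0.3, -0.1])
policy = soft_backward_pass(P, features, theta, horizon=3)
D_s = forward_pass(P, policy, initial=np.array([1.0, 0.0]), horizon=3)
print(D_s)
```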
Extracting Models of Routine Behavior
We illustrate our routine modeling approach on two previously collected data sets from the literature that contain logs of demonstrated human behavior. The first data set contains daily commute routines of all family members from three two-parent families with children from a mid-sized city in North America [11]. The data set was used to predict the times the parents are likely to forget to pickup their children [12]. The other data set contains driving routine behavior of aggressive and non-aggressive drivers as they drive on their daily routes [18]. The data set was used to classify aggressive and non-aggressive drivers.

We picked these two data sets to show the generalizability of our approach to different types of routines. The two data sets contain routine tasks people perform on a daily basis, but that are very different in nature. The family daily routine data set incorporates the traditional spatio-temporal aspect of routines most of the existing work focuses on. The driving data set contains situational routines that are driven by other types of context (e.g., the surrounding traffic, the current position of the car in the intersection).

The two data sets also differ in granularity of the tasks. The commute routines happen over a longer period of time and the granularity of the task is very coarse with few actions that people can perform in different contexts (e.g., stay at the current place or leave and go to another place). The daily routines are therefore defined by the states the people are in. The aggressive driving data set contains fine-grained actions, which often occur in parallel, that people perform to control the vehicle (e.g., control the gas and brake pedals and the steering wheel). Driving routines are therefore primarily defined by the drivers' actions in different driving situations. The driving data set also showcases the ability of our approach to capture population models (e.g., aggressive drivers vs. non-aggressive drivers) and enable comparison of routines across different populations.

The data in our two example data sets consists of sequences of sensor readings, which we convert into sequences of events represented by state action pairs. Parsing the raw data we are able to extract: 1) a set of states S defined by a set of features f(s_t) which represent context, 2) a set of actions A defined by a list of binary features f(a_t) which represent activities that the people can perform, and 3) empirically estimated state-action transition dynamics (P(s′|s, a)). At any discrete event step, the state features contain values of all the contextual sensor readings at that event, and actions contain feature values describing the activity the people performed at that event. We then estimate state-action transition dynamics based on the frequencies of state transitions in the state-action event sequences and estimate the expected state frequency counts (D_s) and the state-action probability distributions (P(a|s)) as described in the previous section.
could stay for another hour, leave the location, and once
the current place or leave and go to another place). The
they have left a location go to another location. The data
daily routines are therefore defined by the states the people
contained a total of 149 days.
are in. The aggressive driving data set contains fine-grained
actions, which often occur in parallel, that people perform We modeled the state transition probabilities (𝑃 𝑠′|𝑠, 𝑎 ) as
to control the vehicle (e.g., control the gas and brake pedals a stochastic MDP to model the environment’s influence on
and the steering wheel). Driving routines are therefore arrival time to a destination. The participants could stay or
primarily defined by the drivers’ actions in different driving leave a place with 100% probability. Once the participants
situations. The driving data set also showcases the ability of leave their current location, their arrival time at their
our approach to capture population models (e.g., aggressive destination depends on their desired arrival time and the
drivers vs. non-aggressive drivers) and enable comparison environment (e.g., traffic, travel distance). To model the
of routines across different populations.
influence of the external variables, we empirically estimate the probability that participants have arrived at another place within an hour or not. The median number of states and actions per family were 14,113 and 85 respectively, for all combinations of possible features.

Table 1. State features capturing the different contexts of a daily commute.
Day: Day of week {M, T, W, Th, F, Sa, Su}
Time: Time of day in increments of 1 hour {0-23}
Location: Current location
Activity: Activity in the past hour {STAYED AT, ARRIVED AT, TRAVELING FROM}

Table 2. Action features representing actions that people can perform when at a location.
Activity: Activity people can perform in current context {STAY AT, TRAVEL TO}
Location: The current location to stay at or next location to go to
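For example, a state-action pair built from the features in Tables 1 and 2 could be turned into the binary feature vector f(s, a) with one-hot encodings along the following lines. This is a sketch with hypothetical place labels, not the authors' exact encoding.

```python
import numpy as np

DAYS = ["M", "T", "W", "Th", "F", "Sa", "Su"]
HOURS = list(range(24))
PLACES = ["Home", "Work", "School", "Gym", "Other"]          # hypothetical label set
STATE_ACTIVITIES = ["STAYED_AT", "ARRIVED_AT", "TRAVELING_FROM"]
ACTION_ACTIVITIES = ["STAY_AT", "TRAVEL_TO"]

def one_hot(value, values):
    vec = np.zeros(len(values))
    vec[values.index(value)] = 1.0
    return vec

def feature_vector(state, action):
    """f(s, a): concatenated one-hot encodings of the state and action features."""
    day, hour, place, activity = state
    act, target = action
    return np.concatenate([
        one_hot(day, DAYS), one_hot(hour, HOURS), one_hot(place, PLACES),
        one_hot(activity, STATE_ACTIVITIES),
        one_hot(act, ACTION_ACTIVITIES), one_hot(target, PLACES),
    ])

f = feature_vector(("M", 8, "Home", "STAYED_AT"), ("TRAVEL_TO", "Work"))
print(f.shape)  # (7 + 24 + 5 + 3) + (2 + 5) = 46 features in this toy encoding
```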
Aggressive Driving Behavior Data Set
Drivers that routinely engage in aggressive driving behavior present a hazard to other people in traffic [3]. To understand aggressive driving routines, it is important to explore the types of contexts aggressive drivers are likely to prefer (e.g., turn types, car speed, acceleration) and the driving actions they apply in those contexts (e.g., throttle and braking level, turning) (RQPat). Aggressive drivers might also be prone to dangerous driving behavior that does not occur frequently (e.g., rushing to clear intersections during rush hour [36]). Such behavior might manifest itself as variations from established routines (RQVar).

It is also important to compare the routines of aggressive drivers with non-aggressive drivers to understand how aggressive drivers can improve their routine (RQComp). To understand those differences, it is not enough to compare the contexts both groups of drivers find themselves in, but also the actions that drivers perform in those contexts. This is because both aggressive and non-aggressive drivers can attain similar driving contexts, but the quality of the execution of driving actions may differ. For example, both types of drivers might stop at a stop sign on time, but aggressive drivers might have to brake harder or make more other unsafe maneuvers than non-aggressive drivers.

This data set contains driving data from 22 licensed drivers (11 male and 11 female; ages between 21 and 34) from a mid-sized city in North America [18]. Participants were asked to drive their own cars on their usual daily driving routes over a period of 3 weeks. Their cars were instrumented with a sensing platform consisting of an Android-based smartphone, On-board Diagnostic tool (OBD2), and an inertial measurement unit (IMU) mounted to the steering wheel of the car. Ground truth about participants' driving styles (aggressive vs. non-aggressive) was established using their self-reported driving violations and responses to the driver behavior questionnaire [18]. The driving data collected in the study included: car location traces (latitude and longitude), speed, acceleration, engine RPM, throttle position, and steering wheel rotation. Sensor data was recorded every 500 milliseconds.

We use a subset of this data focused on intersections (where instances of aggressive driving are likely to occur [36]). We used location traces of the participants' driving routines to manually label intersections and the position of the vehicle in those intersections. One of the limitations of this data set is that there is no information about other vehicles and traffic signs and signals that represent the environment. We then split the intersection instances into sequences of sensor readings that start 2 seconds before the car enters the intersection, and end 2 seconds after the car exits the intersection. This resulted in a total of 49,690 intersections from a total of 542 hours of driving data from 1,017 trips.

To model states we combined the driver's goals (e.g., make a right turn), the environment (e.g., position in intersection), and the current state of the vehicle (e.g., current speed) into features of the states (Table 3). Actions in our model represent how the driver operates the vehicle by steering the wheel, and depressing the gas (throttle) and brake pedals. We aggregate the driver's actions between different stages of the intersection and represent the median throttle and braking level, and note any spikes in both throttle and braking. We consider the movement of the steering wheel to estimate whether the driver turned in one smooth action, or if the turn required one or more adjustments. Table 4 shows action features in our model. We identified 7,272 different states and 446 different actions in the data set.

Table 3. State features capturing the different contexts the driver can be in.
Goals
Maneuver: The type of maneuver at the intersection {STRAIGHT, RIGHT TURN, LEFT TURN, U-TURN}
Environment
Position: Current position of the car in the intersection {APPROACHING, ENTERING, EXITING, AFTER}
Rush hour: Whether the trip is during rush hour or not {TRUE, FALSE}
Vehicle
Speed: Current speed of the vehicle (5-bin discretized)
Throttle: Current throttle position (5-bin discretized)
Acceleration: Current positive/negative acceleration (9-bin discretized)
Wheel Position: Current steering wheel position {STRAIGHT, TURNING, RETURNING}
Turn: Current turn vehicle is involved in {STRAIGHT, SMOOTH, ADJUSTED}

Table 4. Action features representing actions that drivers can perform between stages of the intersection.
Pedal: Median throttle (gas and brake pedal) position (10-bin discretized)
Throttle Spike: Sudden increases in throttle {NONE, SUDDEN, INTERMITTENT}
Brake Spike: Sudden braking {NONE, SUDDEN, INTERMITTENT}
Turn style: Type of turn driver performed in intersection {STRAIGHT, SMOOTH, ADJUSTED}
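The binned vehicle features in Tables 3 and 4 imply a discretization step somewhere in the pipeline. A sketch of how raw speed, acceleration, and throttle samples could be mapped to such bins is shown below; the bin edges and the spike threshold are invented for illustration, since the paper specifies only the number of bins.

```python
import numpy as np

# Hypothetical bin edges (the paper does not report the actual boundaries).
SPEED_BINS = [10, 20, 30, 40]                      # km/h edges -> 5 speed bins
ACCEL_BINS = [-3, -2, -1, -0.25, 0.25, 1, 2, 3]    # m/s^2 edges -> 9 acceleration bins

def discretize(value, edges):
    """Return the index of the bin that `value` falls into."""
    return int(np.digitize(value, edges))

def spike_label(samples, jump=0.3):
    """Label sudden increases between consecutive samples: NONE, SUDDEN, or INTERMITTENT."""
    jumps = sum(b - a > jump for a, b in zip(samples, samples[1:]))
    return "NONE" if jumps == 0 else ("SUDDEN" if jumps == 1 else "INTERMITTENT")

throttle_trace = [0.1, 0.15, 0.6, 0.62, 0.2, 0.7]  # samples between two intersection stages
print(discretize(35.0, SPEED_BINS), discretize(-1.5, ACCEL_BINS), spike_label(throttle_trace))
```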
VALIDATING THE MODELS OF ROUTINE BEHAVIOR
In this section, we evaluate the quality of the routines extracted using our model from the two data sets. First, we show that the routine actions we extract are predictive of the majority of behaviors in the data; i.e., that the algorithm is sufficiently predictive for modeling routines. Accuracy of this prediction task also quantifies the variability of the routines in the model, where high accuracy suggests low variability. It also shows that the extracted routines generalize to contexts and actions that we have not observed during model training. Second, we show that the routines extracted using our approach are meaningful. We show that the patterns extracted using our approach correspond to the actual routines and routine variations in our two example behavior logs (RQPat & RQVar). We also show that those routines and variations show the real differences between modeled populations (RQComp).

Quantifying Routineness of Human Behavior
Although we are not interested in the predictive power of the MaxCausalEnt IRL per se, we use the task of predicting the next action given a state to evaluate our model's ability to extract routine. Using 10-fold cross validation for each person in each dataset, we compare the performance of this algorithm for extracting routine behavior with a simple Zero-R algorithm, which always predicts the overall most frequent action, and a first-order Markov Model algorithm, which always predicts the most frequent action for each state. We chose these two baselines because they explicitly establish the frequency of actions in the training set. Matching or exceeding these baselines means that the algorithm has correctly identified frequent routine actions and that the predictive power of the algorithm is sufficiently high to model routines.
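This comparison follows a standard 10-fold protocol. A sketch of the scoring loop, with the two baselines written out explicitly, is shown below; it is our own illustration, and a learned MaxCausalEnt predictor would simply be passed in place of either baseline.

```python
from collections import Counter, defaultdict

def zero_r(train):
    """Zero-R: always predict the overall most frequent action in the training folds."""
    most_common = Counter(a for _, a in train).most_common(1)[0][0]
    return lambda s: most_common

def first_order_markov(train):
    """First-order Markov baseline: most frequent action observed for each state."""
    per_state = defaultdict(Counter)
    for s, a in train:
        per_state[s][a] += 1
    fallback = Counter(a for _, a in train).most_common(1)[0][0]
    return lambda s: per_state[s].most_common(1)[0][0] if s in per_state else fallback

def accuracy(predict, test):
    return sum(predict(s) == a for s, a in test) / len(test)

def ten_fold(events, make_predictor):
    """Average next-action prediction accuracy over 10 folds of one person's events."""
    folds = [events[i::10] for i in range(10)]
    scores = []
    for i, test in enumerate(folds):
        train = [e for j, fold in enumerate(folds) if j != i for e in fold]
        scores.append(accuracy(make_predictor(train), test))
    return sum(scores) / len(scores)

# events: list of (state, action) pairs for one person (toy data here).
events = [("Mon-8-Home", "TRAVEL_TO_Work"), ("Mon-9-Work", "STAY_AT_Work")] * 20
print(ten_fold(events, zero_r), ten_fold(events, first_order_markov))
```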
The mean accuracy of the MaxCausalEnt on the family daily routines dataset was 0.81 (SD=0.09), compared to first-order Markov Model mean accuracy of 0.66 (SD=0.07) and ZeroR mean accuracy of 0.51 (SD=0.09). MaxCausalEnt algorithm likely outperformed the first-order Markov Model because of its ability to better generalize from training data. The accuracy of MaxCausalEnt algorithm also suggests low variability of routines in people's daily schedules.

The mean accuracy of the MaxCausalEnt on individual models of driving routines was 0.54 (SD=0.05) compared to first-order Markov Model mean accuracy of 0.58 (SD=0.06) and ZeroR mean accuracy of 0.33 (SD=0.06). MaxCausalEnt algorithm and the first-order Markov Model had similar accuracies likely because in each fold the training set was representative of the testing set. However, decision-theoretic guarantees of MaxCausalEnt that ensure it makes the least number of assumptions to fit the observed data make it less likely to overfit the training data in general. Relatively low accuracy of both MaxCausalEnt and the first-order Markov Model on this data set suggests that there is a lot of variability in the driving routines.

Validating the Quality of the Extracted Routine
We now show that the routine patterns extracted using our approach match the actual routines of people. To do this, we recruited researchers that work with machine learning and data mining in the domain of human behavior, and asked them to identify the routines and variations extracted using our approach. We then confirmed that those patterns matched the ground truth behaviors established in the previous work [11, 18]. This allowed us to verify that the patterns extracted using our approach are meaningful and represent the actual routines.

Study Software
To make the routine behavior models created using our approach accessible to participants and allow them to investigate the extracted routine patterns, we developed a simple visualization tool. To maintain a level of familiarity, we base our visual encoding of routine behavior elements on a traditional visual representation of an MDP as a graph (Figure 1). Our MDP graph contains nodes representing states (as circles) and actions (as squares), directed edges from state nodes to action nodes (indicating possible actions people can perform in those states), and directed edges from actions to states (indicating state transitions for any given state and action combination).

To enable participants to see changes in features of states and actions, we encode state features and action features as a series of color-coded circular marks arranged in a spiral shape within the nodes. Each feature has a dedicated hue. Feature values that are present in the node are represented by a dark shade, and feature values not present in a light shade of that color. A dark boundary serves as a separator between features. More details, in text, are always available simply by moving the cursor over a node (Figure 1.C).

To show frequent behaviors in the model, we visually represent the probability of different graph elements using line thickness. Thickness of the outside line of the state and action nodes encodes the frequency of that state in a behavior sequence (D_s), where thicker lines indicate states that are likely to be part of a routine. Similarly, the thickness of the edges encodes the probability of that edge. Thickness of edges from states to actions is given by the probability distribution of actions given states (P(a|s)), and represents the influence of each state on the choice of actions. The thickness of edges from actions to states is given by the probability of transition (P(s′|s, a)).

To lay out the nodes, we sort the initial states from the demonstrated sequences by their frequency (D_s) in descending order. We then use a version of the depth-first search algorithm, starting from the initial state nodes, that traverses nodes by first iterating over edges in order from highest to lowest probabilities P(a|s) and P(s′|s, a). State nodes are never duplicated (i.e., there is exactly one node in the layout for each state in the model), whereas action nodes are duplicated for each state.
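The layout order described above can be summarized as a probability-ordered depth-first traversal. A compact sketch follows; it is our own simplification, with toy graph structures rather than the tool's actual data model.

```python
def layout_order(initial_freq, action_probs, transition_probs):
    """Return states in layout order: initial states sorted by D_s, then a DFS that
    follows edges from highest to lowest P(a|s) and P(s'|s, a). Each state appears once."""
    placed, order = set(), []

    def visit(s):
        if s in placed:
            return
        placed.add(s)
        order.append(s)
        for a, _ in sorted(action_probs.get(s, {}).items(), key=lambda kv: -kv[1]):
            for s_next, _ in sorted(transition_probs.get((s, a), {}).items(),
                                    key=lambda kv: -kv[1]):
                visit(s_next)

    for s, _ in sorted(initial_freq.items(), key=lambda kv: -kv[1]):
        visit(s)
    return order

# Toy example with hypothetical frequencies and probabilities.
print(layout_order({"Home": 0.9, "Gym": 0.1},
                   {"Home": {"travel": 0.7, "stay": 0.3}},
                   {("Home", "travel"): {"Work": 0.8, "School": 0.2}}))
```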
Figure 1. Study software user interface showing the main routine and one likely variation of non-aggressive drivers extracted using our approach: A) overview panel, B) the main display area containing subgraphs representing automatically extracted routine sequences of states (circles) and actions (squares), B.1) the user is hovering over a node to highlight an extracted routine (nodes highlighted in purple and dark gray edges), B.2) an action that starts a variation from the main routine, B.3) aggregate items representing possible extracted variations, and C) details panel showing information about visual elements on demand.

When the participant selects a data set and population, the tool provides the initial layout of routines extracted using our algorithm. This shows the most important information about the extracted routines. However, to further analyze the routine behavior, the participant must be able to explore the details of routine variations filtered out as aggregate nodes (Figure 1.B.3). For example, the participant might want to find which of the parents' routines include locations where they pickup and drop off their children.

Aggregated items contain valuable information about potential routine variations. For example, a child might go to her grandparents or their friend's house on Wednesdays after school; two variations on the same routine that occur with similar probability. To show all possible variations in an aggregate, the participant can click on the aggregate to expand its content. To mark an aggregated node as a variation of interest, the researcher can pin that aggregated node by clicking on it, thus removing it from all of its aggregate parents. When the researcher holds the Alt key and clicks on an aggregated node, this pins all the nodes on the most likely sequence of states and actions, determined by the probabilities of edges between the two. The pinned sequence, starting from the clicked node to the sequence end node, represents a routine variation. Pinned nodes are identified by a gray glow effect. All nodes that are part of the extracted routine are automatically pinned, and all other nodes are unpinned. Clicking on a pinned node unpins it, which returns the node into the aggregate.

To determine whether or not to pin the node, the researcher can review the features of individual and aggregate nodes by hovering over them. In addition to showing the details of individual nodes in the details panel (Figure 1.C), hovering over nodes shows relationships between different elements of the routine. Hovering over any node highlights the most likely routine path from an initial state to the collecting node that contains the hovered over node (Figure 1.B.1). This makes it easier to understand the routine states and actions in the area of interest.

Participants
To verify the routines, we recruited 8 researchers (5 male and 3 female) that have had experience with machine learning and data mining, or have worked in the domain of activity recognition and human routine modeling. The participants included Ph.D. students, Postdoctoral fellows, Research Scientists, and Professors working or visiting at our local University. All participants had experience with machine learning, 2 with data mining, 4 with activity recognition, and 1 worked specifically on modeling human routine behavior. The participants were compensated $25 for taking part in the study.

Method
When participants arrived at our lab, we briefed them on the purpose of the study and they signed a consent form. They then filled out a short questionnaire asking them about their occupation and experience with relevant research topics. We then demonstrated the visual tool to the participants and allowed them to practice using it for
approximately 20 minutes. Participants had to complete two tasks: 1) identify daily routine for a randomly chosen person and weekday from the daily routines data set, and 2) identify the differences between routines of non-aggressive and aggressive drivers from the driving data set. The first task took approximately 20 minutes, and the second task about an hour. Total study time was approximately between one and a half and two hours.

For the two tasks, we asked participants to identify routines and differences between the routines without presenting them with the ground truth. We did this to avoid biasing the participants towards trying to match the routines presented in the tool with what we might have told them is the correct answer. We then compared their answers with the ground truth from the previous literature to verify that the tool extracted the right and meaningful routine. Because the main purpose of the study was to validate the extracted routines and not evaluate the usability of the tool, participants could ask clarifying questions about the tool and the user interface at any point during the tasks.

Ground Truth
In the first task, we used the family daily commute routines model to understand the patterns of pickups and drop-offs. We compare the findings of our participants with the ground truth and discussion provided by Davidoff et al. [11]. In this data set, the ground truth represents self-reported daily commute routines for all family members in all families that took part in the study. Family members reported their location and the time they usually arrive and leave that location. Davidoff et al. [11] then manually annotated and confirmed the routines in the raw sensor data.

In the second task, we used the driving routines of aggressive and non-aggressive drivers that Hong et al. [18] identified in their data set. Hong et al. [18] used their intuition and expert knowledge of driving behaviors to separately compare the distributions of each sensor stream in the raw data to gain insight about aggressive driving styles. They found that aggressive drivers drive faster than non-aggressive drivers, and that they experience higher acceleration than non-aggressive drivers (i.e., they are more likely to press hard on the gas and brake pedals). Additionally, they found more variability in the behavior of aggressive drivers than non-aggressive drivers.

Results
Our results show that our approach extracts meaningful patterns of routine behavior. The participants were able to point out the patterns that form the high-level routines present in the ground truth for both tasks (RQPat). For the daily routines task, this means that they successfully listed the locations and times of the routines of people in the daily routine data set. However, two participants identified two separate patterns where the locations and times reported as part of the main daily routine did not correspond to the ground truth. After careful examination, we found that the participants wrongfully identified the actions as part of the routines because the events were part of infrequent routine variations that the people in the original study [11] did not report in the ground truth. The algorithm correctly assigned low probabilities to those actions, but the participants did not notice this. This is likely an issue with the visualization rather than the algorithm itself, and is something that can be addressed with more training with the tool.

In addition to simply pointing to patterns that represented correct routines, participants also generated some insights for themselves. For example, all six participants that were presented with a parent's daily routine that contained a child pickup or drop-off specifically pointed out this activity. Also, three participants that had a case where the parent drops off the child as part of his or her routine, but does not also pick the child up, correctly explained that the other parent was likely responsible for the pickup, without seeing the other parent's routines.

In the driving data set, participants pointed to the patterns that form the main routines (RQPat) and variations in driving behavior (RQVar) of both aggressive and non-aggressive drivers. All participants pointed to patterns that show that aggressive drivers are more likely to drive faster through intersections than non-aggressive drivers. Five participants showed the patterns of routine variations where aggressive drivers are likely to increase their throttle just before entering and leaving intersections. Participants pointed those out as the main differences between the two populations (RQComp). Two participants also pointed to the probabilities of routine variation patterns extracted using our approach that suggest that aggressive drivers are less consistent in their behavior than non-aggressive drivers.

Participants likely drew their conclusions from the model, but might also have a preconceived notion that acceleration and speed are correlated with aggressive driving. However, even if our participants had preconceived notions, they could verify and document them using our model.

Although evaluation of the visualization tool was not our main goal, 2 participants mentioned that such a tool would help them explore and understand human routines. One participant, who studies routine behaviors, pointed out that the organization of routine patterns and variations helped him clarify his understanding of what constitutes routines and how they manifest themselves in people's activity.

DISCUSSION
Through our evaluation, we showed that our models trained using MaxCausalEnt algorithm [45] can extract patterns of routine behavior from demonstrated behavior logs. This is a novel application of an algorithm that was designed to predict human behavior. In our evaluation of the algorithm we found that its ability to predict routines from the two example data sets was sufficient for modeling routines. The ability of MaxCausalEnt algorithm to generalize from small sample sizes enabled it to beat the baseline in the daily routine data set. The performance of the algorithm was comparable with the first-order Markov Model in the
aggressive driving data set. This is likely because the training data happened to match the testing data well. However, this is not safe to assume in the general case, and MaxCausalEnt's decision-theoretic guarantee that it will not overfit the observed data makes it a better choice for modeling routines than the first-order Markov Model.

Although not our main contribution, our visualization tool also helped us validate the ability of our approach to extract meaningful routines. The participants were able to explore context, actions, and the relationships between the two, to correctly identify the patterns of routines (RQPat) and their variations (RQVar) reported in the previous work. They pointed to these relationships to establish the differences between routines of the two driver populations (RQComp). Although we carefully designed our tool, our goal was not to formally evaluate its usability. We did not notice any usability issues that prevented participants from learning the elements of the model. We found that the participants knew how to progress towards understanding the routines.

Our results imply that researchers can use the patterns extracted using our approach to more quickly identify major aspects of routines by visually inspecting them, even after only a short amount of training, compared to previous work. For example, Davidoff et al. [11] performed tedious manual labeling of routine and routine variation patterns in the raw data based on feedback from the participants before presenting the patterns on a timeline. Hong et al. [18] used their intuition and expert knowledge of driving behaviors to separately compare the distributions of each sensor stream in the raw data to gain insight about aggressive driving styles. Our participants had to only explore the patterns extracted using our approach.

The knowledge that the researchers gain about routine behaviors through exploring our models can inform the design of interventions that help people improve their routines. For example, the knowledge that aggressive drivers are likely to use higher throttles can inform the design of in-car systems that monitor the throttle and make the driver more aware of this aggressive behavior through subtle ambient notifications. Another advantage of our approach is the underlying MDP-based model, which can be used to power smart agents that automatically classify current behaviors and prescribe new actions that improve existing routines.

CONCLUSION AND FUTURE WORK
We presented a novel approach for modeling human routine behavior from behavior logs that explicitly models one of the most important aspects of routines: the causal relationship between the contexts and actions the people perform in those contexts. We demonstrated that our approach can be used to extract meaningful routine patterns from two different types of human behaviors. However, future work should explore how our approach can be used to model routines in other domains (e.g., health, accessibility, software user interfaces).

We showed that routine models extracted using our approach can help researchers to identify routines and routine variations without having to manually search for those patterns in raw data. This is a step towards models that enable researchers to form scientific insight and generate knowledge about human behavior that will help them develop new theories about human routines. Having such knowledge can also inform the design of novel technologies that help people improve the quality of their routine behavior.

Our long-term goal is to help researchers in the process of generating knowledge that will support the design of technologies for routine behavior change. In this work, we presented a simple visual representation of the model extracted using our approach. However, in future work, we intend to develop a visual analytics [21] tool to offer a framework for visual data mining and understanding of human routine behaviors. We also intend to show how researchers and designers can use the insights they generate from our models to design technologies that help people improve their routines.

Machine understanding of human routine behaviors is also necessary to design and implement effective technologies that help people improve their routines. Our approach encodes patterns of routine behavior in a way that allows systems, such as smart agents, to classify, predict, and reason about human actions under the inherent uncertainty present in human behavior. For example, future smart agents can use our model to detect aggressive driving behavior and provide feedback to the driver based on the models of non-aggressive drivers. Such technologies can have a positive effect on society by making people healthier, safer, and more efficient in their routine tasks.

ACKNOWLEDGMENTS
This work was funded by NSERC (PGSD3-438429-2013) and NSF (CCF-1029549, IIS-1217929). The authors would like to thank Brian Ziebart for his valuable input regarding the MaxCausalEnt algorithm, Scott Davidoff and Jin-Hyuk Hong for their help in obtaining and understanding the two datasets used in our work, and Julian Ramos and Christine Bauer for discussions about human routine behavior.

REFERENCES
1. Pieter Abbeel and Andrew Y. Ng. 2004. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the twenty-first international conference on Machine learning (ICML '04). ACM, New York, NY, USA. http://doi.acm.org/10.1145/1015330.1015430
2. Wolfgang Aigner, Silvia Miksch, Bettina Thurnher, and Stefan Biffl. 2005. PlanningLines: Novel Glyphs for Representing Temporal Uncertainties and Their Evaluation. In Proceedings of the Ninth International Conference on Information Visualisation (IV '05). IEEE Computer Society, Washington, DC, USA, 457-463. http://dx.doi.org/10.1109/IV.2005.97
3. American Automobile Association. 2009. Aggressive driving: Research update. American Automobile Association Foundation for Traffic Safety.
4. John R. Anderson, Daniel Bothell, Michael D. Byrne, Scott Douglass, Christian Lebiere, and Yulin Qin. 2004. An integrated theory of the mind. Psychological review, 111(4), 1036-1060. http://dx.doi.org/10.1037/0033-295X.111.4.1036
5. Mitra Baratchi, Nirvana Meratnia, Paul J. M. Havinga, Andrew K. Skidmore, and Bert A. K. G. Toxopeus. 2014. A hierarchical hidden semi-Markov model for modeling mobility data. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp '14). ACM, New York, NY, USA, 401-412. http://doi.acm.org/10.1145/2632048.2636068
6. Richard Bellman. 1957. A Markovian decision process. Journal of Mathematics and Mechanics, 6, 679-684.
7. Oliver Brdiczka, Norman Makoto Su, and James Bo Begole. 2010. Temporal task footprinting: identifying routine tasks by their temporal patterns. In Proceedings of the 15th international conference on Intelligent user interfaces (IUI '10). ACM, New York, NY, USA, 281-284. http://doi.acm.org/10.1145/1719970.1720011
8. Andreas Bulling, Ulf Blanke, and Bernt Schiele. 2014. A tutorial on human activity recognition using body-worn inertial sensors. ACM Comput. Surv. 46, 3, Article 33 (January 2014), 33 pages. http://doi.acm.org/10.1145/2499621
9. Paolo Buono, Aleks Aris, Catherine Plaisant, Amir Khella, and Ben Shneiderman. 2005. Interactive pattern search in time series. In Electronic Imaging 2005, 175-186. http://dx.doi.org/10.1117/12.587537
10. Christopher Collins, Sheelagh Carpendale, and Gerald Penn. 2007. Visualization of uncertainty in lattices to support decision-making. In Proceedings of the 9th Joint Eurographics / IEEE VGTC conference on Visualization (EUROVIS'07), Ken Museth, Torsten Möller, and Anders Ynnerman (Eds.). Eurographics Association, Aire-la-Ville, Switzerland, 51-58. http://dx.doi.org/10.2312/VisSym/EuroVis07/051-058
11. Scott Davidoff, John Zimmerman, and Anind K. Dey. 2010. How routine learners can support family coordination. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '10). ACM, New York, NY, USA, 2461-2470. http://doi.acm.org/10.1145/1753326.1753699
12. Scott Davidoff, Brian D. Ziebart, John Zimmerman, and Anind K. Dey. 2011. Learning patterns of pick-ups and drop-offs to support busy family coordination. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '11). ACM, New York, NY, USA, 1175-1184. https://doi.acm.org/10.1145/1978942.1979119
13. Anind K. Dey. 2001. Understanding and Using Context. Personal Ubiquitous Comput. 5, 1 (January 2001), 4-7. http://dx.doi.org/10.1007/s007790170019
14. Nathan Eagle and Alex S. Pentland. 2009. Eigenbehaviors: identifying structure in routine. Behav. Ecol. Sociobiol., vol. 63, no. 7, 1057-1066. http://dx.doi.org/10.1007/s00265-009-0739-0
15. Katayoun Farrahi and Daniel Gatica-Perez. 2012. Extracting Mobile Behavioral Patterns with the Distant N-Gram Topic Model. In Proceedings of the 2012 16th Annual International Symposium on Wearable Computers (ISWC '12). IEEE Computer Society, Washington, DC, USA, 1-8. http://dx.doi.org/10.1109/ISWC.2012.20
16. Martha S. Feldman and Brian T. Pentland. 2003. Reconceptualizing organizational routines as a source of flexibility and change. Administrative Science Quarterly, 48 (1), 94-118. http://dx.doi.org/10.2307/3556620
17. Geoffrey M. Hodgson. 1997. The ubiquity of habits and rules. Cambridge Journal of Economics, 21(6), 663-684.
18. Jin-Hyuk Hong, Ben Margines, and Anind K. Dey. 2014. A smartphone-based sensing platform to model aggressive driving behaviors. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14). ACM, New York, NY, USA, 4047-4056. http://doi.acm.org/10.1145/2556288.2557321
19. Edwin T. Jaynes. 1957. Information theory and statistical mechanics. Physical review, 106(4), 620. http://dx.doi.org/10.1103/PhysRev.106.620
20. Jin Jing and Pedro Szekely. 2010. Interactive querying of temporal data using a comic strip metaphor. In 2010 IEEE Symposium on Visual Analytics Science and Technology (VAST), 163-170. http://dx.doi.org/10.1109/VAST.2010.5652890
21. Daniel A. Keim, Florian Mansmann, Jorn Schneidewind, and Hartmut Ziegler. 2006. Challenges in Visual Data Analysis. In Proceedings of the conference on Information Visualization (IV '06). IEEE Computer Society, Washington, DC, USA, 9-16. http://dx.doi.org/10.1109/IV.2006.31
22. John Krumm and Eric Horvitz. 2006. Predestination: inferring destinations from partial trajectories. In Proceedings of the 8th international conference on Ubiquitous Computing (UbiComp'06), Paul Dourish and Adrian Friday (Eds.). Springer-Verlag, Berlin, Heidelberg, 243-260. http://dx.doi.org/10.1007/11853565_15
23. Ian Li, Anind Dey, and Jodi Forlizzi. 2010. A stage-based model of personal informatics systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '10). ACM, New York, NY, USA, 557-566. http://doi.acm.org/10.1145/1753326.1753409
24. Nan Li, William Cushing, Subbarao Kambhampati, and Sungwook Yoon. 2014. Learning Probabilistic Hierarchical Task Networks as Probabilistic Context-Free Grammars to Capture User Preferences. ACM Trans. Intell. Syst. Technol. 5, 2, Article 29 (April 2014), 32 pages. http://doi.acm.org/10.1145/2589481
25. Brian Y. Lim and Anind K. Dey. 2011. Investigating intelligibility for uncertain context-aware applications. In Proceedings of the 13th international conference on Ubiquitous computing (UbiComp '11). ACM, New York, NY, USA, 415-424. http://doi.acm.org/10.1145/2030112.2030168
26. Magnus S. Magnusson. 2000. Discovering hidden time patterns in behavior: T-patterns and their detection. Behavior Research Methods, Instruments, & Computers, 32(1), 93-110. http://dx.doi.org/10.3758/BF03200792
27. Neal Martin. 2008. Habit: the 95% of behavior marketers ignore. First edition. FT Press: NJ.
28. Megan Monroe, Rongjian Lan, Hanseung Lee, Catherine Plaisant, and Ben Shneiderman. 2013. Temporal Event Sequence Simplification. IEEE Transactions on Visualization and Computer Graphics 19, 12 (December 2013), 2227-2236. http://dx.doi.org/10.1109/TVCG.2013.200
29. Andrew Y. Ng and Stuart J. Russell. 2000. Algorithms for Inverse Reinforcement Learning. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML '00), Pat Langley (Ed.). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 663-670.
30. Wanda J. Orlikowski and JoAnne Yates. 2002. It's About Time: Temporal Structuring in Organizations. Organization Science 13, 6 (November 2002), 684-700. http://dx.doi.org/10.1287/orsc.13.6.684.501
31. Catherine Plaisant, Brett Milash, Anne Rose, Seth Widoff, and Ben Shneiderman. 1996. LifeLines: visualizing personal histories. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '96), Michael J. Tauber (Ed.). ACM, New York, NY, USA, 221-227. http://doi.acm.org/10.1145/238386.238493
32. Martin L. Puterman. 2014. Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons, Inc., New York, NY, USA.
33. Jonathan C. Roberts. 2007. State of the Art: Coordinated & Multiple Views in Exploratory Visualization. In Proceedings of the Fifth International Conference on Coordinated and Multiple Views in Exploratory Visualization (CMV '07). IEEE Computer Society, Washington, DC, USA, 61-71. http://dx.doi.org/10.1109/CMV.2007.20
34. David L. Ronis, J. Frank Yates, and John P. Kirscht. 1989. Attitudes, decisions, and habits as determinants of repeated behavior. A.R. Pratkanis, S.J. Breckler, & A.G. Greenwald, Eds., Attitude structure and function, Lawrence Erlbaum: NJ, 213-239.
35. Adam Sadilek and John Krumm. 2012. Far Out: Predicting Long-Term Human Mobility. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence (AAAI '12), 814-820.
36. David Shinar and Richard Compton. 2004. Aggressive driving: an observational study of driver, vehicle, and situational variables. Accident Analysis & Prevention, Volume 36, Issue 3, May 2004, 429-437. http://dx.doi.org/10.1016/S0001-4575(03)00037-X
37. Richard Taylor. 1950. Purposeful and non-purposeful behavior: A rejoinder. Philosophy of Science, 17, 4. The University of Chicago Press on behalf of the Philosophy of Science Association, 327-332.
38. Martin Wattenberg. 2002. Arc diagrams: Visualizing structure in strings. In Proceedings of IEEE Symposium on Information Visualization (INFOVIS 2002). IEEE, 110-116. http://dx.doi.org/10.1109/INFVIS.2002.1173155
39. Marc Weber, Marc Alexa, and Wolfgang Müller. 2001. Visualizing time-series on spirals. In Proceedings of IEEE Symposium on Information Visualization (INFOVIS 2001). IEEE, 7-14. http://doi.ieeecomputersociety.org/10.1109/INFVIS.2001.963273
40. Krist Wongsuphasawat, John Alexis Guerra Gómez, Catherine Plaisant, Taowei David Wang, Meirav Taieb-Maimon, and Ben Shneiderman. 2011. LifeFlow: visualizing an overview of event sequences. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '11). ACM, New York, NY, USA, 1747-1756. http://doi.acm.org/10.1145/1978942.1979196
41. Jian Zhao, Fanny Chevalier, and Ravin Balakrishnan. 2011. KronoMiner: using multi-foci navigation for the visual exploration of time-series data. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '11). ACM, New York, NY, USA, 1737-1746. http://doi.acm.org/10.1145/1978942.1979195
42. Jian Zhao, Fanny Chevalier, Emmanuel Pietriga, and Ravin Balakrishnan. 2011. Exploratory analysis of time-series with chronolenses. IEEE Transactions on Visualization and Computer Graphics, 17, 12. IEEE, 2422-2431. http://dx.doi.org/10.1109/TVCG.2011.195
43. Brian D. Ziebart, Andrew L. Maas, Anind K. Dey, and J. Andrew Bagnell. 2008. Navigate like a cabbie: probabilistic reasoning from observed context-aware behavior. In Proceedings of the 10th international conference on Ubiquitous computing (UbiComp '08). ACM, New York, NY, USA, 322-331. http://doi.acm.org/10.1145/1409635.1409678
44. Brian D. Ziebart, Nathan Ratliff, Garratt Gallagher, Christoph Mertz, Kevin Peterson, J. Andrew Bagnell, Martial Hebert, Anind K. Dey, and Siddhartha Srinivasa. 2009. Planning-based prediction for pedestrians. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2009), 3931-3936. http://dx.doi.org/10.1109/IROS.2009.5354147
45. Brian D. Ziebart, J. Andrew Bagnell, and Anind K. Dey. 2010. Modeling interaction via the principle of maximum causal entropy. In Proceedings of the 27th international conference on Machine learning (ICML '10). ACM, New York, NY, USA, 1247-1254.
46. Brian D. Ziebart. 2010. Modeling purposeful adaptive behavior with the principle of maximum causal entropy. Doctoral dissertation, Carnegie Mellon University.