Keith Oh

Singapore
2K followers · 500+ connections

About

I'm curious about how technology changes the way we work, play and engage with one…

Activity


Experience

  • Carousell

    Singapore


Education

Volunteer Experience

Publications

  • Risk assessment and horizontal scanning scenario builder: developing a systematic approach to assess risk of importation of avian influenza (H5N1)

    Ministry of Health, Singapore

    The risk of importation of human H5N1 infection from an infected area is determined by a few key decision makers based on their expert knowledge of the disease characteristics and the prevailing global situation. This may not be ideal as it overly relies on the subjective knowledge of the various experts. The risk assessment and horizontal scanning (RAHS) scenario builder seeks to translate this decision-making process into a systematic approach to assessing importation risk by identifying risk factors that will have an impact on the importation of human H5N1 infections.

    Other authors
    • Yew Jern Low
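
    The sketch below is a purely hypothetical illustration of the general idea of a risk-factor-based importation score; the factor names, weights, and 0-to-1 scoring scale are assumptions for the example and are not taken from the RAHS scenario builder.

    # Hypothetical sketch: combining scored risk factors into a single
    # weighted importation-risk index. Factor names and weights are
    # invented for illustration, not drawn from the publication.
    RISK_FACTORS = {
        "outbreak_in_poultry": 0.30,
        "confirmed_human_cases": 0.35,
        "travel_volume_from_area": 0.20,
        "gaps_in_border_screening": 0.15,
    }

    def importation_risk(scores: dict[str, float]) -> float:
        """Combine per-factor scores (each 0.0-1.0) into a weighted risk index."""
        return sum(RISK_FACTORS[name] * scores.get(name, 0.0) for name in RISK_FACTORS)

    example = {
        "outbreak_in_poultry": 0.8,
        "confirmed_human_cases": 0.4,
        "travel_volume_from_area": 0.6,
        "gaps_in_border_screening": 0.2,
    }
    print(f"Importation risk index: {importation_risk(example):.2f}")  # prints 0.53
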
  • Predicting visual focus of attention from intention in remote collaborative tasks

    IEEE

    While shared visual space plays a very important role in remote collaboration on physical tasks, it is challenging and expensive to track users' focus of attention (FOA) during these tasks. In this paper, we propose to identify a user's FOA from his/her intention based on task properties, people's actions in the workspace, and conversational content. We employ a conditional Markov model to characterize a subject's FOA. We demonstrate the feasibility of the proposed method using a collaborative laboratory task in which one partner (the helper) instructs another (the worker) on how to assemble online puzzles. We model a helper's FOA using task properties, workers' actions, and conversational content. The accuracy of the model ranged from 65.40% for puzzles with easy-to-name pieces to 74.25% for puzzles with more difficult-to-name pieces. The proposed model can be used to predict a user's FOA in a remote collaborative task without tracking the user's eye gaze.

    Other authors
    • Jiazhi Ou
    • Susan R. Fussell
    • Tal Blum
    • Jie Yang
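
    As a rough, hypothetical illustration of the kind of model described above, the sketch below predicts a discrete focus-of-attention state from the previous state plus a feature of the current utterance or action, using simple maximum-likelihood counts. The state names, feature labels, and training data are invented; the paper's actual conditional Markov model is richer than this.

    from collections import defaultdict

    STATES = ["workspace", "pieces_bay", "instructions"]  # assumed FOA states

    class ConditionalMarkovFOA:
        def __init__(self):
            # counts[(prev_state, feature)][state]: how often `state` followed
            self.counts = defaultdict(lambda: defaultdict(int))

        def fit(self, sequences):
            """sequences: list of trials, each a list of (feature, state) pairs."""
            for seq in sequences:
                prev = None
                for feature, state in seq:
                    self.counts[(prev, feature)][state] += 1
                    prev = state

        def predict(self, prev_state, feature):
            """Return the state most often observed after (prev_state, feature)."""
            options = self.counts.get((prev_state, feature))
            if not options:
                return STATES[0]  # fall back to a default state
            return max(options, key=options.get)

    model = ConditionalMarkovFOA()
    model.fit([[("describe_piece", "pieces_bay"), ("describe_position", "workspace")]])
    print(model.predict("pieces_bay", "describe_position"))  # prints "workspace"
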
  • An experiment in machine-augmented sensemaking in intelligence analysis

    ICCRTS

    Singapore has developed a prototype Risk Assessment and Horizon Scanning (RAHS) system in collaboration with The Arlington Institute and Cognitive Edge, with the aim to provide analysts with an extensible suite of tools for data exploration and data exploitation based on a service-oriented architecture. The RAHS system facilitates the extraction of open source information into repositories, which are then available to analysts for search and retrieval by means of various tools to augment the human users' sensemaking process. This paper describes how the RAHS system may be used in the analysis of a massive amount of data, drawing examples from open source material available on the Internet. In addition, this paper also presents a limited experiment in which analysts were presented with the task of exploring a set of documents related to a fictitious terrorist organization in order to identify the roles and responsibilities of the various people linked to the terrorist organization. A simple comparison regarding the workflow, efficiency, and effectiveness of a RAHS analyst is made to that of an analyst equipped with a traditional search engine. Finally, the lessons drawn from this experiment are presented that point to the possible way ahead for future versions of the RAHS system.

    Other authors
    • Gwenda Fong
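
    As a toy, purely hypothetical illustration of the repository-plus-retrieval step that such analyst tools build on (not the RAHS system itself), the sketch below indexes extracted documents and returns those matching every query term.

    from collections import defaultdict

    class Repository:
        def __init__(self):
            self.docs = {}                 # doc_id -> full text
            self.index = defaultdict(set)  # term -> ids of docs containing it

        def add(self, doc_id: str, text: str) -> None:
            self.docs[doc_id] = text
            for term in text.lower().split():
                self.index[term].add(doc_id)

        def search(self, query: str) -> list[str]:
            """Return ids of documents containing every term in the query."""
            terms = query.lower().split()
            if not terms:
                return []
            hits = set.intersection(*(self.index.get(t, set()) for t in terms))
            return sorted(hits)

    repo = Repository()
    repo.add("doc1", "open source report on organization roles and responsibilities")
    repo.add("doc2", "unrelated news item")
    print(repo.search("organization roles"))  # prints ['doc1']
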
  • Sharing a single expert among multiple partners

    ACM

    Expertise to assist people on complex tasks is often in short supply. One solution to this problem is to design systems that allow remote experts to help multiple people simultaneously. As a first step towards building such a system, we studied experts' attention and communication as they assisted two novices at the same time in a co-located setting. We compared simultaneous instruction when the novices are being instructed to do the same task or different tasks. Using machine learning, we attempted to identify speech markers of upcoming attention shifts that could serve as input to a remote assistance system.

  • Using linguistic features to measure presence in computer-mediated communication

    ACM

    We propose a method of measuring people's sense of presence in computer-mediated communication (CMC) systems based on linguistic features of their dialogues. We create variations in presence by asking participants to collaborate on physical tasks in four CMC conditions. We then correlate self-reported feelings of presence with the use of specific linguistic features. Regression analyses show that 30% of the variance in self-reported presence can be accounted for by a small number of task-independent linguistic features. Even better prediction can be obtained when self-reported coordination is added to the regression equation. We conclude that linguistic measures of presence have value for studies of CMC.

    Other authors
    • Adam D. I. Kramer
    • Susan R. Fussell
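
    The sketch below is a toy ordinary-least-squares example in the spirit of the regression analysis mentioned above; the feature names, counts, and ratings are invented for illustration and are not the study's data or feature set.

    import numpy as np

    # Rows = dialogues; columns = hypothetical linguistic feature counts
    # (e.g. first-person pronouns, deictic references, acknowledgements).
    features = np.array([
        [12, 3, 5],
        [20, 7, 9],
        [ 8, 1, 2],
        [15, 5, 6],
        [10, 2, 4],
        [18, 6, 8],
    ], dtype=float)
    presence = np.array([3.1, 4.5, 2.2, 3.8, 2.9, 4.2])  # self-reported ratings

    # Add an intercept column and fit ordinary least squares.
    X = np.column_stack([np.ones(len(features)), features])
    coef, residuals, rank, _ = np.linalg.lstsq(X, presence, rcond=None)

    predicted = X @ coef
    ss_res = np.sum((presence - predicted) ** 2)
    ss_tot = np.sum((presence - presence.mean()) ** 2)
    print("Variance explained (R^2):", 1 - ss_res / ss_tot)
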
  • Analyzing and predicting focus of attention in remote collaborative tasks

    ACM

    To overcome the limitations of current technologies for remote collaboration, we propose a system that changes a video feed based on task properties, people's actions, and message properties. First, we examined how participants manage different visual resources in a laboratory experiment using a collaborative task in which one partner (the helper) instructs another (the worker) how to assemble online puzzles. We analyzed helpers' eye gaze as a function of the aforementioned parameters. Helpers gazed at the set of alternative pieces more frequently when it was harder for workers to differentiate these pieces, and less frequently over repeated trials. The results further suggest that a helper's desired focus of attention can be predicted based on task properties, his/her partner's actions, and message properties. We propose a conditional Markov model classifier to explore the feasibility of predicting gaze based on these properties. The accuracy of the model ranged from 65.40% for puzzles with easy-to-name pieces to 74.25% for puzzles with more difficult-to-name pieces. The results suggest that we can use our model to automatically manipulate video feeds to show what helpers want to see when they want to see it.

    Other authors
    • Jiazhi Ou
    • Susan R. Fussell
    • Tal Blum
    • Jie Yang
  • Effects of task properties, partner actions, and message content on eye gaze patterns in a collaborative task

    ACM

    Helpers providing guidance for collaborative physical tasks shift their gaze between the workspace, supply area, and instructions. Understanding when and why helpers gaze at each area is important both for a theoretical understanding of collaboration on physical tasks and for the design of automated video systems for remote collaboration. In a laboratory experiment using a collaborative puzzle task, we recorded helpers' gaze while manipulating task complexity and piece differentiability. Helpers gazed toward the pieces bay more frequently when pieces were difficult to differentiate and less frequently over repeated trials. Preliminary analyses of message content show that helpers tend to look at the pieces bay when describing the next piece and at the workspace when describing where it goes. The results are consistent with a grounding model of communication, in which helpers seek visual evidence of understanding unless they are confident that they have been understood. The results also suggest the feasibility of building automated video systems based on remote helpers' shifting visual requirements.

    Other authors
    • Jiazhi Ou
    • Jie Yang
    • Susan R. Fussell
  • ShareComp: sharing for companionship

    ACM

    We describe our process, findings, and resulting solution for the CHI2005 Student Competition. The design problem was the issue of well-being of persons above the age of 65 years. Loss of a companion can be a primary cause of depression, and decline of social well-being and health for the elderly, so the challenge is to design for "artificial" companionship to support the elderly. We met this challenge by employing extensive research in geriatric psychology, empirical and analytical human-computer interaction methods, interaction design techniques, and technologies currently available or in development. Our solution is a wearable and stationary device that will allow the user to 1) record events through pictures and audio for sharing, 2) gather artifacts of their past such as photographs into a multimedia slideshow format for sharing with others, and 3) allow friends and family to be aware of the user's location for safety purposes.


Projects

  • Sensis

    We helped Sensis develop an internal innovation and development capability. In three phases over the course of a year, we worked first to develop concepts within our own team, then to co-create with their internal teams, and ultimately to work across multiple internal teams, in effect building an accelerator with the company.


Languages

  • English

    Native or bilingual proficiency

  • Chinese

    Native or bilingual proficiency

Recommendations received
