Poppy Crum

San Francisco, California, United States
14K followers 500+ connections

About

• Dedicated to developing immersive technologies that leverage human physiology and…

Experience

  • Giant Step Capital

  • (additional roles; details not captured; locations: San Francisco Bay Area; Geneva, Switzerland; Mountain View, California, United States; San Francisco; San Francisco Bay Area)

Education

Patents

  • Augmented hearing system

    US 11523245

    Some implementations may involve receiving, via an interface system, personnel location data indicating a location of at least one person and receiving, from an orientation system, headset orientation data corresponding with the orientation of a headset. First environmental element location data, indicating a location of at least a first environmental element, may be determined. Based at least in part on the headset orientation data, the personnel location data and the first environmental element location data, headset coordinate locations of at least one person and at least the first environmental element in a headset coordinate system corresponding with the orientation of the headset may be determined. An apparatus may be caused to provide spatialization indications of the headset coordinate locations. Providing the spatialization indications may involve controlling a speaker system to provide environmental element sonification corresponding with at least the first environmental element location data.
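
The coordinate step this abstract describes, mapping world-frame locations of people and environmental elements into a headset-centric frame, can be sketched in two dimensions. This is a minimal illustration, not the patented method; the function name and the yaw-only orientation are assumptions.

```python
import math

def to_headset_frame(world_xy, headset_xy, headset_yaw_rad):
    """Express a world-frame point in a headset-centric frame.

    The headset frame has its origin at the headset and its x-axis
    along the wearer's facing direction (yaw, in radians). A full
    system would use 3-D orientation from the orientation system.
    """
    dx = world_xy[0] - headset_xy[0]
    dy = world_xy[1] - headset_xy[1]
    cos_y, sin_y = math.cos(headset_yaw_rad), math.sin(headset_yaw_rad)
    # Inverse rotation: the world-frame offset expressed in headset axes.
    return (cos_y * dx + sin_y * dy, -sin_y * dx + cos_y * dy)
```

A person at world position (0, 1) appears directly ahead of a wearer at the origin facing along +y (yaw of 90 degrees); the headset coordinates could then drive spatialized sonification.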

  • Audio encoding/decoding with transform parameters

    US20220366919A1

    Encoding/decoding techniques where multiple transform parameter sets are encoded together with a rendered playback presentation of an input audio content. The multiple transform parameters are used on the decoder side to transform the playback presentation to provide a personalized binaural playback presentation optimized for an individual listener with respect to their hearing profile. This may be achieved by selection or combination of the data present in the metadata streams.
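
The decoder-side selection step could look like the following sketch: given several transform parameter sets carried in metadata, pick the one whose reference hearing profile best matches the listener. The dictionary fields and the squared-distance match are illustrative assumptions, not the bitstream format.

```python
def select_transform(metadata_sets, hearing_profile):
    """Pick the encoded transform parameter set whose reference hearing
    profile is closest (least-squares) to the listener's profile.

    `metadata_sets` is a list of dicts with a hypothetical "profile"
    field; a real decoder could also combine several sets.
    """
    return min(
        metadata_sets,
        key=lambda m: sum((a - b) ** 2
                          for a, b in zip(m["profile"], hearing_profile)),
    )
```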

  • Audio processing apparatus and audio processing method

    US 9558744

    An audio processing apparatus and an audio processing method are described. In one embodiment, the audio processing apparatus includes an audio masker separator for separating from a first audio signal audio material comprising a sound other than stationary noise and semantically meaningful utterance, as an audio masker candidate. The apparatus also includes a first context analyzer for obtaining statistics regarding contextual information of detected audio masker candidates, and a masker library builder for building a masker library or updating an existing masker library by adding, based on the statistics, at least one audio masker candidate as an audio masker into the masker library, wherein audio maskers in the masker library are inserted at a target position in a second audio signal to conceal defects in the second audio signal.
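
The final insertion step, overlaying a library masker on a defective region of the second signal, can be sketched as below. This is a crude stand-in for the patent's context-matched insertion; the function name and fixed overlay are assumptions.

```python
import numpy as np

def conceal_defect(signal, defect_start, defect_len, masker):
    """Overlay an audio masker from the library onto a defective region
    of a signal to perceptually conceal the defect.

    A real implementation would choose the masker by contextual
    statistics and cross-fade it in, rather than adding it directly.
    """
    out = signal.copy()
    seg = masker[:defect_len]
    out[defect_start:defect_start + len(seg)] += seg
    return out
```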

  • Augmented hearing system

    US 10419869

    Some implementations may involve receiving, via an interface system, personnel location data indicating a location of at least one person and receiving, from an orientation system, headset orientation data corresponding with the orientation of a headset. First environmental element location data, indicating a location of at least a first environmental element, may be determined. Based at least in part on the headset orientation data, the personnel location data and the first environmental element location data, headset coordinate locations of at least one person and at least the first environmental element in a headset coordinate system corresponding with the orientation of the headset may be determined. An apparatus may be caused to provide spatialization indications of the headset coordinate locations. Providing the spatialization indications may involve controlling a speaker system to provide environmental element sonification corresponding with at least the first environmental element location data.

  • Creative intent scalability via physiological monitoring

    US 11477525

    Creative intent input describing emotion expectations and narrative information relating to media content is received. Expected physiologically observable states relating to the media content are generated based on the creative intent input. An audiovisual content signal with the media content and media metadata comprising the physiologically observable states is provided to a playback apparatus. The audiovisual content signal causes the playback device to use physiological monitoring signals to determine, with respect to a viewer, assessed physiologically observable states relating to the media content and generate, based on the expected physiologically observable states and the assessed physiologically observable states, modified media content to be rendered to the viewer.
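
The playback-side comparison the abstract describes, checking assessed physiological states against the creator's expected states to decide what to modify, might be sketched as follows. Representing states as scalar values in [0, 1] and the tolerance threshold are illustrative simplifications.

```python
def states_needing_adaptation(expected, assessed, tolerance=0.2):
    """Compare creator-specified expected physiologically observable
    states with states assessed from physiological monitoring, and
    return the divergences large enough to warrant modifying the
    rendered media content.

    Both arguments map hypothetical state names (e.g. "arousal") to
    scalar levels; a real system would use richer state models.
    """
    return {name: assessed[name] - target
            for name, target in expected.items()
            if name in assessed and abs(assessed[name] - target) > tolerance}
```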

  • Electrooculogram measurement and eye-tracking

    20230071021

    A system for determining a direction of gaze of a user, comprising a set of electrodes arranged on earpieces, each electrode comprising a patch of compressible and electrically conducting foam material. The system further includes circuitry connected to the electrodes and configured to receive a set of voltage signals from a set of electrodes arranged on an audio endpoint worn by a user, multiplex said voltage signals into an input signal, remove a predicted central voltage from said input signal, to provide a detrended signal, and determine said gaze direction based on said detrended signal. Such conducting foam materials provide satisfactory bio-sensing performance for a wide range of compression levels and over time. In the case of on-ear headphones, the foam electrodes may be integrated in the cuffs with little or no effect on the comfort level.
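
The detrending stage, removing a predicted central voltage from the multiplexed electrode signal, can be sketched with a moving-average baseline. The patent leaves the prediction method open; the moving average and window length here are assumptions.

```python
import numpy as np

def detrend_eog(voltages, win=101):
    """Remove a slowly varying baseline ("central voltage") from an
    EOG voltage trace, leaving the gaze-related component.

    The baseline is predicted with a simple moving average; samples
    near the edges are biased by zero padding.
    """
    kernel = np.ones(win) / win
    baseline = np.convolve(voltages, kernel, mode="same")
    return voltages - baseline
```

A constant (drift-only) input detrends to zero away from the edges, while fast gaze-driven deflections pass through largely unchanged.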

  • Hybrid near/far-field speaker virtualization

    EP 20829722.6

  • Method, systems and apparatus for hybrid near/far virtualization for enhanced consumer surround sound

    20220345845

    Embodiments are disclosed for hybrid near/far-field speaker virtualization. In an embodiment, a method comprises: receiving a source signal including channel-based audio or audio objects; generating near-field gain(s) and far-field gain(s) based on the source signal and a blending mode; generating a far-field signal based, at least in part, on the source signal and the far-field gain(s); rendering, using a speaker virtualizer, the far-field signal for playback of far-field acoustic audio through far-field speakers into an audio reproduction environment; generating a near-field signal based at least in part on the source signal and the near-field gain(s); prior to providing the far-field signal to the far-field speakers, sending the near-field signal to a near-field playback device or an intermediate device coupled to the near-field playback device; providing the far-field signal to the far-field speakers; and providing the near-field signal to the near-field speakers to synchronously overlay the far-field acoustic audio.
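
The gain-generation step can be sketched as splitting a source signal into near- and far-field components under a blending mode. The equal-power gain law below is an assumption for illustration; the actual gain derivation in the patent may differ.

```python
import numpy as np

def split_near_far(source, blend):
    """Split a source signal into near-field and far-field components.

    `blend` in [0, 1] selects the mix (0 = all far-field, 1 = all
    near-field). An equal-power sine/cosine law keeps the summed
    energy of the two components roughly constant.
    """
    near_gain = np.sin(blend * np.pi / 2)
    far_gain = np.cos(blend * np.pi / 2)
    return near_gain * source, far_gain * source
```

The far-field component would then go through the speaker virtualizer while the near-field component is sent ahead to the near-field device for synchronous overlay.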

  • Perception based multimedia processing

    US 10339959

    Example embodiments disclosed herein relate to perception based multimedia processing. There is provided a method for processing multimedia data, the method includes automatically determining user perception on a segment of the multimedia data based on a plurality of clusters, the plurality of clusters obtained in association with predefined user perceptions and processing the segment of the multimedia data at least in part based on determined user perception on the segment. Corresponding system and computer program products are disclosed as well.
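
The cluster-assignment step, mapping a segment of multimedia data to one of the clusters associated with predefined user perceptions, might be sketched as a nearest-centroid lookup. The feature representation and Euclidean distance are assumptions for illustration.

```python
import numpy as np

def assign_perception(segment_features, cluster_centers):
    """Map a multimedia segment's feature vector to the index of the
    nearest cluster, where each cluster is pre-associated with a
    predefined user perception.
    """
    distances = np.linalg.norm(cluster_centers - segment_features, axis=1)
    return int(np.argmin(distances))
```

The returned cluster index would then select how that segment is processed.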

  • Perception based multimedia processing

    US 10748555

    Example embodiments disclosed herein relate to perception based multimedia processing. There is provided a method for processing multimedia data, the method includes automatically determining user perception on a segment of the multimedia data based on a plurality of clusters, the plurality of clusters obtained in association with predefined user perceptions and processing the segment of the multimedia data at least in part based on determined user perception on the segment. Corresponding system and computer program products are disclosed as well.

  • Personalized HRTFs via optical capture

    US 11778403

    An apparatus and method of generating personalized HRTFs. The system is prepared by calculating a model for HRTFs described as the relationship between a finite example set of input data, namely anthropometric measures and demographic information for a set of individuals, and a corresponding set of output data, namely HRTFs numerically simulated using a high-resolution database of 3D scans of the same set of individuals. At the time of use, the system queries the user for their demographic information, and then from a series of images of the user, the system detects and measures various anthropometric characteristics. The system then applies the prepared model to the anthropometric and demographic data as part of generating a personalized HRTF. In this manner, the personalized HRTF can be generated with more convenience than by performing a high-resolution scan or an acoustic measurement of the user, and with less computational complexity than by numerically simulating their HRTF.
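
The model-preparation and prediction steps can be sketched with ridge regression from anthropometric/demographic feature vectors to flattened HRTF filters. The patent does not fix the model class; the linear map, function names, and regularization here are assumptions.

```python
import numpy as np

def fit_hrtf_model(features, hrtfs, reg=1e-6):
    """Fit a linear map from per-individual feature vectors (rows of
    `features`) to numerically simulated, flattened HRTFs (rows of
    `hrtfs`) via ridge regression."""
    X = np.hstack([features, np.ones((features.shape[0], 1))])  # bias term
    return np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ hrtfs)

def predict_hrtf(weights, feature_vec):
    """Generate a personalized HRTF from one user's measured features."""
    return np.append(feature_vec, 1.0) @ weights
```

At use time, the features would come from demographic queries and measurements detected in images of the user, avoiding a full 3-D scan or acoustic measurement.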

  • Screen interaction via electrooculogram measurements

    WO2023004063A1

    A method comprising acquiring a set of voltage signals from a set of electrodes arranged in proximity to the ears of a user, based on the set of voltage signals, determining an EOG gaze vector in ego-centric coordinates, determining a head pose of the user in display coordinates, using a sensor device worn by the user, combining the EOG gaze vector and head pose to obtain a gaze vector in display coordinates, and determining a gaze point by calculating an intersection of the gaze vector and an imaging surface having a known position in display coordinates.
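
The final step, intersecting the display-coordinate gaze vector with the imaging surface, can be sketched as a ray/plane intersection. Treating the surface as an infinite plane and the argument conventions are assumptions for illustration.

```python
import numpy as np

def gaze_point_on_screen(eye_pos, gaze_dir, plane_point, plane_normal):
    """Intersect a display-coordinate gaze ray with a planar screen.

    Returns the 3-D intersection point, or None when the ray is
    parallel to the plane or points away from it.
    """
    denom = np.dot(gaze_dir, plane_normal)
    if abs(denom) < 1e-9:
        return None  # ray parallel to the screen plane
    t = np.dot(np.asarray(plane_point) - eye_pos, plane_normal) / denom
    if t < 0:
        return None  # screen is behind the gaze direction
    return np.asarray(eye_pos) + t * np.asarray(gaze_dir)
```

Here `eye_pos` and `gaze_dir` would come from combining the EOG gaze vector with the head pose from the worn sensor device.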

  • Speaker diarization supporting episodical content

    WO2022232284A1

    Embodiments are disclosed for speaker diarization supporting episodical content. In an embodiment, a method comprises: receiving media data including one or more utterances; dividing the media data into a plurality of blocks; identifying segments of each block of the plurality of blocks associated with a single speaker; extracting embeddings for the identified segments in accordance with a machine learning model, wherein extracting embeddings for identified segments further comprises statistically combining extracted embeddings for identified segments that correspond to a respective continuous utterance associated with a single speaker; clustering the embeddings for the identified segments into clusters; and assigning a speaker label to each of the embeddings for the identified segments in accordance with a result of the clustering. In some embodiments, a voiceprint is used to identify a speaker and the speaker identity for a speaker label.
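
The clustering-and-labeling stage can be sketched with a greedy cosine-similarity grouping of per-segment speaker embeddings. Production diarization systems typically use spectral or agglomerative clustering; the greedy scheme and threshold here are simplifying assumptions.

```python
import numpy as np

def diarize(embeddings, threshold=0.8):
    """Assign a speaker label to each segment embedding by greedy
    cosine-similarity clustering.

    Each new embedding joins the most similar existing cluster if the
    similarity clears `threshold`, else it starts a new speaker.
    Cluster centroids are fixed at the first member for simplicity.
    """
    centroids, labels = [], []
    for emb in embeddings:
        emb = emb / np.linalg.norm(emb)
        sims = [float(c @ emb) for c in centroids]
        if sims and max(sims) >= threshold:
            labels.append(int(np.argmax(sims)))
        else:
            centroids.append(emb)
            labels.append(len(centroids) - 1)
    return labels
```

As in the abstract, a voiceprint lookup could then replace the anonymous cluster indices with speaker identities.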

Honors & Awards

  • Hedy Lamarr Award for Technology Innovation

    Digital Entertainment Group

  • TV Technology’s 20 Leaders to Watch List

    TV Technology

  • Distinguished Technology Leadership Award

    Advanced Imaging Society

    Recognized as a female leader making significant contributions to advancing the entertainment technology business. http://variety.com/2018/biz/news/the-advanced-imaging-society-2018-technology-awards-1202652774/

  • Technology and Standards Achievement Award

    Consumer Technology Association

    For work towards the introduction of affordable, over-the-counter hearing-aid/enhancement devices

  • Power Women List

    Billboard Magazine

    Recognized in Billboard Magazine's list of most influential female executives

  • Fellow

    Audio Engineering Society

    For work in neuroscience and psychoacoustics, allowing a better understanding of how we listen to audio.

Organizations

  • International Telecommunication Union

    Chair, Artificial Intelligence and Augmented Systems - ITU-R WP 6C

    - Present
  • International Telecommunication Union

    Vice-Chair Study Group 6 - Program Production and Quality

    - Present
  • Consumer Technology Association

    Chair CTA-2078 Inclusive Model Simulations of Biometric Conditions

    -
  • Stanford Research Institute Technical Council

    Member

    -
  • Consumer Technology Association

    Co-Chair, ANSI/CTA-2051 Standard - Personal Sound Performance Criteria

    -
  • DARPA

    Member - Defense Science Research Council
