Michael Hornacek

Michael Hornacek

Vienna, Austria
704 followers · 500+ connections

About

Senior expert in computer vision, remote sensing, and GIS

Experience

  • Capturing Reality

    Bratislava, Slovakia

  • -

    Vienna, Austria

  • -

    Vienna + Graz, Austria

  • -

    Vienna + Graz, Austria

  • -

    Vienna, Austria

  • -

    Vienna, Austria

  • -

    Montreal, Canada

Education

  • Technische Universität Wien

    Funded by Microsoft Research Cambridge through its European Ph.D. scholarship programme

Publications

  • A Spatial AR System for Wide-area Axis-aligned Metric Augmentation of Planar Scenes

    CIRP Journal of Manufacturing Science and Technology

    Augmented reality (AR) promises to enable use cases in industrial settings that include the embedding of assembly instructions directly into the scene, potentially reducing or altogether obviating the need for workers to refer to such instructions in paper form or on a statically situated screen. Spatial AR, in turn, is a form of AR whereby the augmentation of the scene is carried out using a projector, with the advantage of rendering the augmentation visible to all onlookers simultaneously without calling for any to hold a handheld device such as a tablet or for each to wear some form of head-mounted display. In carrying out spatial AR, however, care must be taken to warp the image to be projected appropriately such that, when projected, it appears free of projective distortions to the viewer. For planar scene geometry (such as a floor, wall, or table), a manual process called keystone correction can be used to carry out an appropriate corrective image warp, though the process can be cumbersome. Another drawback of conventional spatial AR relying on only a single projector is that it is capable of augmenting only the portion of the scene within the projector's static field of view, thereby hindering its applicability to use cases calling for augmentation of wide areas such as a factory floorspace.

    We propose a spatial AR system for wide-area metric augmentation of planar scene surfaces that produces the effect of keystone correction analytically as a function of the relative geometry of the projector and scene plane, using a projector equipped with a steerable mirror to direct the projection across varying target locations and a camera facing the scene plane in support of calibrating the system. Our system renders the placement of augmentations in the scene more intuitive than manual keystone correction in two ways. First, (i) the horizontal and vertical axes of the desired augmentations are set in accordance with the horizontal and vertical image axes of the [...]

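The corrective warp described above can be expressed as a planar homography between the projector image and the scene plane. As a rough illustration of that underlying idea only (not the paper's system, which derives the warp analytically from calibrated projector and plane geometry), the following sketch estimates a homography from four hypothetical point correspondences via the direct linear transform (DLT); all coordinates are made up for the example.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (4+ point pairs) via DLT."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right null vector of A (smallest singular value).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Apply a homography to a 2D point, dehomogenizing the result."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Hypothetical example: corners of the projector image vs. where they land on
# the scene plane; the inverse of H would pre-distort the image so that the
# projection appears rectified to the viewer.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0, 0), (2, 0.1), (2.1, 1.2), (-0.1, 1.0)]
H = homography_dlt(src, dst)
```

With four exact correspondences the homography is recovered exactly (up to floating-point error), so `apply_h(H, src[i])` reproduces `dst[i]`.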
  • Generalized Sparse Convolutional Neural Networks for Semantic Segmentation of Point Clouds Derived from Tri-Stereo Satellite Imagery

    Remote Sensing

    We studied the applicability of point clouds derived from tri-stereo satellite imagery to semantic segmentation with generalized sparse convolutional neural networks, by the example of an Austrian study area. We examined, in particular, whether the distorted geometric information, in addition to color, influences the performance of segmenting clutter, roads, buildings, trees, and vehicles. In this regard, we trained a fully convolutional neural network that uses generalized sparse convolution once solely on 3D geometric information (i.e., a 3D point cloud derived by dense image matching), and twice on 3D geometric as well as color information. In the first experiment, we did not use class weights, whereas in the second we did. We compared the results with a fully convolutional neural network that was trained on a 2D orthophoto, and a decision tree that was once trained on hand-crafted 3D geometric features, and once trained on hand-crafted 3D geometric as well as color features.

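As context for the sparse-convolution input above: generalized sparse convolutional networks operate on quantized integer voxel coordinates rather than a dense grid. A minimal sketch of the quantization step only, with hypothetical points and voxel size (the paper's network and features are not reproduced here):

```python
import numpy as np

# Hypothetical (x, y, z) points; a generalized sparse convolution consumes
# integer voxel coordinates plus per-voxel features (e.g., color).
points = np.array([[0.12, 0.03, 0.95],
                   [0.14, 0.02, 0.96],   # falls into the same voxel as above
                   [2.55, 1.12, 0.44]])
voxel_size = 0.1

# Quantize to voxel indices, then keep one coordinate per occupied voxel.
coords = np.floor(points / voxel_size).astype(int)
voxels = np.unique(coords, axis=0)
```

Here the first two points collapse into a single occupied voxel, leaving two voxels in total; per-voxel features would typically be averaged over the points that fall into each voxel.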
  • Geospatial Analytics in the Large for Monitoring Depth of Cover for Buried Pipeline Infrastructure

    IEEE International Conference on Cloud Engineering (IC2E)

    Operators of pipeline infrastructure buried underground are in many countries required to ensure that depth of cover—a measure of the quantity of soil covering a pipeline—lie within prescribed bounds. Traditionally, monitoring depth of cover at scale has been carried out qualitatively by means of visual inspection. We proceed instead to rely on airborne remote sensing techniques to obtain densely sampled ground surface point measurements from the pipeline's right of way, from which we determine depth of cover using automated algorithms. Proceeding in our manner presents a reproducible, quantitative approach to monitoring depth of cover, yet the demands thus made by the scale of real-world pipeline monitoring scenarios on compute and storage resources can be substantial. We show that the scalability afforded by the cloud can be leveraged to address such scenarios, distributing the algorithms we employ to take advantage of multiple compute nodes and exploiting elastic storage. While the use case underlying this paper is monitoring depth of cover, our proposed architecture can be applied more broadly to a wide variety of geospatial analytics tasks carried out 'in the large', including change detection, semantic classification or segmentation, or computation of vegetation indices.

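Once ground surface and pipeline elevations are in hand, depth of cover itself reduces to a per-station difference against a prescribed bound. A minimal sketch with entirely hypothetical elevations and threshold (the paper's cloud-distributed processing is not reproduced here):

```python
import numpy as np

# Hypothetical elevations (m): the ground surface sampled above each pipeline
# station, and the elevation of the top of the pipe at the same stations.
ground_z = np.array([412.3, 411.8, 411.5, 412.0])
pipe_top_z = np.array([410.9, 410.9, 410.9, 410.9])

min_cover = 1.0  # hypothetical prescribed lower bound on depth of cover (m)

cover = ground_z - pipe_top_z           # depth of cover per station
violations = np.flatnonzero(cover < min_cover)  # stations below the bound
```

For these made-up values, stations 1 and 2 fall below the one-metre bound and would be flagged for inspection.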
  • Taking-off with UAVs: Just Hype or a Future Key Technology in Pipeline Integrity Management

    11th Pipeline Technology Conference (PTC)

    Unmanned aerial vehicles (UAVs) have gained immense attention in recent years for commercial applications as well as among private users. In this work we analyze the momentum of UAVs in the context of pipeline inspection. We briefly present general trends and existing regulations with respect to UAVs deemed relevant in the context of oil and gas transmission pipelines. Next we give an overview of potential UAV-based applications in pipeline integrity management, including change detection, leak detection, and surface inspection tasks. We report on a technical solution developed at Siemens, including our field experiences and results obtained with pilot customers.

  • Highly Overparameterized Optical Flow Using PatchMatch Belief Propagation

    European Conference on Computer Vision (ECCV)

    Motion in the image plane is ultimately a function of 3D motion in space. We propose to compute optical flow using what is ostensibly an extreme overparameterization: depth, surface normal, and frame-to-frame 3D rigid body motion at every pixel, giving a total of 9 DoF. The advantages of such an overparameterization are twofold: first, geometrically meaningful reasoning can be called upon in the optimization, reflecting possible 3D motion in the underlying scene; second, the ‘fronto-parallel’ assumption implicit in the use of traditional matching pixel windows is ameliorated because the parameterization determines a plane-induced homography at every pixel. We show that optimization over this high-dimensional, continuous state space can be carried out using an adaptation of the recently introduced PatchMatch Belief Propagation (PMBP) energy minimization algorithm, and that the resulting flow fields compare favorably to the state of the art on a number of small- and large-displacement datasets.

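The plane-induced homography mentioned above has a standard closed form. A small illustration, assuming the convention that the plane satisfies n·X = d in the first camera's frame (n a unit normal, d the plane's distance) and that (R, t) maps first-camera to second-camera coordinates; the intrinsics and motion below are hypothetical, not taken from the paper:

```python
import numpy as np

def plane_induced_homography(K, R, t, n, d):
    """Homography induced by the plane {X : n.X = d} under X2 = R X1 + t:
    H = K (R + t n^T / d) K^{-1}, mapping pixels in view 1 to view 2."""
    return K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)

# Hypothetical intrinsics; the camera translates 1 unit toward a
# fronto-parallel plane at depth d = 5, so the image of the plane magnifies.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, -1.0])   # forward motion in camera coordinates
n = np.array([0.0, 0.0, 1.0])    # plane normal along the optical axis
H = plane_induced_homography(K, R, t, n, 5.0)
```

For this motion a pixel 100 px right of the principal point maps to 125 px right of it (depth shrinks from 5 to 4, magnifying offsets by 5/4), which is the per-pixel warp the 9 DoF parameterization induces.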
  • SphereFlow: 6 DoF Scene Flow from RGB-D Pairs

    IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

    We take a new approach to computing dense scene flow between a pair of consecutive RGB-D frames. We exploit the availability of depth data by seeking correspondences with respect to patches specified not as the pixels inside square windows, but as the 3D points that are the inliers of spheres in world space. Our primary contribution is to show that by reasoning in terms of such patches under 6 DoF rigid body motions in 3D, we succeed in obtaining compelling results at displacements large and small without relying on either of two simplifying assumptions that pervade much of the earlier literature: brightness constancy or local surface planarity. As a consequence of our approach, our output is a dense field of 3D rigid body motions, in contrast to the 3D translations that are the norm in scene flow. Reasoning in our manner additionally allows us to carry out occlusion handling using a 6 DoF consistency check for the flow computed in both directions and a patchwise silhouette check to help reason about alignments in occlusion areas, and to promote smoothness of the flow fields using an intuitive local rigidity prior. We carry out our optimization in two steps, obtaining a first correspondence field using an adaptation of PatchMatch, and subsequently using alpha-expansion to jointly handle occlusions and perform regularization. We show attractive flow results on challenging synthetic and real-world scenes that push the practical limits of the aforementioned assumptions.

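The notion of a sphere-inlier patch moved by a 6 DoF rigid body motion can be sketched in a few lines. This toy example (hypothetical points and motion, not the paper's optimization) extracts a spherical patch from a point cloud, applies a rigid motion, and reads off per-point 3D motion vectors:

```python
import numpy as np

def sphere_patch(points, center, radius):
    """Return the points lying inside the sphere of the given center/radius."""
    d = np.linalg.norm(points - center, axis=1)
    return points[d <= radius]

def apply_rigid(R, t, pts):
    """Apply the 6 DoF rigid body motion X -> R X + t to an (N, 3) array."""
    return pts @ R.T + t

# Hypothetical toy cloud: three points near the origin plus one far point
# that the sphere test excludes.
pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [2.0, 0.0, 0.0]])
patch = sphere_patch(pts, np.zeros(3), 0.5)

theta = np.pi / 2                 # 90-degree rotation about the z axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
moved = apply_rigid(R, t, patch)
flow = moved - patch              # per-point 3D motion vectors
```

The output is a rigid motion per patch rather than a single translation, which is the distinction the abstract draws against conventional scene flow.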
  • Depth Super Resolution by Rigid Body Self-Similarity in 3D

    IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

    We tackle the problem of jointly increasing the spatial resolution and apparent measurement accuracy of an input low-resolution, noisy, and perhaps heavily quantized depth map. In stark contrast to earlier work, we make no use of ancillary data like a color image at the target resolution, multiple aligned depth maps, or a database of high-resolution depth exemplars. Instead, we proceed by identifying and merging patch correspondences within the input depth map itself, exploiting patchwise scene self-similarity across depth such as repetition of geometric primitives or object symmetry. While the notion of 'single-image' super resolution has successfully been applied in the context of color and intensity images, we are to our knowledge the first to present a tailored analogue for depth images. Rather than reason in terms of patches of 2D pixels as others have before us, our key contribution is to proceed by reasoning in terms of patches of 3D points, with matched patch pairs related by a respective 6 DoF rigid body motion in 3D. In support of obtaining a dense correspondence field in reasonable time, we introduce a new 3D variant of PatchMatch. A third contribution is a simple, yet effective patch upscaling and merging technique, which predicts sharp object boundaries at the target resolution. We show that our results are highly competitive with those of alternative techniques leveraging even a color image at the target resolution or a database of high-resolution depth exemplars.

  • Potential for High Resolution Systematic Global Surface Soil Moisture Retrieval via Change Detection Using Sentinel-1

    IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing (JSTARS)

    The forthcoming two-satellite GMES Sentinel-1 constellation is expected to render systematic surface soil moisture retrieval at 1 km resolution using C-band SAR data possible for the first time from space. Owing to the constellation's foreseen coverage over the Sentinel-1 Land Masses acquisition region (global approximately every six days, nearly daily over Europe and Canada depending on latitude) in the high spatial and radiometric resolution Interferometric Wide Swath (IW) mode, the Sentinel-1 mission shows high potential for global monitoring of surface soil moisture by means of fully automatic retrieval techniques. This paper presents the potential for providing such a service systematically over Land Masses and in near real time using a change detection approach, concluding that such a service is, subject to the mission operating as foreseen, expected to be technically feasible. The work presented in this paper was carried out as a feasibility study within the framework of the ESA-funded GMES Sentinel-1 Soil Moisture Algorithm Development (S1-SMAD) project.

  • Prospects of Sentinel-1 for Land Applications

    IEEE International Geoscience and Remote Sensing Symposium (IGARSS)

    The Sentinel-1 mission is a polar-orbiting satellite constellation for the continuation of C-band Synthetic Aperture Radar (SAR) applications. Unlike its predecessor instruments onboard ENVISAT and RADARSAT, the Sentinel-1 satellites will be operated following a predefined and fixed baseline acquisition scenario. This will significantly facilitate the development of fully automatic processing chains for the generation of higher-level geophysical products and their uptake in applications. This paper gives an overview of the potential use of Sentinel-1 for land applications, discussing different land cover products (permanent water bodies, forest/non-forest, rice) and parameters of high relevance for hydrological monitoring (soil moisture, snow and freeze/thaw status, surface inundation).

  • Extracting Vanishing Points Across Multiple Views

    IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

    The realization that we see lines known to be parallel in space as lines that appear to converge in a corresponding vanishing point has led to techniques employed by artists since at least the Renaissance to render a credible impression of perspective. More recently, it has also led to techniques for recovering information embedded in images pertaining to the geometry of their underlying scene. In this paper, we explore the extraction of vanishing points with the aim of facilitating the reconstruction of Manhattan-world scenes. In departure from most vanishing point extraction methods, ours extracts a constellation of vanishing points corresponding, respectively, to the scene's two or three dominant pairwise-orthogonal orientations by integrating information across multiple views rather than from a single image alone. What makes a multiple-view approach attractive is that in addition to increasing robustness to segments that do not correspond to any of the three dominant orientations, robustness is also increased with respect to inaccuracies in the extracted segments themselves.

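The single-image core of vanishing point extraction is compact in homogeneous coordinates: the image line through two pixels is their cross product, two lines intersect in the cross product of the lines, and the images of scene-parallel lines meet at the vanishing point. A minimal single-image illustration with made-up segments (the paper's multi-view integration is not reproduced here):

```python
import numpy as np

def line_through(p, q):
    """Homogeneous image line through pixels p and q (their cross product)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersection(l1, l2):
    """Intersection of two homogeneous lines, dehomogenized to a pixel."""
    v = np.cross(l1, l2)
    return v[:2] / v[2]

# Hypothetical segments: the images of two lines that are parallel in the
# scene, drawn here so that both pass through the point (500, 300).
l1 = line_through((0, 0), (500, 300))
l2 = line_through((0, 100), (500, 300))
vp = intersection(l1, l2)  # -> approximately (500, 300)
```

In practice many noisy segments vote for each candidate, so the intersection is estimated by least squares rather than from a single line pair; integrating segments across views, as the paper does, further suppresses outlier segments.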

Patents

Honors & Awards

  • Ph.D. scholarship

    Microsoft Research Cambridge

Languages

  • English

    Native or bilingual

  • German

    Fluent

  • Slovak

    Fluent

  • French

    Good working knowledge

  • Polish

    Good working knowledge
