Image Processing for Space & Earth Observations
Multisource & Multispectral Image Processing via Bayesian Inference
MIV team, ICube

CNRS / Unistra

The SpaceFusion Project

The SpaceFusion project is funded by the French National Research Agency (ANR) under the "Jeunes Chercheurs 2005" programme (grant JC05_47500); duration: 40 months (Jan 2006 - Apr 2009); budget: 120 kEUR.

The goal is to perform multisource data fusion in astronomy and remote sensing. We use Bayesian inference to combine multiple images of the same area into a single, physically meaningful model. Errors are propagated from the sources to the estimated model.
  • Goals and Objectives
  • Achievements
  • Participants
  • Work Packages
  • Related Publications
  • Highlights – Recent Results
  • Documents
    Goals and Objectives

    Model-based multiband image data fusion via Bayesian inference:
    application to astronomy and 3D reconstruction in remote sensing

    Keywords: Data fusion, Uncertainty, Error map, Bayesian inference, Sampling theory, DSM generation, Stereo vision, Disparity map, Camera calibration, Super-resolution, Pan-sharpening

    The wealth of data in Earth and space imaging, along with the number of spectral bands, has been steadily increasing in recent years. Consequently, information redundancy and dimensionality have reached very high levels, which image processing usually takes into account only marginally, if at all. The multiplicity of the observations, together with their complexity and inhomogeneity, makes their interpretation particularly difficult. One of the first objectives of this project is therefore to extract the useful information from these data so as to enhance their representation, allowing for an easier and more accurate analysis.

    To address the redundancy problem, we propose to develop and apply new data fusion and reconstruction methods. The originality of the project lies in considering data fusion as the estimation of a single model of arbitrary spatial and spectral resolution, inferred from a number of inhomogeneous observations, possibly acquired by different sensors under various conditions. The aim is to reconstruct a geometric and radiometric object that best relates to the observations and integrates all the useful information contained in the initial data.

    In astronomical imaging, we will aim at a sharp, correctly sampled, noise-free and possibly super-resolved image. In the Virtual Observatory framework, for instance, one wishes to combine large numbers of multispectral images from various sources. In planetary imaging or remote sensing, both the terrain topography and the camera parameters must be taken into account to combine several images efficiently; the topography will therefore be explicitly included in the model, decoupled from the multispectral reflectance that relates to the texture and color of the terrain. The object provided by the fusion-reconstruction method will be a 3D surface, possibly super-resolved in both geometry and reflectance.

    We will start by defining a multidimensional generative model, enabling us to describe the image formation from a single multiband model that can either be a 2D image or a 3D surface. The estimation of the model parameters and related uncertainties will be performed through hierarchical Bayesian inference. This is one of the innovations this project will bring. This will enable us to integrate the physics of the studied objects by including all available a priori knowledge. It will also involve observation models describing the data acquisition process (image formation and degradation). This approach will remain open since it will allow for model updating, in order to include new data into the model as soon as they become available.
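    The hierarchical approach described above can be summarized, in generic notation (the symbols below are illustrative choices, not fixed by the project documents), as a forward model per observation plus a joint posterior:

```latex
% Each observation y_k is a degraded view of the common model x:
% H_k is the blur/resampling operator, n_k the noise with covariance \Sigma_k
y_k = H_k x + n_k, \qquad n_k \sim \mathcal{N}(0, \Sigma_k), \quad k = 1, \dots, K

% Hierarchical Bayesian inference: hyperparameters \theta encode the a priori
% knowledge; errors propagate through the posterior covariance
p(x, \theta \mid y_1, \dots, y_K) \;\propto\; p(\theta)\, p(x \mid \theta) \prod_{k=1}^{K} p(y_k \mid x, \theta)
```

Adding a new observation multiplies one more likelihood factor into the product, which is what makes the recursive model updating mentioned above possible.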

    Achievements

    Summary (abstract of the final report)

    In space and astronomical imaging, the volume, redundancy, complexity and inhomogeneity of the data make their interpretation increasingly difficult. One of the first goals of this project was therefore to help extract the useful information from these masses of data, in order to allow for a more convenient, and also more accurate, analysis. To address the redundancy problem, we developed and applied new data fusion and object reconstruction methods. The originality of the proposed methodology consists in considering data fusion as the estimation of a single model from multiple, inhomogeneous observations provided by several sensors and instruments, within the probabilistic framework of Bayesian inference. The aim is to reconstruct a single geometric and radiometric object that best corresponds to the observations, while integrating the useful information from the initial data set.

    In astronomical imaging, we managed to obtain images that are sharp, well-sampled, with a reduced noise level, and super-resolved in some cases. Within the Virtual Observatory framework, we intended to fuse large numbers of multispectral images from different sources and with different characteristics. Although this (ambitious) objective was not achieved, the research that was conducted enabled us to optimally fuse several synthetic images from the same sensor in a single spectral band, within a supervised framework. The investigations concerning full automation, the extension to multi- and hyperspectral imaging, and the processing of large volumes of data using a recursive approach have been initiated within the Fusion work package of the ANR-funded DAHLIA project, which builds on the results of the SpaceFusion project and whose goal is the fusion of hyperspectral data cubes. In the long run, this project should enable astronomers to perform an automated fusion of the data provided by the MUSE instrument, to be commissioned in 2012 on the Very Large Telescope, thus allowing for a significant increase in the instrument's sensitivity and accuracy.

    In Earth or planetary imaging, the objective was the generation of a 3D surface and a multispectral reflectance map, possibly super-resolved, from a set of multi-source and multi-date satellite images. This objective was redefined because it proved unrealistic, and we concentrated our efforts on the along-track stereo setting to avoid unnecessary complications. A new technique was developed that allows us to estimate the disparity map, or deformation, between two images automatically. Moreover, an original method for computing uncertainties (as the spatial distribution of errors and covariances) was developed, and a patent application is in progress. These uncertainties become elevation errors once the disparity is converted into a digital surface model, after the sensor orientation has been estimated. Error maps, though essential for a quantitative analysis of the topography, are currently not available, and the state of the art is almost nonexistent. Uncertainty validation is in progress, as well as the development of an automated sensor orientation technique, within the AutoProbaDSM project (funded by the FCT in Portugal). This project aims at perfecting the uncertainty computation, the optimal fusion of large data sets, and full automation; beyond the new methodology, it should help provide the first high-resolution probabilistic digital surface model of an entire country (continental Portugal in this case).

    Significant contributions

    • Automated, fast and robust algorithm for stereo disparity map estimation, using multigrid optimization, Loopy Belief Propagation, and efficient differential optimization.
    • Original uncertainty computation technique for stereo disparity maps based on Bayesian inference.
    • Efficient, approximate inversion of the precision matrix to compute the spatial distribution of variances and covariances.
    • Image fusion and super-resolution reconstruction technique for band-limited images using B-Splines, which also estimates uncertainties via Bayesian inference; applied in (but not limited to) astronomy.
    • Rigid camera calibration from ground control points, with error estimation, applied to aerial imaging.
    • New pan-sharpening algorithm that uses Bayesian inference, combining an image formation model with a spatially adaptive prior model, applied to remote sensing.
    • Basic research on various topics such as uncertainty computation and simplification, image formation modeling, resampling, radiometric change modeling, push-broom camera modeling and calibration.
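    As an illustration of the third contribution above, computing spatial variance maps without forming the full inverse of a sparse precision matrix, the sketch below uses Hutchinson probing with conjugate-gradient solves. This is a generic stand-in, not the project's (patent-pending) scheme; the matrix values and probe count are arbitrary.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

# Illustrative stand-in only: a generic stochastic estimator of diag(J^-1),
# not the project's approximate inversion scheme.
# J is a small 1-D Markovian precision matrix (tridiagonal); values arbitrary.
n = 50
J = diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()

def variance_probe(J, n_probes=200, seed=0):
    """Estimate diag(J^-1) via Hutchinson probing with CG solves."""
    rng = np.random.default_rng(seed)
    est = np.zeros(J.shape[0])
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=J.shape[0])  # Rademacher probe
        x, info = cg(J, z)       # solve J x = z without forming J^-1
        est += z * x             # E[z * (J^-1 z)] = diag(J^-1)
    return est / n_probes

var_approx = variance_probe(J)                    # approximate variance map
var_exact = np.diag(np.linalg.inv(J.toarray()))   # dense reference (small n)
print(np.max(np.abs(var_approx - var_exact)))
```

Each probe costs one sparse solve, so the variance map is obtained at a fraction of the cost of a dense inversion; covariances between neighbors can be probed the same way.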

    Project Participants

    Name                   | Position            | Employer         | Affiliation            | % Research time
    André Jalobeanu [P.I.] | Research Scientist  | CNRS             | CGE, Évora (Portugal)  | 90
    Christophe Collet      | Professor           | Univ. Strasbourg | LSIIT, Illkirch        | 40
    Fabien Salzenstein     | Assistant Professor | Univ. Strasbourg | LSIIT, Illkirch        | 40
    Mireille Louys         | Assistant Professor | Univ. Strasbourg | LSIIT-CDS, Strasbourg  | 40
    Françoise Nerry        | Research Scientist  | CNRS             | LSIIT, Illkirch        | 10
    Albert Bijaoui         | Astronomer          | CNAP             | OCA, Nice              | 10
    Eric Slezak            | Astronomer          | CNAP             | OCA, Nice              | 10

    SpaceFusion Work Packages

    PASEO Projects

    BayesCameraCalibration (Jun 2007 - Dec 2008)
    Absolute push-broom camera calibration with uncertainties
    Keywords: DEM matching, affine camera calibration, Bayesian inference

    DispMapInference (Jun 2006 - Dec 2008)
    Robust dense disparity map estimation with uncertainties
    Keywords: dense disparity map, deformation field, warping, B-Spline interpolation, radiometric changes, spatially adaptive

    DeepSkyFusion (Dec 2005 - Dec 2008)
    Multisource data fusion and 2D super-resolution
    Keywords: data fusion, spatial and spectral super-resolution, model-based, recursive update, Bayesian inference
    Group participants: C. Collet, F. Salzenstein, M. Louys

    3DSpaceFusion (Dec 2005 - Nov 2008)
    Multisource data fusion, 3D surface recovery and super-resolution
    Keywords: 3D surface recovery, reflectance estimation, data fusion, super-resolution, model-based, recursive update, Bayesian inference
    Group participant: C. Collet

    ReflectanceFusion (Dec 2005 - Nov 2008)
    Multisource, multispectral data fusion and super-resolution
    Keywords: Data fusion, super-resolution, pan-sharpening, model-based, Bayesian inference
    Group participant: C. Collet

    3DShapeInference (Jan 2004 - Sep 2006)
    3D shape recovery via Bayesian inference
    Keywords: DEM reconstruction, surface recovery, Bayesian inference, marginalization, rendering

    Project-related Publications


    Journal papers and Book Chapters – peer-reviewed

    • C. Collet, F. Flitti, S. Bricq, A. Jalobeanu: “Fusion and Multi-Modality” - in Inverse Problems in Vision and 3D Tomography (ISTE/Wiley), A. Mohammad-Djafari ed., John Wiley and Sons, Dec 2009
      @inbook{ref86,
        title = {Inverse Problems in Vision and 3D Tomography},
        chapter = {Fusion and Multi-Modality},
        author = {C. Collet and F. Flitti and S. Bricq and A. Jalobeanu},
        editor = {A. Mohammad-Djafari},
        publisher = {John Wiley and Sons},
        url = {http://www.iste.co.uk/index.php?f=x&ACTION=View&id=321},
        month = {Dec},
        year = {2009}
      }
    • M.V. Joshi, A. Jalobeanu: “MAP estimation for Multiresolution Fusion in Remotely Sensed Images using an IGMRF Prior Model” - IEEE Trans. on Geoscience and Remote Sensing (TGRS), 48(3), Jul 2009
      In this paper we propose a model-based approach for the multiresolution fusion of satellite images. Given a high spatial resolution panchromatic (Pan) image and a low spatial, high spectral resolution multispectral (MS) image acquired over the same geographical area, the problem is to generate a high spatial and high spectral resolution multispectral image. This is clearly an ill-posed problem and hence we need a proper regularization. We model each of the low spatial resolution MS images as the aliased and noisy version of its corresponding high spatial resolution, i.e., fused (to be estimated), MS image. A proper aliasing matrix is assumed to take care of the undersampling process. The high spatial resolution MS images to be estimated are then modeled as separate Inhomogeneous Gaussian Markov Random Fields (IGMRF) and a Maximum A Posteriori (MAP) estimation is used to obtain the fused image for each of the MS bands. The IGMRF parameters are estimated from the available high resolution Pan image and are used in the prior model for regularization purposes. Since the method does not directly operate on the Pan pixel values as most of the other methods do, the spectral distortion is minimal and the spatial properties are better preserved in the fused image, as the IGMRF parameters are learned at every pixel. We demonstrate the effectiveness of our approach over some existing methods by conducting experiments on synthetic data as well as on images captured by the Quickbird satellite.
      @article{ref75,
        title = {MAP estimation for Multiresolution Fusion in Remotely Sensed Images using an IGMRF Prior Model},
        journal = {IEEE Trans. on Geoscience and Remote Sensing},
        author = {M.V. Joshi and A. Jalobeanu},
        volume = {48},
        number = {3},
        url = {http://dx.doi.org/10.1109/TGRS.2009.2030323},
        month = {Jul},
        year = {2009}
      }
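      The MAP-with-prior formulation of the paper above can be illustrated with a much-simplified 1-D analogue: a homogeneous smoothness prior stands in for the IGMRF (whose weights are learned from the Pan image), and the aliasing matrix is reduced to plain subsampling. All values below (sizes, noise level, lam) are arbitrary choices for illustration.

```python
import numpy as np

# Much-simplified 1-D analogue of MAP fusion: a homogeneous smoothness prior
# replaces the IGMRF, plain subsampling replaces the aliasing matrix.
rng = np.random.default_rng(1)
n = 64
x_true = np.cumsum(rng.normal(size=n)) / 3.0       # smooth "high-res" signal
y = x_true[::2] + 0.05 * rng.normal(size=n // 2)   # decimated, noisy observation

D = np.zeros((n // 2, n))
D[np.arange(n // 2), 2 * np.arange(n // 2)] = 1.0  # decimation operator
G = np.eye(n - 1, n, 1) - np.eye(n - 1, n)         # first-difference (prior)
lam = 1.0                                          # regularization weight

# MAP estimate: argmin ||D x - y||^2 + lam * ||G x||^2  (normal equations)
x_map = np.linalg.solve(D.T @ D + lam * G.T @ G, D.T @ y)
print(np.mean(np.abs(x_map - x_true)))
```

The inhomogeneous version replaces the constant lam by per-pixel weights, which is what lets the method preserve edges while regularizing smooth areas.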
    • A. Jalobeanu, J.A. Gutiérrez, E. Slezak: “Multisource data fusion and super-resolution from astronomical images” - Statistical Methodology (STAMET), Special issue on Astrostatistics, 5(4), Jul 2008
      Virtual Observatories give us access to huge amounts of image data that are often redundant. Our goal is to take advantage of this redundancy by combining images of the same field of view into a single object. To achieve this goal, we propose to develop a multi-source data fusion method that relies on probability and band-limited signal theory. The target object is an image to be inferred from a number of blurred and noisy sources, possibly from different sensors under various conditions (i.e. resolution, shift, orientation, blur, noise...). We aim at the recovery of a compound model "image+uncertainties" that best relates to the observations and contains a maximum of useful information from the initial data set. Thus, in some cases, spatial super-resolution may be required in order to preserve the information. We propose to use a Bayesian inference scheme to invert a forward model, which describes the image formation process for each observation, and takes into account some a priori knowledge (e.g. stars as point sources). This involves both automatic registration and resampling, which are ill-posed inverse problems that are addressed within a rigorous Bayesian framework. The originality of the work is in devising a new technique of multi-image data fusion that provides us with super-resolution, self-calibration and possibly model selection capabilities. This approach should outperform existing methods such as resample-and-add or drizzling since it can handle different instrument characteristics for each input image and compute uncertainty estimates as well. Moreover, it is designed to also work in a recursive way, so that the model can be updated when new data becomes available.
      @article{ref69,
        title = {Multisource data fusion and super-resolution from astronomical images},
        journal = {Statistical Methodology},
        author = {A. Jalobeanu and J.A. Gutiérrez and E. Slezak},
        volume = {5},
        number = {4},
        series = {Special issue on Astrostatistics},
        url = {http://dx.doi.org/10.1016/j.stamet.2008.02.002},
        month = {Jul},
        year = {2008}
      }

    Conference papers – peer-reviewed

    • A. Jalobeanu: “Predicting spatial uncertainties in stereo photogrammetry: achievements and intrinsic limitations” - 7th International Symposium on Spatial Data Quality (ISSDQ 2011), Coimbra, Portugal, Oct 2011
      We present a new probabilistic method for digital surface model generation from optical stereo pairs, with an expected ability to propagate errors from the data to the final result, providing spatial uncertainty estimates to be used for quantitative analysis in planetary or Earth sciences. Existing stereo-derived surfaces lack rigorous, quantitative error estimates, and we propose to address this issue by deriving a method of error prediction, rather than error assessment as usually done in the area through the use of reference data. We use only the information present in the available data and perform the prediction using Bayesian inference. We start by defining a forward model, using an adaptive radiometric change map to achieve robustness to noise and reflectance effects. A priori smoothness constraints are introduced to stabilize the solution. Solving the inverse problem to recover a surface from noisy data involves fast deterministic optimization techniques. Though the reconstruction results look satisfactory, we conclude that uncertainty estimates computed from two images only are unreliable, which is due to major limitations of stereo, such as non-Lambertian reflectance and incorrect spatial sampling, which violate our underlying assumptions and cause biases that cannot be accounted for in the predicted error budget.
      @inproceedings{ref102,
        title = {Predicting spatial uncertainties in stereo photogrammetry: achievements and intrinsic limitations},
        author = {A. Jalobeanu},
        booktitle = {7th International Symposium on Spatial Data Quality},
        url = {http://www.mat.uc.pt/issdq2011/},
        address = {Coimbra, Portugal},
        month = {Oct},
        year = {2011}
      }
    • A. Jalobeanu, J.A. Gonçalves: “Probabilistic surface change detection and measurement from digital aerial stereo images” - IEEE International Geoscience & Remote Sensing Symposium (IGARSS'10), Honolulu, Hawaii, USA, Jul 2010
      We propose a new method to measure changes in terrain topography from two optical stereo image pairs acquired at different dates. The main novelty is in the ability of computing the spatial distribution of uncertainty, thanks to stochastic modeling and probabilistic inference. Thus, scientists will have access to quantitative error estimates of local surface variation, so they can check the statistical significance of elevation changes, and make, where changes have occurred, consistent measurements of volume or shape evolution. The main application area is geomorphology, as the method can help study phenomena such as coastal cliff erosion, sand dune displacement and various transport mechanisms through the computation of volume changes. It can also help measure vegetation growth, and virtually any kind of evolution of the surface.
      We first start by inferring a dense disparity map from two images, assuming a known viewing geometry. The images are accurately rectified in order to constrain the deformation on one of the axes, so we only have to infer a one-dimensional parameter field. The probabilistic approach provides a rigorous framework for parameter estimation and error computation, so all the disparities are described as random variables. We define a generative model for both images given all model variables. It mainly consists of warping the scene using B-Splines, and defining a spatially adaptive stochastic model of the radiometric differences between the two views. The inversion, which is an ill-posed inverse problem, requires regularization, achieved through a smoothness prior model.
      Bayesian inference allows us to recover disparities as probability distributions. This is done on each stereo pair, then disparity maps are transformed into surface models in a common ground frame in order to perform the comparison. We apply this technique to high resolution digital aerial images of the Portuguese coast to detect cliff erosion and quantify the effects of weathering.
      @inproceedings{ref88,
        title = {Probabilistic surface change detection and measurement from digital aerial stereo images},
        author = {A. Jalobeanu and J.A. Gonçalves},
        booktitle = {IEEE International Geoscience & Remote Sensing Symposium },
        url = {http://www.igarss10.org/},
        address = {Honolulu, Hawaii, USA},
        month = {Jul},
        year = {2010}
      }
    • A. Jalobeanu: “Predictive Spatial Accuracy of Digital Elevation Models Generated from Stereo Pairs” - Ninth International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences (Accuracy'10), Leicester, UK, Jul 2010
      A new method for reconstructing digital elevation models (DEM) from optical stereo pairs is proposed. The main originality is the ability to propagate errors from the observed data to the final result, providing all the spatial accuracy estimates required for the use of topography in planetary or Earth science applications. In general, stereo-derived DEMs lack quantitative error estimates. This can be a major issue when the result is used to derive physical measurements in areas such as hydrology or geomorphology. We aim at performing error prediction, rather than error assessment as usually done in the community through the use of reference data sets. Indeed, the goal is to use only the information present in the available data to predict the errors, since we believe it is the only way to build relevant spatially adaptive accuracy maps. Existing techniques usually provide only a global accuracy measure after a validation procedure, or at best propose to predict the local behavior of accuracy from morphological indicators, failing to capture the dependence upon the image content. We think that predictive accuracy computation shall replace the error assessment step, thus allowing for a fully automated DEM generation with relevant error maps. A Bayesian approach is used, which provides a rigorous way of estimating uncertainties and various parameters. We start by defining a forward model, consisting of warping the observed scene through a disparity map and assuming a spatially adaptive radiometric change map to achieve robustness to noise and reflectance effects. An a priori smoothness prior model is introduced in order to stabilize the solution. Solving the inverse problem to recover the disparity map from noisy measurements requires optimizing an energy function. We employ fast deterministic techniques to recover the a posteriori probability density function (pdf) of the disparity map. Finally, the disparities are converted into a DEM through a geometric camera model. Combining disparity and camera calibration errors allows for a comprehensive error propagation from the input to the final DEM.
      @inproceedings{ref95,
        title = {Predictive Spatial Accuracy of Digital Elevation Models Generated from Stereo Pairs},
        author = {A. Jalobeanu},
        booktitle = {Ninth International Symposium on Spatial Accuracy Assessment in Natural Resources and Environmental Sciences },
        url = {http://www.le.ac.uk/geography/accuracy/},
        address = {Leicester, UK},
        month = {Jul},
        year = {2010}
      }
    • A. Jalobeanu: “Spatial Accuracy Assessment of Digital Elevation Models: A Probabilistic Approach” - American Society for Photogrammetry and Remote Sensing annual conference (ASPRS'09), Baltimore, MD, USA, Mar 2009
      We propose a new method for the measurement of high resolution topography from an optical stereo pair. The main contribution is the ability to propagate errors from the imperfect observed data to the final result, providing all accuracy estimates required for the use of topography in planetary or Earth science applications. Indeed, digital elevation models (DEM) computed from images using state of the art methods usually lack quantitative error estimates. This can be a major issue when the result is used to measure actual physical parameters, such as slope or terrain roughness.
      Thus, we propose a new algorithm to infer a dense bidimensional disparity map from two images, which also estimates the spatial distribution of errors. We use a probabilistic approach, which provides a rigorous way of estimating parameters and uncertainties. All the parameters are defined as random variables within a Bayesian framework. We start by building a forward model, which consists of warping the observed scene using B-Splines and using a spatially adaptive radiometric change map for robustness purposes. An a priori smoothness model is introduced in order to stabilize the solution. Solving the inverse problem to recover the disparity map requires optimizing a global non-convex energy function, which is a difficult task. A deterministic optimization based on a multi-grid strategy, followed by a local energy analysis at the optimum, allows us to recover the a posteriori probability density function (pdf) of the disparity, which encodes both the optimal solution and the related error map.
      Finally, the disparity field is converted into a DEM through a geometric camera model. This model is either known initially, or calibrated using the estimated disparity map and extra data (existing low-resolution DEM or ground control points). Automatic calibration from uncertain disparity and topographic data allows for a comprehensive error propagation from the input data to the final elevation model.
      @inproceedings{ref84,
        title = {Spatial Accuracy Assessment of Digital Elevation Models: A Probabilistic Approach},
        author = {A. Jalobeanu},
        booktitle = {American Society for Photogrammetry and Remote Sensing annual conference},
        url = {http://www.asprs.org/baltimore09/},
        address = {Baltimore, MD, USA},
        month = {Mar},
        year = {2009}
      }
    • M.V. Joshi, A. Jalobeanu: “A MAP estimation for Multiresolution Fusion in Remotely Sensed Images using an IGMRF Prior Model” - IEEE International Geoscience & Remote Sensing Symposium (IGARSS'08), Boston MA, USA, Jul 2008
      In this paper we propose a model based approach for multi-resolution fusion of satellite images. Given the high spatial resolution panchromatic (Pan) image and a low spatial and high spectral resolution multi-spectral (MS) image acquired over the same geographical area, the problem is to generate a high spatial and high spectral resolution multi-spectral image. This is clearly an ill-posed problem, which requires a proper regularization. We model each of the low spatial resolution MS images as the aliased and noisy versions of their corresponding high spatial resolution images. A decimation (aliasing) matrix is estimated for each of the MS bands by using the available Pan and the MS image. The high spatial resolution MS images to be estimated are then modeled as separate Inhomogeneous Gaussian Markov Random Fields (IGMRFs) and the Maximum A Posteriori (MAP) estimation is used to obtain the fused images. The required IGMRF parameters representing the spatial correlation among high resolution MS pixels are estimated from the available high resolution Pan image and are used in the prior model during the regularization. Since the method does not directly operate on the Pan pixel values as most of the other methods do, the spectral distortion is minimum and the spatial properties are better preserved in the fused image as the IGMRF parameters are learnt at every pixel. We demonstrate the effectiveness of our approach by conducting experiments on synthetic data as well as on real images captured by the Quickbird satellite.
      @inproceedings{ref81,
        title = {A MAP estimation for Multiresolution Fusion in Remotely Sensed Images using an IGMRF Prior Model},
        author = {M.V. Joshi and A. Jalobeanu},
        booktitle = {IEEE International Geoscience & Remote Sensing Symposium },
        url = {http://www.igarss08.org/},
        address = {Boston MA, USA},
        month = {Jul},
        year = {2008}
      }
    • A. Jalobeanu, D.D. Fitzenz: “Inferring deformation fields from multidate satellite images” - IEEE International Geoscience & Remote Sensing Symposium (IGARSS'08), Boston MA, USA, Jul 2008
      We focus on a geophysical application of image processing: the measurement of high resolution ground deformation from two optical satellite images taken at different dates. Disparity maps estimated from image pairs usually lack quantitative error estimates. This is a major issue for measuring physical parameters, such as ground deformation or topography variations. Thus, we propose a new method to infer the disparity map. We adopt a probabilistic approach, treating all parameters as random variables, which provides a rigorous framework for parameter estimation and uncertainty evaluation. We start by defining a generative model of the data given all model variables. This forward model consists of warping the scene using B-Splines and applying a spatially adaptive radiometric change map. Then we use Bayesian inference to invert and recover the a posteriori probability density function (pdf) of the disparity map. The method is validated on multidate SPOT 5 imagery related to the Bam earthquake (Iran), showing results compatible with INSAR measurements.
      @inproceedings{ref80,
        title = {Inferring deformation fields from multidate satellite images},
        author = {A. Jalobeanu and D.D. Fitzenz},
        booktitle = {IEEE International Geoscience & Remote Sensing Symposium },
        url = {http://www.igarss08.org/},
        address = {Boston MA, USA},
        month = {Jul},
        year = {2008}
      }
    • A. Jalobeanu, D.D. Fitzenz: “Robust disparity maps with uncertainties for 3D surface reconstruction or ground motion inference” - ISPRS Proc. of Photogrammetric Image Analysis (PIA'07), Munich, Germany, Sep 2007
      Disparity maps estimated using computer vision-derived algorithms usually lack quantitative error estimates. This can be a major issue when the result is used to measure reliable physical parameters, such as topography for instance. Thus, we developed a new method to infer the dense disparity map from two images. We use a probabilistic approach in order to compute uncertainties as well. Within this framework, parameters are described in terms of random variables. We start by defining a generative model for both raw observed images given all model variables, including disparities. The forward model mainly consists of warping the scene using B-Splines and adding a radiometric change map. Then we use Bayesian inference to invert and recover the a posteriori probability density function (pdf) of the disparity map.
      The main contributions are: The design of an efficient fractal model to take into account radiometric changes between images; A multigrid processing so as to speed up the optimization process; The use of raw data instead of orthorectified imagery; Efficient approximation schemes to integrate out unwanted parameters and compute uncertainties on the result. Three applications could benefit from this disparity inference method: DEM generation from a stereo pair (along or across track), automatic calibration of pushbroom cameras, and ground deformation estimation from two images at different dates.
      @inproceedings{ref71,
        title = {Robust disparity maps with uncertainties for 3D surface reconstruction or ground motion inference},
        author = {A. Jalobeanu and D.D. Fitzenz},
        booktitle = {ISPRS Proc. of Photogrammetric Image Analysis},
        url = {http://www.ipk.bv.tum.de/isprs/pia07/},
        address = {Munich, Germany},
        month = {Sep},
        year = {2007}
      }
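      A heavily reduced illustration of the idea behind the disparity papers above: estimate a (here constant, 1-D) disparity by minimizing an SSD energy, then derive an uncertainty from the curvature of the energy at its minimum, in the spirit of the local energy analysis at the optimum. The dense spatially varying map, B-Spline warping and radiometric change model of the actual method are omitted, and all values below are made up.

```python
import numpy as np

# Toy 1-D sketch: constant disparity estimated by minimizing an SSD energy,
# uncertainty read off the curvature at the minimum (Laplace approximation).
rng = np.random.default_rng(2)
idx = np.arange(400)
x = np.linspace(0, 8 * np.pi, 400)
left = np.sin(x) + 0.3 * np.sin(2.3 * x)           # reference signal
true_disp, noise = 3.0, 0.05
right = np.interp(idx + true_disp, idx, left) + noise * rng.normal(size=400)

def energy(d):
    warped = np.interp(idx + d, idx, left)         # warp by candidate disparity
    return np.sum((warped[50:350] - right[50:350]) ** 2)  # ignore borders

step = 0.1
ds = np.arange(0.0, 6.0, step)
E = np.array([energy(d) for d in ds])
k = int(np.argmin(E))
a, b, c = E[k - 1], E[k], E[k + 1]
d_hat = ds[k] + 0.5 * step * (a - c) / (a - 2 * b + c)  # parabolic refinement
curv = (a - 2 * b + c) / step ** 2                 # d^2E/dd^2 at the minimum
sigma = np.sqrt(2 * noise ** 2 / curv)             # Laplace-approximation std
print(d_hat, sigma)
```

In the papers this curvature analysis is carried out per pixel on the full energy, which yields a spatial error map rather than a single standard deviation.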
    • A. Jalobeanu, J.A. Gutiérrez: “Inverse covariance simplification for efficient uncertainty management” - Proc. of 26th workshop on Bayesian Inference and Maximum Entropy methods (MaxEnt'07), Saratoga Springs, NY, USA, Jul 2007
      When it comes to manipulating uncertain knowledge such as noisy observations of physical quantities, one may ask how to do it in a simple way. Processing corrupted signals or images always propagates the uncertainties from the data to the final results, whether these errors are explicitly computed or not. When such error estimates are provided, it is crucial to handle them in such a way that their interpretation, or their use in subsequent processing steps, remains user-friendly and computationally tractable. A few authors follow a Bayesian approach and provide uncertainties as an inverse covariance matrix. Despite its apparent sparsity, this matrix contains many small terms that carry little information. Methods have been developed to select the most significant entries, through the use of information-theoretic tools for instance. One has to find a Gaussian pdf that is close enough to the posterior pdf, and with a small number of non-zero coefficients in the inverse covariance matrix. We propose to restrict the search space to Markovian models (where only neighbors can interact), well-suited to signals or images. The originality of our approach is in conserving the covariances between neighbors while setting to zero the entries of the inverse covariance matrix for all other variables. This fully constrains the solution, and the computation is performed via a fast, alternate minimization scheme involving quadratic forms. The Markovian structure advantageously reduces the complexity of Bayesian updating (where the simplified pdf is used as a prior). Moreover, uncertainties exhibit the same temporal or spatial structure as the data.
      @inproceedings{ref70,
        title = {Inverse covariance simplification for efficient uncertainty management},
        author = {A. Jalobeanu and J.A. Gutiérrez},
        booktitle = {Proc. of 26th workshop on Bayesian Inference and Maximum Entropy methods},
        url = {http://www.maxent2007.org/},
        address = {Saratoga Springs, NY, USA},
        month = {Jul},
        year = {2007}
      }
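The neighbor-covariance-preserving simplification described above can be illustrated for the simplest Markov structure, a chain, where the constrained tridiagonal precision matrix has a closed form (the junction-tree formula). The sketch below is illustrative only, not the paper's implementation:

```python
import numpy as np

def chain_precision(Sigma):
    """Tridiagonal precision matrix of the Gaussian Markov chain whose
    pairwise neighbor marginals match those of the full covariance Sigma."""
    n = Sigma.shape[0]
    J = np.zeros((n, n))
    for i in range(n - 1):        # add the inverse of each 2x2 edge marginal
        J[i:i+2, i:i+2] += np.linalg.inv(Sigma[i:i+2, i:i+2])
    for i in range(1, n - 1):     # subtract the inverse of each separator marginal
        J[i, i] -= 1.0 / Sigma[i, i]
    return J

# AR(1) covariance: the process is exactly Markovian, so nothing is lost here
rho, n = 0.8, 6
Sigma = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
J = chain_precision(Sigma)
```

For a truly Markovian process such as AR(1), inverting `J` recovers `Sigma` exactly; for a general covariance, only the variances and neighbor covariances are preserved, which is precisely the trade-off the paper advocates.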
    • A. Jalobeanu, E. Slezak, J.A. Gutiérrez: “Multisource data fusion and super-resolution from astronomical images” - Astronomical Data Analysis IV (ADA IV), Marseille, France, Sep 2006
      Virtual Observatories give us access to huge amounts of image data that are often redundant. Our goal is to take advantage of this redundancy by combining images of the same field of view into a single object. To achieve this goal, we propose to develop a multi-source data fusion method that relies on probability and band-limited signal theory. The target object is an image to be inferred from a number of blurred and noisy sources, possibly from different sensors under various conditions (i.e. resolution, shift, orientation, blur, noise...). We aim at the recovery of a compound model "image+uncertainties" that best relates to the observations and contains a maximum of useful information from the initial data set. Thus, in some cases, spatial super-resolution may be required in order to preserve the information. We propose to use a Bayesian inference scheme to invert a forward model, which describes the image formation process for each observation, and takes into account some a priori knowledge (e.g. stars as point sources). This involves both automatic registration and resampling, which are ill-posed inverse problems that are addressed within a rigorous Bayesian framework. The originality of the work is in devising a new technique of multi-image data fusion that provides us with super-resolution, self-calibration and possibly model selection capabilities. This approach should outperform existing methods such as resample-and-add or drizzling since it can handle different instrument characteristics for each input image and compute uncertainty estimates as well. Moreover, it is designed to also work in a recursive way, so that the model can be updated when new data becomes available.
      @inproceedings{ref60,
        title = {Multisource data fusion and super-resolution from astronomical images},
        author = {A. Jalobeanu and E. Slezak and J.A. Gutiérrez},
        booktitle = {Astronomical Data Analysis IV},
        url = {http://www.oamp.fr/conf/ada4/},
        address = {Marseille, France},
        month = {Sep},
        year = {2006}
      }
    • A. Jalobeanu, J.A. Gutiérrez: “Multisource data fusion for bandlimited signals: a Bayesian perspective” - Proc. of 25th workshop on Bayesian Inference and Maximum Entropy methods (MaxEnt'06), Paris, France, Aug 2006
      We consider data fusion as the reconstruction of a single model from multiple data sources. The model is to be inferred from a number of blurred and noisy observations, possibly from different sensors under various conditions. It is all about recovering a compound object, signal+uncertainties, that best relates to the observations and contains all the useful information from the initial data set. 
      We wish to provide a flexible framework for bandlimited signal reconstruction from multiple data. In this paper, we focus on a general approach involving forward modeling (prior model, data acquisition) and Bayesian inference. The proposed method is valid for n-D objects (signals, images or volumes) with multidimensional spatial elements. For the sake of clarity, both formalism and test results will be shown in 1D for single band signals. The main originality lies in seeking an object with a prescribed bandwidth, hence our choice of a B-Spline representation. This ensures an optimal sampling in both signal and frequency spaces, and allows for a shift invariant processing.
      The model resolution, the geometric distortions, the blur and the regularity of the sampling grid can be arbitrary for each sensor. The method is designed to handle realistic Gauss+Poisson noise.
      We obtained promising results in reconstructing a super-resolved signal from two blurred and noisy shifted observations, using a Gaussian Markov chain as a prior. Practical applications are under development within the SpaceFusion project. For instance, in astronomical imaging, we aim at a sharp, well-sampled, noise-free and possibly super-resolved image. Virtual Observatories could benefit from such a way to combine large numbers of multispectral images from various sources. In planetary imaging or remote sensing, a 3D image formation model is needed; nevertheless, this can be addressed within the same framework.
      @inproceedings{ref58,
        title = {Multisource data fusion for bandlimited signals: a Bayesian perspective},
        author = {A. Jalobeanu and J.A. Gutiérrez},
        booktitle = {Proc. of 25th workshop on Bayesian Inference and Maximum Entropy methods},
        url = {http://djafari.free.fr/maxent2006/},
        address = {Paris, France},
        month = {Aug},
        year = {2006}
      }
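As a toy illustration of the fusion principle (not the paper's B-spline formulation), the MAP estimate under a linear forward model y_k = H_k x + n_k with Gaussian noise and a quadratic smoothness prior reduces to a set of normal equations, and the posterior covariance provides the error estimates:

```python
import numpy as np

def map_fusion(observations, operators, noise_var, lam=1e-3):
    """MAP fusion of degraded views y_k = H_k x + n_k (Gaussian noise) under
    a first-difference smoothness prior; returns posterior mean and variances."""
    n = operators[0].shape[1]
    D = np.diff(np.eye(n), axis=0)        # first-difference smoothness operator
    A = lam * D.T @ D
    b = np.zeros(n)
    for y, H, v in zip(observations, operators, noise_var):
        A += H.T @ H / v
        b += H.T @ y / v
    cov = np.linalg.inv(A)                # posterior covariance (small n only)
    return cov @ b, np.diag(cov)          # mean and per-sample error variances

# two half-resolution, interleaved views of the same signal
n = 32
x = np.sin(np.linspace(0, 2 * np.pi, n))
H1, H2 = np.eye(n)[0::2], np.eye(n)[1::2]
mean, var = map_fusion([H1 @ x, H2 @ x], [H1, H2], [1e-4, 1e-4])
```

Each observation contributes to the normal equations in proportion to its own noise level, which is how the framework naturally handles inhomogeneous sensors.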
    • A. Jalobeanu: “Multisource data fusion and super-resolution from astronomical images” - Statistical Challenges in Modern Astronomy IV (SCMA'IV), Penn State, PA, USA, Jun 2006
      The goal is to combine multiple astronomical images of the same field of view into a single model, within the Virtual Observatory framework where the huge amounts of data often exhibit some redundancy. To achieve this goal, we propose to develop a multi-source data fusion method using probability theory. We want to infer an image from several blurred and noisy observations, possibly from different sensors and instruments under various conditions. We aim at the recovery of a compound object "image+uncertainties" that contains a maximum of useful information from the initial data set. In some cases, conserving information may require achieving super-resolution.
      We propose to use a Bayesian inference scheme to invert a generative model that explains the image formation for each observation while taking into account a priori knowledge. Understanding the image formation process is crucial.
      The originality of the work is in devising a new technique of multi-image data fusion that also addresses spatial super-resolution and recursive model updating. This involves both automatic registration and resampling, which are difficult inverse problems that are treated within a probabilistic framework. Our contribution outperforms state of the art methods in astronomy since it can handle different instrument characteristics for each input and provide uncertainty estimates as well.
      @inproceedings{ref59,
        title = {Multisource data fusion and super-resolution from astronomical images},
        author = {A. Jalobeanu},
        booktitle = {Statistical Challenges in Modern Astronomy IV},
        url = {http://astrostatistics.psu.edu/scma4/},
        address = {Penn State, PA, USA},
        month = {Jun},
        year = {2006}
      }
    • A. Jalobeanu: “Bayesian Vision for Shape Recovery” - Proc. of 24th workshop on Bayesian Inference and Maximum Entropy methods (MaxEnt'04), Garching-Munich, Germany, Jul 2004
We present a new Bayesian vision technique that aims at recovering a shape from two or more noisy observations taken under similar lighting conditions. The shape is parametrized by a piecewise linear height field, textured by a piecewise linear irradiance field, and we assume Gaussian Markovian priors for both shape vertices and irradiance variables. The observation process, equivalent to rendering, is modeled by a non-affine projection (e.g. perspective projection) followed by a convolution with a piecewise linear point spread function, and contamination by additive Gaussian noise. We assume that the observation parameters are calibrated beforehand.
      The major novelty of the proposed method consists of marginalizing out the irradiances considered as nuisance parameters, which is achieved by a hierarchy of approximations. This reduces the inference to minimizing an energy that only depends on the shape vertices, and therefore allows an efficient Iterated Conditional Mode (ICM) optimization scheme to be implemented. A Gaussian approximation of the posterior shape density is computed, thus providing estimates of both the geometry and its uncertainty. We illustrate the effectiveness of the new method by shape reconstruction results in a 2D case. A 3D version is currently under development and aims at recovering a surface from multiple images, reconstructing the topography by marginalizing out both albedo and shading.
      @inproceedings{ref2,
        title = {Bayesian Vision for Shape Recovery},
        author = {A. Jalobeanu},
        booktitle = {Proc. of 24th workshop on Bayesian Inference and Maximum Entropy methods},
        url = {http://www.etjaynescenter.org/maxent/2004/},
        address = {Garching-Munich, Germany},
        month = {Jul},
        year = {2004}
      }
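The ICM scheme mentioned in the abstract can be illustrated on a toy 1-D problem (a hypothetical quadratic energy, not the paper's shape model): each site is updated in turn to the conditional minimizer of the energy given its neighbors, which for a convex quadratic energy converges to the global optimum:

```python
import numpy as np

def icm_chain(d, lam, sweeps=500):
    """Iterated Conditional Modes on a 1-D chain, minimizing
    sum_i (h_i - d_i)^2 + lam * sum_i (h_{i+1} - h_i)^2 site by site."""
    h = d.astype(float).copy()
    for _ in range(sweeps):
        for i in range(len(h)):
            num, den = d[i], 1.0
            if i > 0:
                num += lam * h[i - 1]; den += lam
            if i < len(h) - 1:
                num += lam * h[i + 1]; den += lam
            h[i] = num / den          # conditional mode of site i
    return h

rng = np.random.default_rng(0)
d = np.sin(np.linspace(0, np.pi, 20)) + 0.1 * rng.standard_normal(20)
h = icm_chain(d, lam=2.0)
```

This coordinate-wise update is exactly Gauss-Seidel on the normal equations, which is why ICM is fast on quadratic energies; the paper's marginalization step is what makes the shape energy approximately quadratic in the first place.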

    Abstracts, Posters, Preprints, Reports and Theses

    • A. Jalobeanu, H. Almeida: “DSM generation from stereo aerial images for the reconstruction of the sea-cliff retreat pattern controlled by gullying process, Costa da Galé and Melides sectors (Southwest of Portugal)” - 32nd International Geographical Congress, Cologne, Germany, Aug 2012
      Sea-cliff evolution is an important aspect to take into account when studying the evolution of the world's coastlines. Sea cliffs can suffer erosion induced by storm-wave incidence or by subaerial processes, leading to the retreat of the coastline. However, the sediments released by cliff retreat represent an important sediment source for the coastal system, and in some cases it is essential to include this volume in the sediment budget of the studied coastal area.
      Many methods have been developed to monitor the evolution of sea cliffs, most of them supported by field measurements. In this work we propose the application of a new stereo photogrammetric method to reconstruct the cliff topography, producing a digital surface model (DSM) that reveals the spatial distribution of elevation errors. The model results are complemented by field data (ground control points, GCPs) acquired with a Differential Global Positioning System (DGPS). The method also allows the generation of a coarse digital elevation model (DEM) of the base of the sea cliffs.
      The field study considered two small stretches of the sandy embayed coastline between Tróia and Sines (Southwest of Portugal). In these sectors the backshore of the subaerial beach is limited landward by sea cliffs that suffer subaerial erosion (gullying). The cliffs consist of poorly consolidated sediments (sand, clay, granules and fine pebbles) and exhibit a complex gully morphology between the top and the bottom of the cliff. The sediments eroded by this process are stored at the base of the cliffs in the form of debris fans. During storm periods the subaerial beach narrows significantly and the sediments contained in the debris fans are cut off and transported by the waves, thereby entering the coastal system.
      Two series of digital aerial images at 20 cm resolution, acquired in 2008 and 2009, were used to reconstruct DSMs of the cliffs and monitor the evolution of the complex gully system. A set of 50 GCPs was used to constrain the sensor location and orientation. The method was able to detect the main areas of cliff displacement, although the sensitivity of the camera calibration prevented an absolute estimation of the displacement rate. New field surveys should help improve the results.
      @misc{ref112,
        title = {DSM generation from stereo aerial images for the reconstruction of the sea-cliff retreat pattern controlled by gullying process, Costa da Galé and Melides sectors (Southwest of Portugal)},
        howpublished = {32nd International Geographical Congress},
        url = {https://igc2012.org/frontend/index.php},
        author = {A. Jalobeanu and H. Almeida},
        address = {Cologne, Germany},
        month = {Aug},
        year = {2012}
      }
    • A. Jalobeanu: “Predicting spatial uncertainties in stereo photogrammetry: achievements and intrinsic limitations” - Journal of Spatial Science, submitted, Nov 2011
      We present a new probabilistic method for digital surface model generation from optical stereo pairs, with an expected ability to propagate errors from the data to the final result, providing spatial uncertainty estimates to be used for quantitative analysis in planetary or Earth sciences. Existing stereo-derived surfaces lack rigorous, quantitative error estimates, and we propose to address this issue by deriving a method of error prediction, rather than error assessment as usually done in the area through the use of reference data. We use only the information present in the available data and perform the prediction using Bayesian inference. We start by defining a forward model, using an adaptive radiometric change map to achieve robustness to noise and reflectance effects. A priori smoothness constraints are introduced to stabilize the solution. Solving the inverse problem to recover a surface from noisy data involves fast deterministic optimization techniques. Though the reconstruction results look satisfactory, we conclude that uncertainty estimates computed from two images only are unreliable, which is due to major limitations of stereo, such as non-Lambertian reflectance and incorrect spatial sampling, which violate our underlying assumptions and cause biases that cannot be accounted for in the predicted error budget.
      @unpublished{ref110,
        title = {Predicting spatial uncertainties in stereo photogrammetry: achievements and intrinsic limitations},
        howpublished = {Journal of Spatial Science},
        author = {A. Jalobeanu},
        address = {submitted},
        month = {Nov},
        year = {2011}
      }
    • A. Jalobeanu: “Impact of DEM uncertainties on flood maps: vulnerability of the Portuguese coast to sea level rise” - 6º Simpósio de Meteorologia e Geofísica da APMG, Aldeia dos Capuchos, Portugal, Mar 2009
      Flood maps are usually computed by thresholding digital elevation models (DEM) without taking into account errors on the topography. Even if scientists wish to do so in the future, the only information about DEM uncertainty available now is a RMS error at best. Thus, we propose to use our recent work on uncertainty estimation, allowing us to reconstruct a DEM and the spatial distribution of errors as well. Indeed, relevant flood maps can be derived rigorously if the elevation data comes with error bars. Flood probability maps could be directly computed, either for predefined sea levels, or for uncertain sea level rise predictions coming from global climate change models. The Bayesian framework allows for a rigorous management of various error sources so as to produce physically meaningful vulnerability maps. We plan to apply this methodology to several test sites on the Portuguese coast using high-resolution digital aerial imagery.
      @misc{ref83,
        title = {Impact of DEM uncertainties on flood maps: vulnerability of the Portuguese coast to sea level rise},
        howpublished = {6º Simpósio de Meteorologia e Geofísica da APMG},
        url = {http://simposio.apmg.pt/},
        author = {A. Jalobeanu},
        address = {Aldeia dos Capuchos, Portugal},
        month = {Mar},
        year = {2009}
      }
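Under a Gaussian error assumption, the flood probability map described above has a simple closed form: the per-pixel probability that the water level exceeds the uncertain terrain elevation. A minimal sketch, assuming independent errors on the DEM and on the sea-level prediction (illustrative, not the project's code):

```python
import numpy as np
from math import erf, sqrt

def flood_probability(dem, dem_sigma, sea_level, sea_sigma=0.0):
    """P(water level > elevation) per pixel, with independent Gaussian
    errors on both the DEM and the predicted sea level."""
    s = np.sqrt(np.asarray(dem_sigma, float) ** 2 + sea_sigma ** 2)
    z = (sea_level - np.asarray(dem, float)) / s
    phi = np.vectorize(lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0))))
    return phi(z)   # standard normal CDF of the normalized margin

dem = np.array([[0.5, 1.0], [2.0, 3.0]])   # elevations in meters (toy grid)
sigma = np.full_like(dem, 0.5)             # per-pixel DEM standard error
p = flood_probability(dem, sigma, sea_level=1.0, sea_sigma=0.2)
```

Thresholding `p` instead of the raw DEM turns a binary flood map into a vulnerability map that degrades gracefully where the topography is poorly constrained.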
    • A. Jalobeanu: “Probabilistic Digital Elevation Model Generation For Spatial Accuracy Assessment” - AGU Fall Meeting, San Francisco, CA, USA, Dec 2008
      We propose a new method for the measurement of high resolution topography from a stereo pair. The main application area is the study of planetary surfaces.
      Digital elevation models (DEM) computed from image pairs using state of the art algorithms usually lack quantitative error estimates. This can be a major issue when the result is used to measure actual physical parameters, such as slope or terrain roughness.
      Thus, we propose a new method to infer a dense bidimensional disparity map from two images, that also estimates the spatial distribution of errors. We adopt a probabilistic approach, which provides a rigorous framework for parameter estimation and uncertainty evaluation. All the parameters are described in terms of random variables within a Bayesian framework. We start by defining a forward model, which mainly consists of warping the observed scene using B-Splines and using a spatially adaptive radiometric change map for robustness purposes. An a priori smoothness model is introduced in order to stabilize the solution. Solving the inverse problem to recover the disparity map requires optimizing a global non-convex energy function, which is difficult in practice due to multiple local optima. A deterministic optimization technique based on a multi-grid strategy, followed by a local energy analysis at the optimum, allows us to recover the a posteriori probability density function (pdf) of the disparity, which encodes both the optimal solution and the related error map.
      Finally, the disparity field is converted into a DEM through a geometric camera model. This camera model is either known initially, or calibrated automatically using the estimated disparity map and available measurements of the topography (existing low-resolution DEM or ground control points). Automatic calibration from uncertain disparity and topography measurements allows for efficient error propagation from the initial data to the generated elevation model.
      Results from Mars Express HRSC data are presented. A pair of images (including the nadir view) at 30m resolution was used to obtain a DEM with a vertical accuracy better than 10m in well-textured areas. The lack of information in smooth regions naturally led to large uncertainty estimates.
      @misc{ref82,
        title = {Probabilistic Digital Elevation Model Generation For Spatial Accuracy Assessment},
        howpublished = {AGU Fall Meeting},
        url = {http://www.agu.org/meetings/fm08/},
        author = {A. Jalobeanu},
        address = {San Francisco, CA, USA},
        month = {Dec},
        year = {2008}
      }
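The "local energy analysis at the optimum" amounts to a Laplace approximation: the curvature of the energy at its minimum gives the width of the Gaussian approximation to the posterior. A one-variable sketch (the actual method operates on the full disparity field):

```python
import numpy as np

def laplace_sigma(energy, x_opt, h=1e-4):
    """Standard deviation of the Gaussian posterior approximation, from the
    finite-difference curvature of the energy at its minimum."""
    curv = (energy(x_opt + h) - 2.0 * energy(x_opt) + energy(x_opt - h)) / h ** 2
    return 1.0 / np.sqrt(curv)

# quadratic energy = negative log of a Gaussian with mean 2 and sigma 0.5
energy = lambda x: (x - 2.0) ** 2 / (2.0 * 0.5 ** 2)
sigma = laplace_sigma(energy, 2.0)
```

Flat, textureless regions produce a shallow energy valley, hence a small curvature and a large sigma, which is exactly the behavior reported for the smooth areas of the HRSC data.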
    • S. Sharma: “Camera Calibration for Pushbroom Scanners - A Bayesian Approach” - LSIIT-PASEO report, Dec 2007
      In this research, we address the problem of calibration of pushbroom sensors. Here we propose to use a locally affine camera model obtained using a Bayesian approach for calibration of pushbroom camera parameters, given correspondences between a stereo pair of images. The final aim is to extend this locally affine camera model to develop a reliable 3D model of the imaged area. The advantage of the proposed approach is that we make minimum use of external data such as GCPs, a large number of which may not always be available.
      @techreport{ref74,
        title = {Camera Calibration for Pushbroom Scanners - A Bayesian Approach},
        institution = {LSIIT-PASEO},
        url = {protected/CamCalibration_Report_Swati.pdf},
        author = {S. Sharma},
        month = {Dec},
        year = {2007}
      }
    • J.A. Gutiérrez: “Simplification of the covariance matrix (SpaceFusion project)” - LSIIT-PASEO report, Jan 2007
      Graphical structures may be used to approximate a given stochastic process. In particular, a Markov structure can capture the correlation present in the covariance matrix through the choice of nodes representing auxiliary explanatory variables.
      Specifically, a trade-off regarding the number and connectivity of these nodes must be made to capture variance and covariance without unduly increasing the dimensionality of the Markov model; in other words, a balance between model accuracy and algorithmic tractability is struck while adding or cutting edges and creating or deleting loops in the model.
      To handle the uncertainties of very large covariance matrices, such as those arising from satellite or astronomical imagery, a block-sweeping technique can be used to simplify them without substantial information loss.
      For all simplified model structures, optimality is measured by the Kullback-Leibler (KL) divergence, which quantifies the information loss as a distance between a random vector and its approximation.
      @techreport{ref67,
        title = {Simplification of the covariance matrix (SpaceFusion project)},
        institution = {LSIIT-PASEO},
        url = {protected/matrix_report.pdf},
        author = {J.A. Gutiérrez},
        month = {Jan},
        year = {2007}
      }
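The KL-divergence criterion mentioned in the report has a closed form for Gaussians. A minimal sketch measuring the information lost when a simplified covariance replaces the full one (illustrative, not the report's block-sweeping code):

```python
import numpy as np

def gaussian_kl(Sigma_p, Sigma_q):
    """KL(p || q) between zero-mean Gaussians: the information lost when
    the simplified covariance Sigma_q replaces the full Sigma_p."""
    n = Sigma_p.shape[0]
    tr = np.trace(np.linalg.solve(Sigma_q, Sigma_p))
    logdet = np.linalg.slogdet(Sigma_q)[1] - np.linalg.slogdet(Sigma_p)[1]
    return 0.5 * (tr - n + logdet)

# dropping the off-diagonal terms of a correlated covariance costs information
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
loss = gaussian_kl(Sigma, np.diag(np.diag(Sigma)))
```

Comparing this loss across candidate edge sets is how the accuracy/tractability trade-off described above can be scored.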

    Highlights – Recent Results

    Follow-up projects

    • AutoProbaDSM, funded by the Portuguese science foundation (FCT), 2009-2012. Fully automated probabilistic digital surface model generation from optical stereo pairs, using Bayesian inference. Application: build a high-resolution DSM of continental Portugal and the related spatial accuracy map.
    • DAHLIA (Work package Fusion), funded by ANR, 2009-2012. Hyperspectral data cube fusion: design of a forward model of the MUSE instrument, sequential fusion to deal with the large size of data sets, and inversion.
    • Coastal morphology monitoring in Portugal: erosion monitoring and probabilistic flood hazard assessment in the Troia-Sines arc. Digital aerial image data acquired thanks to the funding provided through SpaceFusion. Use of probabilistic surface model generation to quantify the topographic changes between two dates and to build fuzzy flood hazard maps.

    Main contributions

    • The use of a single parametric model that contains useful information from various sources, in a natural and user-friendly way.
    • Modeling of the image formation process (geometry, point spread function, sampling), depending on the application area and data type, based on the physics of image acquisition, through parametric or non-parametric models that are simple but flexible.
    • Bayesian inference and graphical model theory for forward modeling, and inversion by way of marginalization and functional optimization, together with the related approximations necessary to develop a fast and deterministic algorithm. This also enables us to estimate model uncertainties.
    • Data fusion used as a tool to deal with data redundancy while taking advantage of their complementarity, implicitly minimizing the information loss. Thanks to uncertainty evaluation, the fusion can be performed recursively when new data become available to update the current model, thus allowing for large amounts of data to be processed.

    Recent results - Astronomy

    Accurate resampling scheme based on B-Splines, developed within the 2D data fusion project in astronomy.
    2D super-resolution in astronomy with inverse covariance map computation (synthetic data).

    Recent results - Planetary and remote sensing

    Directed graphical model (Bayesian network) showing the relations between the random variables of the model used for disparity map inference.
    Motion field estimated from a pair of multidate SPOT 5 images taken before and after an earthquake (Bam, Iran), showing the coseismic deformation.
    Disparity map and uncertainty estimated from a pair of multidate SPOT 5 images, clearly showing the topography despite the small stereo parallax.
    Preliminary results showing a DEM generated from Mars Express (HRSC) stereo images, and the related uncertainty map, using MOLA as a reference DEM for linear camera calibration.

    Documents

    Some of the pdf files may be password-protected.

    Progress reports

    Other documents


    Webmaster (C. Collet), (M.Louys)
    © IPSEO Group 2005-2016 | WebSite Info & Credits
    Last update: Dec 8, 2016