A&A, Volume 647, March 2021
Article Number: A105
Number of pages: 25
Section: Numerical methods and codes
DOI: https://doi.org/10.1051/0004-6361/201936399
Published online: 17 March 2021
Unmixing methods based on nonnegativity and weakly mixed pixels for astronomical hyperspectral datasets
IRAP, Université de Toulouse, UPS, CNRS, CNES, 9 Av. colonel Roche, BP 44346, 31028 Toulouse cedex 4, France
email: olivier.berne@gmail.com
Received: 29 July 2019
Accepted: 11 November 2020
An increasing number of astronomical instruments (on Earth and space-based) provide hyperspectral images, that is, three-dimensional data cubes with two spatial dimensions and one spectral dimension. The intrinsic limitation in spatial resolution of these instruments implies that the spectra associated with pixels of such images are most often mixtures of the spectra of the "pure" components that exist in the considered region. In order to estimate the spectra and spatial abundances of these pure components, we here propose an original blind signal separation (BSS), that is to say unsupervised, unmixing method. Our approach is based on extensions and combinations of linear BSS methods that belong to two major classes of methods, namely nonnegative matrix factorization (NMF) and sparse component analysis (SCA). The former performs the decomposition of hyperspectral images, as a set of pure spectra and abundance maps, by using nonnegativity constraints, but the estimated solution is not unique: it highly depends on the initialization of the algorithm. The considered SCA methods are based on the assumption that there exist points or tiny spatial zones where only one source is active (i.e., only one pure component is present). These points or zones are then used to estimate the mixture and perform the decomposition. In real conditions, the assumption of perfect single-source points or zones is not always realistic. In such conditions, SCA yields approximate versions of the unknown sources and mixing coefficients. We propose to use part of these preliminary estimates from the SCA to initialize several runs of the NMF in order to refine these estimates and further constrain the convergence of the NMF algorithm. The proposed methods also estimate the number of pure components involved in the data, and they provide error bars associated with the obtained solution.
Detailed tests with synthetic data show that the decomposition achieved with such hybrid methods is nearly unique and provides good performance, illustrating the potential of applications to real data.
Key words: methods: statistical / methods: numerical
© A. Boulais et al. 2021
Open Access article, published by EDP Sciences, under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Telescopes keep growing in diameter, and detectors are more and more sensitive and made up of an increasing number of pixels. Hence, the number of photons that can be captured by astronomical instruments, in a given amount of time and at a given wavelength, has increased significantly, thus allowing astronomy to go hyperspectral. More and more, astronomers do not deal with 2D images or 1D spectra, but with a combination of these media resulting in three-dimensional (3D) data cubes (two spatial dimensions, one spectral dimension). We hereafter provide an overview of the instruments that provide hyperspectral data in astronomy, mentioning specific examples without any objective to be exhaustive. Several integral field unit spectrographs (e.g., MUSE on the Very Large Telescope) provide spectral cubes at visible wavelengths, yielding access to the optical tracers of ionized gas (see for instance Weilbacher et al. 2015). Infrared missions such as the Infrared Space Observatory (ISO) and Spitzer performed spectral mapping in the mid-infrared, a domain that is particularly suited to observe the emission of UV-heated polycyclic aromatic hydrocarbons (e.g., Cesarsky et al. 1996; Werner et al. 2004). At millimeter wavelengths, large spectral maps in rotational lines of abundant molecules (typically CO) have been used for several decades to trace the dynamics of molecular clouds (e.g., Bally et al. 1987; Miesch & Bally 1994; Falgarone et al. 2009). The PACS, SPIRE, and HIFI instruments on board Herschel all have a mode that allows for spectral mapping in atomic and molecular lines (e.g., Van Kempen et al. 2010; Habart et al. 2010; Joblin et al. 2010). Owing to its high spectral resolution, HIFI allows one to resolve the profiles of these lines, enabling one to study the kinematics of, for example, the immediate surroundings of protostars (Kristensen et al. 2011) or of star-forming regions (Pilleri et al. 2012) using radiative transfer models.
Similarly, the GREAT instrument on board the Stratospheric Observatory For Infrared Astronomy (SOFIA) now provides large-scale spectral maps in the C^{+} line at 1.9 THz (Pabst et al. 2017). The Atacama Large Millimeter Array (ALMA) also provides final products that are spectral cubes (see e.g., Goicoechea et al. 2016). A majority of astronomical spectrographs to be employed at large observatories in the future will provide spectral maps. This is the case for the MIRI and NIRSpec instruments on the James Webb Space Telescope (JWST) and the METIS instrument on the Extremely Large Telescope (ELT).
Although such 3D datasets have become common, few methods have been developed by astronomers to analyze the outstanding amount of information they contain. Classical analysis methods tend to decompose the spectra by fitting them with simple functions (typically mixtures of Gaussians), but this has several disadvantages: (1) the a priori assumption made by the use of a given function is usually not founded physically, (2) if the number of parameters is high, the result of the fit may be degenerate, (3) for large datasets and fitting with nonlinear functions, the fitting may be very time consuming, (4) initial guesses must be provided, and (5) the spectral fitting is usually performed on a (spatial) pixel-by-pixel basis, so that the extracted components are spatially independent, whereas physical components are often present at large scales in the image. An alternative is to analyze the data by means of principal component analysis (e.g., Neufeld et al. 2007; Gratier et al. 2017), which provides a representation of the data in an orthogonal basis of a subspace, thus allowing interpretation. However, this may be limited by the fact that the principal components are orthogonal, and hence they are not easily interpretable in physical terms. An alternative analysis was proposed by Juvela et al. (1996), which is based on a blind signal separation (BSS) approach. It consists in decomposing spectral cubes (in their case, CO spectral maps) into the product of a small number of spectral components, or "end members", and spatial "abundance" maps.
This requires no a priori assumption on the spectral properties of the components, and hence this can provide deeper insights into the physical structure represented in the data, as demonstrated in this pioneering paper. This method uses the positivity constraint for the maps and spectra (all their points must be positive) combined with the minimization of a statistical criterion to derive the maps and spectral components. This method is referred to as positive matrix factorization (PMF, Paatero & Tapper 1994). Although it contained the original idea of using positivity as a constraint to estimate a matrix product, this work used a classical optimization algorithm. Several years later, Lee & Seung (1999) introduced a novel algorithm to perform PMF using simple multiplicative iterative rules, making the PMF algorithm extremely fast. This algorithm is usually referred to as Lee and Seung's nonnegative matrix factorization (NMF) and has been widely used in a vast number of applications outside astronomy. This algorithm has proved to be efficient, including in astrophysical applications (Berné et al. 2007). However, NMF has several disadvantages: (1) the number of spectra to be extracted must be given by the user, (2) the error bars related to the procedure are not derived automatically, (3) convergence to a unique point is not guaranteed and may depend on initialization (see Donoho & Stodden 2003 on these latter aspects). When applying NMF to astronomical hyperspectral data, the above drawbacks become critical and can jeopardize the integrity of the results.
In this paper, we evaluate possibilities to improve the application of BSS to hyperspectral positive data by hybridizing NMF with sparsity-based algorithms. Here, we focus on synthetic data, so as to perform a detailed comparison of the performances of the proposed approaches. A first application to real data of one of the methods presented here is provided in Foschino et al. (2019). The proposed methods should be applicable to any hyperspectral dataset fulfilling the properties that we describe hereafter. The paper is organized as follows. In the next section we present the adopted mathematical model for hyperspectral astronomical data, using two possible conventions, spatial or spectral. We describe the mixing model and the associated "blind signal separation" (BSS) problem. In Sect. 3, we describe the preliminary steps (preprocessing steps) that are required before applying the proposed algorithms. In Sect. 4 we describe in detail the three methods that are used in this paper, that is, NMF (with an extension using a Monte-Carlo approach, referred to as MCNMF) and two methods based on sparsity (SpaceCorr and Maximum Angle Source Separation, MASS). We then detail how MCNMF can be hybridized with the latter two methods.
In Sect. 5, a comparative performance analysis of the studied methods is performed. We conclude in Sect. 6.
2. Data model and blind source separation problem
The observed data consist of a spectral cube of dimension P_{x} × P_{y} × N, where (p_{x}, p_{y}) define the spatial coordinates and f is the spectral index. To help one interpret the results, the spectral index is hereafter expressed as a Doppler-shift velocity in km/s, using v = c × (f − f_{0})/f_{0}, with f the observed frequency, f_{0} the emitted frequency, and c the speed of light. We assume that all observed values in the cube are nonnegative. We call each vector recorded at a position (p_{x}, p_{y}) a "spectrum" and we call each matrix recorded at a given velocity a "spectral band". Each observed spectrum corresponding to a given pixel results from a mixture of the different kinematic components that are present on the line of sight of the instrument. Mathematically, the observed spectrum obtained for one pixel is then a combination (which will be assumed to be linear and instantaneous) of elementary spectra.
In order to recover these elementary spectra, one can use methods known as Blind Source Separation (BSS). BSS consists in estimating a set of unknown source signals from a set of observed signals that are mixtures of these source signals. The linear mixing coefficients are unknown and are also to be estimated. The observed spectral cube is then decomposed as a set of elementary spectra and a set of abundance maps (the contributions of elementary spectra in each pixel).
Considering BSS terminology and a linear mixing model, the matrix containing all observations is expressed as the product of a mixing matrix and a source matrix. Therefore, it is necessary here to restructure the hyperspectral cube into a matrix and to identify what we call “observations”, “samples”, “mixing coefficients”, and “sources”. A spectral cube can be modeled in two different ways: a spectral model where we consider the cube as a set of spectra and a spatial model where we consider the cube as a set of images (spectral bands), as detailed hereafter.
2.1. Spectral model
For the spectral data model, we define the observations as being the spectra. The data cube is reshaped into a new matrix of observations Obs (the variables defined in this section are summarized in Table 1), where the rows contain the P_{x} × P_{y} = M observed spectra, arranged in any order and indexed by m. Each column of Obs corresponds to a given spectral sample, with an integer-valued index also denoted as v ∈ {1, …, N} for all observations. Each observed spectrum obs(m, .) is a linear combination of L (L ≪ M) unknown elementary spectra and yields a different mixture of the same elementary spectra:

obs(m, v) = ∑_{ℓ = 1}^{L} map(m, ℓ) spec(ℓ, v), v ∈ {1, …, N}, (1)
Table 1. Major variables.
where obs(m, v) is the vth sample of the mth observation, spec(ℓ, v) is the vth sample of the ℓth elementary spectrum, and map(m, ℓ) defines the contribution scale of elementary spectrum ℓ in observation m. Using the BSS terminology, map stands for the mixing coefficients and spec stands for the sources. This model can be rewritten in matrix form:

Obs = Map Spec, (2)

where Map is an M × L mixing matrix and Spec is an L × N source matrix.
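As a concrete illustration of this matrix convention, the following sketch builds a toy cube from the model Obs = Map Spec (Eq. (2)); all dimensions and the Gaussian elementary spectra are arbitrary choices for illustration, not values from the paper:

```python
import numpy as np

# Toy dimensions: a Px x Py x N cube mixing L = 2 elementary spectra.
Px, Py, N, L = 4, 3, 50, 2
M = Px * Py
rng = np.random.default_rng(0)

# Map (M x L): nonnegative mixing coefficients; Spec (L x N): elementary spectra
# (two Gaussians with arbitrary centers, standing in for kinematic components).
Map = rng.random((M, L))
centers = np.array([[15.0], [35.0]])
Spec = np.exp(-0.5 * ((np.arange(N) - centers) / 4.0) ** 2)

# Matrix form of the mixing model: each row of Obs is one observed spectrum.
Obs = Map @ Spec

# Folding the M rows back onto the Px x Py grid recovers the spectral cube;
# reshaping the cube again is the inverse operation.
cube = Obs.reshape(Px, Py, N)
assert np.allclose(cube.reshape(M, N), Obs)
```

The reshaping is lossless as long as the pixel ordering is fixed, which is why the text allows the spectra to be "arranged in any order".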
2.2. Spatial model
For the spatial data model, we define the observations as being the spectral bands. The construction of the spatial model is performed by transposing the spectral model (2). In this configuration, the rows of the observation matrix Obs^{T} (the transpose of the original matrix of observations Obs) contain the N spectral bands, reshaped with a one-dimensional structure. Each column of Obs^{T} corresponds to a given spatial sample index m ∈ {1, …, M} for all observations (i.e., each column corresponds to a pixel). Each spectral band Obs^{T}(v, .) is a linear combination of L (L ≪ N) unknown abundance maps and yields a different mixture of the same abundance maps:

Obs^{T} = Spec^{T} Map^{T}, (3)

where Spec^{T} is the N × L mixing matrix and Map^{T} is the L × M source matrix. In this alternative data model, the elementary spectra in Spec^{T} stand for the mixing coefficients and the abundance maps in Map^{T} stand for the sources.
2.3. Problem statement
In this section, to simplify notations, we denote the mixing matrix as A and the source matrix as S, whatever the nature of the adopted model (spatial or spectral); the following remarks are valid in both cases.
The goal of BSS methods is to find estimates of the mixing matrix A and the source matrix S, respectively denoted as Â and Ŝ, such that:

X ≈ Â Ŝ. (4)
However, this problem is ill-posed. Indeed, if (Â, Ŝ) is a solution, then (Â P, P^{−1} Ŝ) is also a solution for any invertible matrix P. To achieve the decomposition, we must add two extra constraints. The first one is a constraint on the properties of the unknown matrices Â and/or Ŝ. The type of constraint (independence of sources, nonnegative matrices, sparsity) leads directly to the class of methods that will be used for the decomposition. The case of linear instantaneous mixtures was first studied in the 1980s, and three classes of methods have since become important:
– Independent component analysis (ICA; Cardoso 1998; Hyvärinen et al. 2001): It is based on a probabilistic formalism and requires the source signals to be mutually statistically independent. Until the early 2000s, ICA was the only class of methods available to achieve BSS.
– Nonnegative matrix factorization (NMF; Lee & Seung 1999): It requires the source signals and mixing coefficients values to be nonnegative.
– Sparse component analysis (SCA; Gribonval & Lesage 2006): It requires the source signals to be sparse in the considered representation domain (time, time-frequency, time-scale, wavelet, etc.).
The second constraint is to determine the dimensions of Â and Ŝ. Two of these dimensions are obtained directly from the observations X (M and N). The third dimension, common to both matrices, is the number of sources L, which must be estimated.
Here, we consider astrophysical hyperspectral data that have the properties listed below. These are relatively general properties that are applicable to a number of cases involving Herschel-HIFI, ALMA, Spitzer, JWST, etc.:
– They do not satisfy the condition of independence of the sources. In our simulated data, elementary spectra have, by construction, similar variations (Gaussian spectra with different means, see Sect. 5.1). Likewise, abundance maps associated with each elementary spectrum have similar shapes. Such data involve nonzero correlation coefficients between elementary spectra and between abundance maps. Hence ICA methods will not be discussed in this paper.
– These data are nonnegative if we disregard noise. Each pixel provides an emission spectrum, hence composed of positive or zero values. Such data thus correspond to the conditions of use of NMF that we detail in Sect. 4.1.
– If we consider the data in a spatial framework (spatial model), the cube provides a set of images. We can then formulate the hypothesis that there are regions in these images where only one source is present. This is detailed in Sect. 4.2. This hypothesis then refers to a “sparsity” assumption in the data and SCA methods are then applicable to hyperspectral cubes. On the contrary, sparsity properties do not exist in the spectral framework in our case, as discussed below.
– If the data have some sparsity properties, adding the nonnegativity assumption enables the use of geometric methods. The geometric methods are a subclass of BSS methods based on the identification of the convex hull containing the mixed data. However, the majority of geometric methods, which are used in hyperspectral unmixing in Earth observation, are not applicable to Astrophysics because they set an additional constraint on the data model: they require all abundance coefficients to sum to one in each pixel, which changes the geometrical representation of the mixed data. On the contrary, in Sect. 4.3, we introduce a geometric method called MASS, for Maximum Angle Source Separation (Boulais et al. 2015), which may be used in an astrophysical context (i.e., for data respecting the models presented above).
The sparsity constraint required for SCA and geometric methods is carried by the source matrix S. These methods may therefore potentially be applied in two ways to the above-defined data: either we suppose that there exist spectral indices for which a unique spectral source is nonzero, or we suppose that there exist some regions in the image for which a unique spatial source is nonzero. In our context of studying the properties of photodissociation regions, only the second case is realistic. Thus only the mixing model (3) is relevant. Therefore, throughout the rest of this paper, we only use the spatial data model (3), and we here define the associated final notations and vocabulary: let X = Obs^{T} be the (N × M) observation matrix, A = Spec^{T} the (N × L) mixing matrix containing the elementary spectra, and S = Map^{T} the (L × M) source matrix containing the spatial abundance maps, each associated with an elementary spectrum.
Moreover, we note that in the case of the NMF, the spectral and spatial models are equivalent but the community generally prefers the more intuitive spectral model.
Before thoroughly describing the algorithms used for the aforementioned BSS methods, we present preprocessing stages required for the decomposition of data cubes.
3. Data preprocessing
3.1. Estimation of number of sources
An inherent problem in BSS is to estimate the number L of sources (the dimension shared by the Â and Ŝ matrices). This parameter must be fixed before performing the decomposition in the majority of cases. Here, this estimate is based on the eigendecomposition of the covariance matrix of the data. As in principal component analysis (PCA), we look for the minimum number of components that most contribute to the total variance of the data. Thus, the number of large eigenvalues is the number of sources in the data. Let Σ_{X} be the (N × N) covariance matrix of the observations X:

Σ_{X} = (1/M) X_{c} X_{c}^{T} = ∑_{i = 1}^{N} λ_{i} e_{i} e_{i}^{T}, (5)
where λ_{i} is the ith eigenvalue associated with eigenvector e_{i}, and X_{c} is the matrix of centered data (i.e., each observation has zero mean).
The eigenvalues of Σ_{X} have the following properties (their proofs are available in Deville et al. 2014):
Property 1. For noiseless data (X_{0} = AS), the Σ_{X} matrix has L positive eigenvalues and N − L eigenvalues equal to zero.
The number L of sources is therefore simply inferred from this property. Now, we consider data with an additive spatially white noise E, with standard deviation σ_{E}, i.e., X = X_{0} + E. The relation between the covariance matrix Σ_{X0} of the noiseless data and the covariance matrix Σ_{X} of the noisy data is then:

Σ_{X} = Σ_{X0} + σ_{E}² I_{N}, (6)
where I_{N} is the identity matrix.
Property 2. The eigenvalues λ of Σ_{X} and the eigenvalues λ_{0} of Σ_{X0} are linked by:

λ = λ_{0} + σ_{E}². (7)
These two properties then show that the ordered eigenvalues λ_{(i)} of Σ_{X} for a mixture of L sources are such that:

λ_{(1)} ≥ … ≥ λ_{(L)} > λ_{(L + 1)} = … = λ_{(N)} = σ_{E}². (8)
But in practice, because of the limited number of samples and since the strong assumption of a white noise with the same standard deviation in all pixels is not fulfilled, the equality is not met. However, the differences between the eigenvalues λ_{(L + 1)}, …, λ_{(N)} are small compared to the differences between the eigenvalues λ_{(1)}, …, λ_{(L)}. The curve of the ordered eigenvalues is therefore constituted of two parts. The first part, Ω_{S}, contains the first L eigenvalues associated with a strong contribution in the total variance. In this part, eigenvalues are significantly different. The second part, Ω_{E}, contains the other eigenvalues, associated with noise. In this part, eigenvalues are similar.
The aim is then to identify from which rank r = L + 1 eigenvalues no longer vary significantly. To this end, we use a method based on the gradient of the curve of ordered eigenvalues (Luo & Zhang 2000) in order to identify a break in this curve (see Fig. 5).
Moreover, a precaution must be taken concerning the difference between λ_{(L)} and λ_{(L + 1)}. In simulations, we found that even in the noiseless case, it is possible that the last eigenvalues of Ω_{S} are close to zero. Thus, for very noisy mixtures, the differences between these eigenvalues become negligible relative to the noise variance σ_{E}². These eigenvalues are then associated with Ω_{E}, and therefore the rank r where a "break" appears will be underestimated.
The procedure described by Luo & Zhang (2000) is as follows:
1. Compute the eigendecomposition of the covariance matrix Σ_{X} and arrange the eigenvalues in decreasing order.
2. Compute the gradient of the curve of the logarithm of the first L (typically L = 20) ordered eigenvalues:

∇λ_{(i)} = log λ_{(i + 1)} − log λ_{(i)}, i ∈ {1, …, L − 1}. (9)

3. Compute the average gradient of all these eigenvalues:

∇λ̄ = (1/(L − 1)) ∑_{i = 1}^{L − 1} ∇λ_{(i)}. (10)

4. Find all i satisfying |∇λ_{(i)}| ≤ |∇λ̄| to construct the set ℐ.
5. Select the index r such that it is the first one of the last continuous block of i in the set ℐ.
6. The number of sources is then L = r − 1.
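The procedure above can be sketched as follows on a synthetic mixture; the flatness test against the average gradient is a simplified reading of the Luo & Zhang (2000) criterion, and all sizes and noise levels are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic mixture with a known number of sources.
L_true, N, M = 3, 30, 2000
A = rng.random((N, L_true))                     # elementary spectra (mixing matrix)
S = rng.random((L_true, M))                     # abundance maps (sources)
X = A @ S + 0.01 * rng.standard_normal((N, M))  # noisy observations

# Eigendecomposition of the covariance of the centered observations.
Xc = X - X.mean(axis=1, keepdims=True)
eigvals = np.sort(np.linalg.eigvalsh(Xc @ Xc.T / M))[::-1]  # decreasing order

# Break detection on the log-eigenvalue curve (simplified thresholding rule).
K = 20                                  # number of leading eigenvalues examined
g = np.diff(np.log10(eigvals[:K]))      # gradient of the log-eigenvalue curve
flat = np.abs(g) <= np.abs(g).mean()    # transitions flatter than the average
j = len(flat)
while j > 0 and flat[j - 1]:            # last continuous block of flat transitions
    j -= 1
L_est = j                               # its first index gives the source count
```

On this toy mixture the first three eigenvalues dominate and the curve flattens right after them, so `L_est` recovers the true number of sources.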
3.2. Noise reduction
The observed spectra are contaminated by noise. In synthetic data, this noise is added assuming it is white and Gaussian. Noise in real data may have different properties, however the aforementioned assumptions are made here in order to evaluate the sensitivity of the method to noise in the general case. To improve the performance of the above BSS methods, we propose different preprocessing stages to reduce the influence of noise on the results.
The first preprocessing stage consists of applying a spectral thresholding, i.e., only the continuous range of v containing signal is preserved. Typically, many of the first and last channels contain only noise and are therefore unnecessary for the BSS. This is done for all BSS methods presented in the next section.
The second preprocessing stage consists of applying a spatial thresholding. Here, we must distinguish the case of each BSS method, because the SCA method requires retaining the spatial structure of the data. For NMF, the observed spectra (columns of X) whose "normalized power" is lower than a threshold α_{e} are discarded. Typically, some spectra contain only noise and are therefore unnecessary for the spectra estimation step (Sect. 4.1). In our application, we set the threshold to . For the SCA method, some definitions are necessary to describe this spatial thresholding step. This procedure is therefore presented in the section regarding the method itself (Sect. 4.2).
Finally, synthetic and actual data from the HIFI instrument contain some negative values due to noise. To comply with the nonnegativity assumption of NMF, these values are reset to ϵ = 10^{−16}.
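The three preprocessing stages can be sketched as follows on a toy observation matrix; the band-selection criterion and the value of α_e below are placeholders, since the paper's actual threshold values are not given here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy observation matrix X (N bands x M pixels): signal confined to channels
# 10..29 and pixels 0..59, plus additive white Gaussian noise.
N, M = 40, 100
X = np.zeros((N, M))
X[10:30, :60] = rng.random((20, 60))
X += 0.01 * rng.standard_normal((N, M))

# 1. Spectral thresholding: keep the continuous range of channels carrying signal.
band_power = (X ** 2).mean(axis=1)
keep = np.flatnonzero(band_power > 10 * band_power.min())  # hypothetical criterion
X = X[keep[0]:keep[-1] + 1, :]

# 2. Spatial thresholding (NMF case): discard pixels with low normalized power.
pix_power = (X ** 2).sum(axis=0)
alpha_e = 0.01                    # placeholder value, not the paper's threshold
X = X[:, pix_power / pix_power.max() >= alpha_e]

# 3. Reset residual negative values to a small epsilon, as required by NMF.
X = np.maximum(X, 1e-16)
```

After these steps only the 20 signal-bearing channels and the 60 signal-bearing pixels remain, and all entries are strictly nonnegative.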
4. Blind signal separation methods
4.1. Nonnegative matrix factorization and our extension
NMF is a class of methods introduced by Lee & Seung (1999). The standard algorithm iteratively and simultaneously computes Â and Ŝ, minimizing an objective function of the initial matrix X and the product Â Ŝ. In our case, we use the minimization of the Euclidean distance δ = ‖X − Â Ŝ‖², using the multiplicative update rules:

Â ← Â ⊙ (X Ŝ^{T}) ⊘ (Â Ŝ Ŝ^{T}), (11)

Ŝ ← Ŝ ⊙ (Â^{T} X) ⊘ (Â^{T} Â Ŝ), (12)
where ⊙ and ⊘ are respectively the elementwise product and division.
Lee and Seung show that the Euclidean distance δ is nonincreasing under these update rules (Lee & Seung 2001), so that, starting from random Â and Ŝ matrices, the algorithm converges toward a minimum of δ. We estimate that convergence is reached when:

|δ^{(i)} − δ^{(i + 1)}| / δ^{(i)} < κ, (13)

where i corresponds to the iteration and κ is a threshold, typically set to 10^{−4}.
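A minimal sketch of these update rules and the stopping criterion, assuming the Euclidean objective δ = ‖X − ÂŜ‖² above; the final unit-norm column normalization is one assumed form of the normalization constraint discussed later:

```python
import numpy as np

def nmf(X, L, n_iter=2000, kappa=1e-4, seed=None):
    """Lee & Seung multiplicative updates with the relative-decrease stopping
    rule; the unit-norm column normalization at the end is an assumption."""
    rng = np.random.default_rng(seed)
    N, M = X.shape
    A = rng.random((N, L)) + 1e-3            # random nonnegative initialization
    S = rng.random((L, M)) + 1e-3
    delta_prev = np.linalg.norm(X - A @ S) ** 2
    for _ in range(n_iter):
        A *= (X @ S.T) / (A @ S @ S.T + 1e-12)   # element-wise product/division
        S *= (A.T @ X) / (A.T @ A @ S + 1e-12)
        delta = np.linalg.norm(X - A @ S) ** 2
        if abs(delta_prev - delta) / delta_prev < kappa:
            break                             # relative decrease below kappa
        delta_prev = delta
    norms = np.linalg.norm(A, axis=0)         # remove per-source scale factors
    return A / norms, S * norms[:, None]

# Toy nonnegative data drawn from an exact rank-2 model.
rng = np.random.default_rng(3)
X = rng.random((30, 2)) @ rng.random((2, 200))
A_hat, S_hat = nmf(X, L=2, kappa=1e-10, seed=3)
err = np.linalg.norm(X - A_hat @ S_hat) / np.linalg.norm(X)
```

The small additive constants in the denominators only guard against division by zero; they do not change the fixed points of the updates.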
The main drawback of standard NMF is the non-uniqueness of the decomposition. The algorithm is sensitive to the initialization due to the existence of local minima of the objective function (Cichocki et al. 2009). The convergence point highly depends on the distance between the initial point and a global minimum. A random initialization without additional constraint is generally not satisfactory. To improve the quality of the decomposition, several solutions are possible:
– Use a Monte-Carlo analysis to estimate the elementary spectra and then rebuild the abundance maps (Berné et al. 2012).
– Further constrain the convergence by altering the initialization (Langville et al. 2006).
– Use additional constraints on the sources and/or mixing coefficients, such as sparsity constraints (Cichocki et al. 2009), or geometric constraints (Miao & Qi 2007).
The addition of geometric constraints is usually based on the sum-to-one of the abundance coefficients for each pixel (∑_{ℓ = 1}^{L} Ŝ(ℓ, m) = 1 for each pixel m). This condition is not realistic in an astrophysical context, where the total power received by the detectors varies from one pixel to another. Therefore, this type of constraint cannot be applied here. A standard type of sparsity constraint imposes a sparse representation of the estimated matrices Â and/or Ŝ in the following sense: the spectra and/or the abundance maps have a large number of coefficients equal to zero or negligible. Once again, this property is not verified in the data that we consider, and so this type of constraint cannot be applied. However, the above type of sparsity must be distinguished from the sparsity properties exploited in the SCA methods used in this paper. This is discussed in Sects. 4.2 and 4.3, dedicated to these methods.
Moreover, the well-known indeterminacies of BSS appear in the estimated matrices Â and Ŝ. The first one is a possible permutation of the sources in Ŝ. The second one is the presence of a scale factor per estimated source. To offset these scale factors, the estimated source spectra are normalized so that:

‖a_{ℓ}‖ = 1, ℓ ∈ {1, …, L}, (14)

where a_{ℓ} is the ℓth column of A. This normalization allows the abundance maps to be expressed in physical units.
To improve the results of standard NMF, we extend it as follows. First, the NMF is amended to take into account the normalization constraint (14): at each iteration (i.e., each update of Â according to (11)), the spectra are normalized in order to avoid the scale indeterminacies. Then, NMF is complemented by a Monte-Carlo analysis, described hereafter. Finally, we propose an alternative that initializes NMF with results from one of the SCA methods described in Sects. 4.2 and 4.3.
The NMF-based method used here (called MCNMF hereafter), combining standard NMF, normalization, and a Monte-Carlo analysis, has the following structure:
– The Monte-Carlo analysis stage gives the most probable samples of the elementary spectra, and the error bars associated with these estimates, as provided by the normalized NMF.
– The combination stage recovers the abundance map sources from the above estimated elementary spectra and the observations.
These two stages are described hereafter:
1. Monte-Carlo analysis. Assuming that the number of sources L is known (refer to Sect. 3.1 for its estimation), NMF is run p times, with different initial random matrices for each trial (p is typically equal to 100). In each run, a set of L elementary spectra is identified. The total number of spectra obtained at the end of this process is p × L. These spectra are then grouped into L sets {ω_{1}, ω_{2}, …, ω_{L}}, each set representing the same column of Â. To achieve this clustering, the method uses the K-means algorithm (Theodoridis & Koutroumbas 2009) with a correlation criterion, as provided in Matlab (kmeans). More details about the K-means algorithm are provided in Appendix B.
To then derive the estimated value of each elementary spectrum at each velocity v in a set ω_{ℓ}, we estimate the probability density function (pdf) f_{ω_{ℓ}, v} from the available p intensities with the Parzen kernel method, as provided in Matlab (ksdensity). The Parzen kernel (Theodoridis & Koutroumbas 2009) is a nonparametric method to estimate the pdf of a random variable at any point of its support. For more details about this method, refer to Appendix B.
Each estimated elementary spectrum is obtained by selecting the intensity u that has the highest probability at a given velocity:

â_{ℓ}(v) = arg max_{u} f_{ω_{ℓ}, v}(u). (15)
The estimation error at each velocity v for a given elementary spectrum is obtained by selecting the intensities whose pdf values are equal to max(f_{ω_{ℓ}, v})/2. Let [α_{ℓ}(v), β_{ℓ}(v)] be the error interval of â_{ℓ}(v), such that:

f_{ω_{ℓ}, v}(α_{ℓ}(v)) = f_{ω_{ℓ}, v}(β_{ℓ}(v)) = max(f_{ω_{ℓ}, v})/2. (16)

The two endpoints α_{ℓ}(v) and β_{ℓ}(v) are respectively the lower and upper error bounds at each velocity. We illustrate this procedure in Fig. 1, showing an example of pdf annotated with the different characteristic points defined above.
Fig. 1. Probability density function f_{ω_{ℓ}, v} of the intensities of the set ω_{ℓ} at a given velocity v.
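The per-velocity pdf estimation, mode selection, and half-maximum interval can be sketched as follows, with a hand-rolled Gaussian kernel estimate standing in for Matlab's ksdensity (the Silverman-like bandwidth rule is an assumption); the K-means grouping step is omitted:

```python
import numpy as np

def mode_and_halfmax(samples):
    """Most probable value and half-maximum interval of a 1-D set of intensities,
    via a simple Gaussian kernel density estimate (bandwidth rule assumed)."""
    samples = np.asarray(samples, dtype=float)
    bw = 1.06 * samples.std() * len(samples) ** -0.2  # Silverman-like bandwidth
    grid = np.linspace(samples.min(), samples.max(), 512)
    pdf = np.exp(-0.5 * ((grid[:, None] - samples) / bw) ** 2).sum(axis=1)
    u = grid[pdf.argmax()]              # intensity with the highest probability
    half = grid[pdf >= pdf.max() / 2]   # intensities whose pdf >= max/2
    return u, half.min(), half.max()    # estimate and [alpha, beta] error bounds

# Pretend the p = 100 NMF runs yielded noisy copies of one spectral sample near 2.0.
rng = np.random.default_rng(4)
intensities = 2.0 + 0.1 * rng.standard_normal(100)
u, alpha, beta = mode_and_halfmax(intensities)
```

The pdf is left unnormalized because only the location of its maximum and of the half-maximum crossings matter here.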
2. Combination stage. This final step consists of estimating the L spatial sources from the estimated elementary spectra and the observations, under the nonnegativity constraint. Thus, for each observed spectrum of index m ∈ {1, …, M}, the sources are estimated by minimizing the objective function:

‖x_{m} − Â ŝ_{m}‖², under the constraint ŝ_{m} ≥ 0, (17)
where x_{m} is the mth observed spectrum (i.e., the mth column of X) and ŝ_{m} the estimate of the spatial contributions associated with each elementary spectrum (i.e., the mth column of Ŝ). This is done by using the classical nonnegative least squares algorithm (Lawson & Hanson 1974). We here use the version of this algorithm provided in Matlab (lsqnonneg). The abundance maps are obtained by resizing the rows of Ŝ into P_{x} × P_{y} matrices (the reverse of the process used to build the observation matrix).
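The combination stage amounts to one nonnegative least-squares problem per pixel; in this sketch, scipy.optimize.nnls plays the role of Matlab's lsqnonneg, and the spectra and abundances are toy values:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)

# Estimated elementary spectra A_hat (N x L) and one observed pixel spectrum
# x_m generated from known nonnegative abundances (toy values).
N, L = 40, 3
A_hat = rng.random((N, L))
s_true = np.array([0.5, 0.0, 2.0])   # true abundances for this pixel
x_m = A_hat @ s_true + 0.001 * rng.standard_normal(N)

# Combination stage for this pixel: minimize ||x_m - A_hat s||^2 subject to s >= 0.
s_hat, _ = nnls(A_hat, x_m)
```

Looping this call over the M columns of X and reshaping each row of the resulting Ŝ to the P_x × P_y grid yields the abundance maps.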
4.2. Sparse component analysis based on singlesource zones
SCA is another class of BSS methods, based on the sparsity of the sources in a given representation domain (time, space, frequency, time-frequency, time-scale). It became popular during the 2000s, and several methods then emerged. The first SCA method used in this paper is derived from TIFCORR, introduced by Deville & Puigt (2007). In the original version, the method is used to separate one-dimensional signals, but an extension to images has been proposed by Meganem et al. (2010). This type of method is based on the assumption that there are some small zones in the considered domain of analysis where only one source is active, i.e., all the other sources have zero mean power in these zones, called single-source zones. We here use a spatial framework (see model (3)), so that we assume that spatial single-source zones exist in the cube.
The sparsity considered here does not correspond to the same property as the sparsity mentioned in Sect. 4.1. In order to clarify this distinction, we introduce the notion of degree of sparsity. Sparse signals may have different numbers of coefficients equal to zero. If nearly all the coefficients are zero, we define the signal as highly sparse. On the contrary, if only a few coefficients are zero, we define the signal as weakly sparse.
The sparsity assumption considered in Sect. 4.1 corresponds to the case when the considered signal (spectrum or abundance map) contains a large number of negligible coefficients. It therefore assumes a high sparsity, which is not realistic in our context. On the contrary, the sparsity assumption used in the BSS method derived from TIFCORR considered here only requires the existence of a few tiny zones in the considered domain (the spatial domain in our case) where only one source is active. More precisely, for each source separately, that BSS method only requires the existence of at least one tiny zone (typically 5 × 5 pixels) where this source is isolated, and this corresponds to Assumption 1 defined below. We thus only require a weak spatial sparsity. Specifically, we use the joint sparsity (Deville 2014) of the sources, since we do not consider the sparsity of one source signal alone (i.e., the inactivity of this signal on a number of coefficients) but consider the spatial zones where only one source signal is active whereas the others are simultaneously inactive. This constraint of joint sparsity is weaker than a constraint of sparsity in the sense of Sect. 4.1, since it concerns a very small number of zones (at least one for each source). The “sparse component analysis method” used hereafter might therefore be called a “quasi-sparse component analysis method”.
The method used here, called LI-2D-SpaceCorr-NC and proposed by Meganem et al. (2010) (we simply call it SpaceCorr hereafter), is based on correlation parameters and has the following structure:

- The detection stage finds the single-source zones.
- The estimation stage identifies the columns of the mixing matrix corresponding to these single-source zones.
- The combination stage recovers the sources from the estimated mixing matrix and the observations.
Before detailing these steps, some assumptions and definitions must be specified. The spectral cube is divided into small spatial zones (typically 5 × 5 pixels), denoted Z. These zones consist of adjacent pixels, and the spectral cube is scanned spatially using adjacent or overlapping zones. We denote by X(Z) the matrix of observed spectra in Z (each column of X(Z) contains an observed spectrum).
First of all, as explained in Sect. 3.2, preprocessing is necessary to minimize the impact of noise on the results. For this particular method, we must preserve the spatial consistency of the data. The aforementioned spatial thresholding is achieved by retaining only the zones Z whose power is greater than a threshold: some zones contain only noise and are therefore useless for the spectra estimation step (detection and estimation stages of SpaceCorr). As for the NMF, we set the threshold to .
Definition 1. A source is “active” in an analysis zone Z if its mean power is nonzero in Z.
Definition 2. A source is “isolated” in an analysis zone Z if only this source is active in Z.
Definition 3. A source is “accessible” in the representation domain if there exists at least one analysis zone Z where it is isolated.
Assumption 1. Each source is spatially accessible.
If the data satisfy this spatial sparsity assumption, then we can achieve the decomposition as follows:
4.2.1. Detection stage
From expression (3) of X and considering a single-source zone Z where only the source s_{ℓ0} is present, the observed signals reduce to:
where x_{v} is the vth row of X and s_{ℓ0} the ℓ_{0}th row of S. We note that all the observed signals x_{v} in Z are proportional to each other: whatever the considered velocity v, they all contain the same source s_{ℓ0}, weighted by a factor a_{vℓ0} that differs for each observation. Thus, to detect the single-source zones, the considered approach uses correlation coefficients to quantify the proportionality of the observed signals. Let R{x_{i}, x_{j}}(Z) denote the centered cross-correlation of the two observations x_{i} and x_{j} in Z:
where Card(Z) is the number of samples (i.e., pixels) in Z. On each analysis zone Z, we estimate the centered correlation coefficients ρ{x_{i}, x_{j}}(Z) between all pairs of observations:
We note that these coefficients are undefined if all sources are equal to zero. So we add the following condition:
Assumption 2. On each analysis zone Z, at least one source is active.
For each zone Z we obtain a correlation matrix ρ. In Deville (2014), the author shows that, for linearly independent sources, a necessary and sufficient condition for a source to be isolated in a zone Z is:
To measure the single-source quality q_{Z} of an analysis zone, the matrix ρ is aggregated by computing the mean of the coefficients |ρ{x_{i}, x_{j}}(Z)| over the indices i and j, with i < j. The best single-source zones are those where the quality coefficient q_{Z} is the highest. To ensure the detection of single-source zones, the coefficient q_{Z} must be less than 1 for multi-source zones. We therefore set the following constraint:
Assumption 3. Over each analysis zone, all active sources are linearly independent if at least two active sources exist in this zone.
The detection stage therefore consists of keeping the zones whose quality coefficient is above a threshold defined by the user.
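The detection stage can be sketched as follows (a simplified, non-overlapping scan; the zone size and the threshold value q_min are illustrative choices, and all names are ours):

```python
import numpy as np

def detect_single_source_zones(cube, zone=5, q_min=0.97):
    """Scan a (N, Py, Px) cube with adjacent zone x zone spatial zones,
    compute the single-source quality q_Z (mean absolute correlation
    coefficient over all channel pairs), and keep the zones whose q_Z
    exceeds the user threshold q_min."""
    N, Py, Px = cube.shape
    selected = []
    for y in range(0, Py - zone + 1, zone):
        for x in range(0, Px - zone + 1, zone):
            Z = cube[:, y:y + zone, x:x + zone].reshape(N, -1)
            Zc = Z - Z.mean(axis=1, keepdims=True)    # center each channel
            norms = np.linalg.norm(Zc, axis=1)
            keep = norms > 1e-12                      # drop flat channels
            if keep.sum() < 2:
                continue
            U = Zc[keep] / norms[keep, None]
            rho = U @ U.T                             # correlation matrix
            iu = np.triu_indices(int(keep.sum()), k=1)
            q = float(np.abs(rho[iu]).mean())         # single-source quality
            if q >= q_min:
                selected.append(((y, x), q))
    return selected
```

In a perfectly single-source zone all channels are proportional, so every correlation coefficient equals one and q_Z = 1.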
4.2.2. Estimation stage
Successively considering each previously selected single-source zone, the correlation parameters R{x_{i}, x_{j}}(Z) between pairs of bands allow one to estimate a column of the mixing matrix A up to a scale factor:
The choice of the observed signal of index 1 as a reference is arbitrary: it can be replaced by any other observation. In practice, the observation with the greatest power will be chosen as the reference in order to limit the risk of using a highly noisy signal as the reference.
Moreover, to avoid any division by zero, we assume that:
Assumption 4. All mixing coefficients a_{1ℓ} are nonzero.
As for MCNMF, the scale factor of each estimated spectrum is then compensated for by normalizing it so that ∫ â_{ℓ}(v) dv = 1. We thus obtain a set of potential columns of the estimated mixing matrix. We apply a clustering algorithm (K-means with a correlation criterion) to these best columns, in order to regroup into L clusters the estimates corresponding to the same column of the mixing matrix. The mean of each cluster is retained to form a column of the matrix Â.
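A minimal sketch of this estimation stage, assuming the single-source zones have already been detected; the K-means step here clusters unit-normalized estimates as a stand-in for the correlation criterion, and all names are ours:

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def estimate_columns(cube, zones, L, zone=5):
    """For each detected single-source zone (given by its (y, x) corner),
    estimate one column of the mixing matrix from the cross-correlations
    with a reference channel, then group the estimates into L clusters
    and keep each cluster mean."""
    N = cube.shape[0]
    cols = []
    for (y, x) in zones:
        Z = cube[:, y:y + zone, x:x + zone].reshape(N, -1)
        Zc = Z - Z.mean(axis=1, keepdims=True)
        ref = int(np.argmax((Zc ** 2).sum(axis=1)))   # highest-power channel
        a = Zc @ Zc[ref] / (Zc[ref] @ Zc[ref])        # R{x_v,x_ref}/R{x_ref,x_ref}
        a = np.clip(a, 0, None)
        a /= a.sum()                                  # "integral" equal to one
        cols.append(a)
    cols = np.array(cols)
    # K-means on unit-normalized estimates approximates clustering with a
    # correlation criterion
    normed = cols / np.linalg.norm(cols, axis=1, keepdims=True)
    _, labels = kmeans2(normed, L, minit='++', seed=0)
    return np.stack([cols[labels == k].mean(axis=0) for k in range(L)], axis=1)
```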
4.2.3. Combination stage
The source matrix estimation step is identical to that used for the NMF method (see previous section). It is performed by minimizing the cost function (17) with a nonnegative least squares algorithm.
The efficiency of SpaceCorr significantly depends on the size of the analysis zones Z. Too small zones do not allow one to reliably evaluate the correlation parameters ρ{x_{i}, x_{j}}(Z), and hence the single-source quality of the zones. Conversely, too large zones do not ensure the presence of single-source zones. Furthermore, the size of the zones must be compatible with the data: a large number of source signals or a low number of pixels in the data can jeopardize the presence of single-source zones for each source.
Thus, it is necessary to relax the sparsity condition in order to separate such data. The size of these single-source zones being a limiting factor, we suggest reducing them to a minimum, i.e., to one pixel: we assume that there exists at least one single-source pixel per source in the data. To exploit this property, we developed a geometric BSS method called MASS (for maximum angle source separation; Boulais et al. 2015), which applies to data that do not meet the SpaceCorr assumptions. We note however that MASS does not make SpaceCorr obsolete: SpaceCorr generally yields better results than MASS for data with single-source zones. This will be detailed in Sect. 5.3, devoted to the experiments.
4.3. Sparse component analysis based on single-source pixels
The MASS method (Boulais et al. 2015) is a BSS method based on the geometrical representation of the data and on a sparsity assumption on the sources. For this method, we assume that there is at least one pure pixel per source. The spectrum associated with a pure pixel contains the contribution of only one elementary spectrum. This sparsity assumption is of the same nature as the one introduced for SpaceCorr (i.e., spatial sparsity), but the size of the zones Z is here reduced to a single pixel. Once again, we use the spatial model described in Sect. 2.2. With the terminology introduced in Sect. 4.2 for the SpaceCorr method, we here use the following assumption:
Assumption 1′. For each source, there exists at least one pixel (spatial sample) where this source is isolated (i.e., each source is spatially accessible).
Before detailing the MASS algorithm, we provide a geometrical framework for the BSS problem. Each observed spectrum (each column of X) is represented as an element of the ℝ^{N} vector space:
where x_{m} is a nonnegative linear combination of columns of A. The set of all possible (i.e., not necessarily present in the measured data matrix X) nonnegative combinations x_{*} of the L columns of A is
This defines a simplicial cone whose L edges are spanned by the L column vectors a_{ℓ} of A:
where ℰ_{ℓ} is the ℓth edge of the simplicial cone 𝒞(A). We note that the simplicial cone 𝒞(A) is convex: each nonnegative linear combination of columns of A is contained within 𝒞(A).
Here, the mixing coefficients and the sources are nonnegative. The observed spectra are therefore contained in the simplicial cone spanned by the columns of A, i.e., by the elementary spectra. If we add the above-defined sparsity assumption (Assumption 1′), the observed data matrix contains at least one pure pixel (i.e., a pixel containing the contribution of a unique column of A) for each source.
The expression of such a pure observed spectrum, where only the source of index ℓ_{0} ∈ {1, …, L} is nonzero, is restricted to:
where a_{ℓ0} is the ℓ_{0}th column of A. Since s_{ℓ0m} is a nonnegative scalar, (26) corresponds to an edge vector of the simplicial cone 𝒞(A) defined in (25). Therefore, the edge vectors are actually present in the observed data.
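As a small illustration of this geometry, membership of a vector in the simplicial cone 𝒞(A) can be tested with a nonnegative least squares fit (a sketch; the function name and tolerance are ours):

```python
import numpy as np
from scipy.optimize import nnls

def in_cone(A, x, tol=1e-8):
    """Test whether x belongs to the simplicial cone C(A), i.e. whether
    x is a nonnegative linear combination of the columns of A."""
    _, residual = nnls(A, x)   # residual is zero iff x lies in the cone
    return residual <= tol * max(np.linalg.norm(x), 1.0)
```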
To illustrate these properties, we create a scatter plot of data in three dimensions (Fig. 2). These points are generated from nonnegative linear combinations of 3 sources. On the scatter plot, the blue points represent the mixed data (i.e., the columns of X), the red points represent the generators of data (i.e., the columns of A). As previously mentioned, the observations x_{m} are contained in the simplicial cone spanned by the columns of the mixing matrix A. Moreover, if the red points are among the observed vectors (i.e., if Assumption 1′ is verified), the simplicial cone spanned by A is the same as the simplicial cone spanned by X.
Fig. 2.
Scatter plot of mixed data and edges ℰ_{ℓ} of the simplicial cone in the threedimensional case. The columns of X are shown in blue and those of A in red. 
From these properties, we develop the MASS method, which aims to unmix the hyperspectral data. It operates in two stages. The first one is the identification of the mixing matrix A and the second one is the reconstruction of source matrix S. If the data satisfy the spatial sparsity assumption, then we can achieve the decomposition as follows:
4.3.1. Mixing matrix identification
Identifying the columns of the matrix A (up to scale indeterminacies) is equivalent to identifying each edge vector of the simplicial cone 𝒞(A) spanned by the data matrix X. The observed vectors being nonnegative, the identification of the edge vectors reduces to identifying the observed vectors which are furthest apart in the angular sense.
First of all, the columns of X are normalized to unit length (i.e., ∥x_{m}∥ = 1) to simplify the following equations. The identification algorithm operates in L − 1 steps. The first step identifies two columns of the estimated mixing matrix by selecting the two columns of X that form the largest angle. We denote by x_{m1} and x_{m2} this pair of observed spectra. We have:
Moreover, the cos^{−1} function being monotonically decreasing on [0, 1], Eq. (27) can be simplified to:
We denote the submatrix formed by these two columns as follows:
The next step consists of identifying the column which forms the largest angle with x_{m1} and x_{m2}. This column is defined as the one which is furthest, in the angular sense, from its orthogonal projection on the simplicial cone spanned by x_{m1} and x_{m2}. The columns of X are projected on the simplicial cone spanned by the columns of this submatrix:
To find the column of X which is the furthest from its projection, we proceed in the same way as to identify the first two columns. Let m_{3} be the index of this column:
where π_{i} is the ith column of the projection matrix. The new estimate of the mixing matrix is then obtained by appending this third column. This projection and identification procedure is repeated until the L columns of the mixing matrix are identified. For example, the index m_{4} can be identified by searching for the column of X which forms the largest angle with its projection on the simplicial cone spanned by the three columns identified so far. Finally, the mixing matrix is completely estimated:
However, this mixing matrix estimation is very sensitive to noise, since the estimate is constructed directly from the observations. In order to make the estimate more robust to noise, and to handle the case when several single-source vectors relating to the same source are present in the observed data, we introduce a tolerance margin in the selection of the columns. Instead of selecting only the column that forms the largest angle with its projection (or the first two columns, which are furthest apart), we select all the columns which are nearly collinear to the identified column. For each column x_{mℓ} previously identified according to Eq. (32), we construct the set 𝒜_{ℓ}:
where κ is the tolerance threshold on the inner product (thus included in [0, 1]). It must be chosen close to 1 to avoid selecting mixed observations (typically κ = 0.99). Each column of the new mixing matrix is obtained by averaging the columns in the corresponding set 𝒜_{ℓ}, which reduces the influence of noise:
where is the average column of the set 𝒜_{ℓ}. Thus, we obtain an estimate of the mixing matrix A up to permutation and scale factor indeterminacies.
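The identification stage described above can be sketched as follows (a simplified reading of the algorithm: the projection on the simplicial cone is computed by nonnegative least squares, and all names are ours):

```python
import numpy as np
from scipy.optimize import nnls

def mass_identify(X, L, kappa=0.99):
    """MASS-style mixing-matrix identification (sketch). X: (N, M)
    nonnegative data assumed to contain at least one pure pixel per
    source; kappa: tolerance on inner products used to average nearly
    collinear columns. Returns an (N, L) estimate of A."""
    norms = np.maximum(np.linalg.norm(X, axis=0, keepdims=True), 1e-12)
    Xn = X / norms
    G = Xn.T @ Xn
    m1, m2 = np.unravel_index(np.argmin(G), G.shape)   # largest-angle pair
    idx = [int(m1), int(m2)]
    for _ in range(L - 2):
        A = Xn[:, idx]
        # project every column on the simplicial cone spanned by A (NNLS)
        P = np.stack([A @ nnls(A, Xn[:, m])[0] for m in range(Xn.shape[1])],
                     axis=1)
        Pn = P / np.maximum(np.linalg.norm(P, axis=0), 1e-12)
        cos = np.sum(Xn * Pn, axis=0)
        idx.append(int(np.argmin(cos)))                # furthest from projection
    cols = []
    for m in idx:
        sel = Xn.T @ Xn[:, m] >= kappa                 # nearly collinear columns
        cols.append(Xn[:, sel].mean(axis=1))           # average reduces noise
    return np.stack(cols, axis=1)
```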
4.3.2. Source matrix reconstruction
The source matrix estimation step is identical to that used for the NMF and SpaceCorr methods. It is performed by minimizing the cost function (17) with a nonnegative least squares algorithm.
4.4. Hybrid methods
The BSS methods presented above have advantages and drawbacks. NMF and its extended version, MCNMF, are attractive because they explicitly request only the nonnegativity of the considered data (as opposed, e.g., to sparsity). However, without additional assumptions, they do not, for example, provide a unique decomposition, as mentioned above. The SpaceCorr method is influenced by the degree of spatial sparsity present in the data: in practice, the assumption of perfectly single-source zones (q_{Z} = 1) may not be realistic. In such conditions, the zones Z retained for the unmixing are contaminated by the small but not negligible presence of other sources. However, SpaceCorr provides a unique decomposition and the algorithm does not require initialization. MASS then allows one to reduce the required size of single-source zones to a single pixel, but possibly at the expense of a higher sensitivity to noise.
In order to take advantage of the benefits and reduce the drawbacks specific to each of these methods, we hereafter combine them. The spectra and abundance maps estimated with SpaceCorr may not be perfectly unmixed, i.e., elementary, but they provide a good approximation of the actual components. To improve the decomposition, these approximations are then refined by initializing MCNMF with the estimates of elementary spectra or abundance maps from SpaceCorr (the choice of initializing the spectra or the abundance maps in this way is discussed in Sect. 5.3). The starting point of MCNMF is thus close to a global minimum of the objective function, which reduces the possibility for MCNMF to converge to a local minimum. The variability of the results is greatly reduced, which leads to low-amplitude error bars.
Thus we obtain two new, hybrid, methods: MCNMF initialized with the spectra obtained from SpaceCorr, which we call SCNMFSpec, and MCNMF initialized with the abundance maps obtained from SpaceCorr, which we call SCNMFMap.
Similarly, two other new hybrid methods are obtained by using the MASS method, instead of SpaceCorr, to initialize MCNMF: initializing MCNMF with the spectra obtained with MASS yields the MASSNMFSpec method, whereas initializing MCNMF with the maps obtained with MASS yields the MASSNMFMap method.
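As an illustrative sketch of the hybrid idea, one can initialize an off-the-shelf NMF with SCA estimates. Note that scikit-learn's custom initialization requires both factors, whereas in the hybrid methods above only one factor comes from SCA, the other being drawn randomly over the Monte Carlo runs (names and parameter values are ours):

```python
import numpy as np
from sklearn.decomposition import NMF

def hybrid_nmf(X, A0, S0):
    """Refine SCA estimates with NMF multiplicative updates. X: (N, M)
    nonnegative data; A0: (N, L) initial spectra; S0: (L, M) initial
    abundance maps."""
    model = NMF(n_components=A0.shape[1], init='custom', solver='mu',
                max_iter=1000, tol=1e-7)
    # multiplicative updates keep exact zeros at zero, hence the clipping
    W = model.fit_transform(X, W=np.clip(A0, 1e-12, None),
                            H=np.clip(S0, 1e-12, None))
    return W, model.components_
```

Starting near the SCA solution, the multiplicative updates typically converge to a nearby minimum, which is what reduces the spread of the Monte Carlo runs.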
5. Experimental results
5.1. Synthetic data
To evaluate the performance of all considered methods, we generate data cubes containing 2, 4, or 6 elementary spectra (Fig. 3), each with 300 samples. The spectra are simulated using Gaussian functions (that integrate to one) with the same standard deviation σ_{Spec} (cases with different standard deviations were also studied and yield similar results). To obtain the different elementary spectra of a mixture, we vary the mean of the Gaussian functions; we thus simulate the Doppler shift specific to each source.
Fig. 3.
From left to right: two, four, or six elementary spectra used to create the synthetic data. 
The spatial abundance maps are simulated using 2D Gaussian functions, each map having the same standard deviation σ_{Map} on the x and y axes. For each 2D Gaussian, we define its influence zone as the pixel locations between its peak and a distance of 3σ_{Map}; beyond this distance, the corresponding spatial contributions are assumed to be negligible. To vary the spatial sparsity, we vary the spatial position of each 2D Gaussian to get more or less overlap between them (see Fig. 4). The distance d between two peaks is varied from 6σ_{Map} down to 2σ_{Map} with a 1σ_{Map} step. The extreme case 2σ_{Map} still yields single-source zones, to meet the assumptions of SpaceCorr. Thus we build five different mixtures of the same sources, each involving more or less sparsity.
Fig. 4.
Spatial positions of different 2D Gaussian functions for four sources. The left map shows the case without overlap (d = 6σ_{Map}) and the right map shows the case with maximum overlap (d = 2σ_{Map}). The intermediate cases (d = 5σ_{Map}, d = 4σ_{Map} and d = 3σ_{Map}) are not represented. 
Moreover, to ensure the assumption of linear independence of the sources (i.e., of the abundance maps from the point of view of SpaceCorr), each map is slightly disturbed by a uniform multiplicative noise. Thus, the symmetry of the synthetic scenes does not introduce linear relationships between the different maps. Finally, we add white Gaussian noise to each cube, to get a signal-to-noise ratio (SNR) of 10, 20, or 30 dB, unless otherwise stated (see in particular Appendix C.8, where the case of low SNRs is considered).
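The generation recipe above can be sketched as follows (the sizes, the layout of the peaks on a circle, and the perturbation amplitudes are illustrative choices of ours, not the paper's exact settings):

```python
import numpy as np

def make_cube(L=4, N=300, P=60, sigma_spec=10.0, sigma_map=6.0,
              d_factor=2.0, snr_db=20.0, seed=0):
    """Synthetic cube: unit-integral Gaussian spectra with shifted means,
    2D Gaussian abundance maps with peaks d_factor*sigma_map from the
    image center, a small multiplicative perturbation, and white
    Gaussian noise at the requested SNR. Returns (X, A, S)."""
    rng = np.random.default_rng(seed)
    v = np.arange(N)
    means = N / 2 + (np.arange(L) - (L - 1) / 2) * 3 * sigma_spec
    A = np.stack([np.exp(-(v - mu) ** 2 / (2 * sigma_spec ** 2))
                  for mu in means], axis=1)
    A /= A.sum(axis=0, keepdims=True)                 # spectra integrate to one
    y, x = np.mgrid[0:P, 0:P]
    d = d_factor * sigma_map
    centers = [(P / 2 + d * np.cos(2 * np.pi * k / L),
                P / 2 + d * np.sin(2 * np.pi * k / L)) for k in range(L)]
    S = np.stack([np.exp(-((x - cx) ** 2 + (y - cy) ** 2)
                         / (2 * sigma_map ** 2)).ravel()
                  for cx, cy in centers], axis=0)
    S *= rng.uniform(0.95, 1.05, S.shape)             # break linear dependencies
    X = A @ S
    noise_std = np.sqrt(X.var() / 10 ** (snr_db / 10))
    return X + rng.normal(0.0, noise_std, X.shape), A, S
```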
It is important to note, finally, that the objective here is not to produce a realistic simulated astrophysical scene or dataset, but rather to have synthetic data that fulfill the statistical properties listed in Sect. 2.3, and in which we can vary simple parameters to test their effect on the performance of the method. We also note that Guilloteau et al. (2019) are currently developing a model of scene and instruments to provide realistic synthetic JWST hyperspectral datasets, and the present method could be tested on these upcoming data.
5.2. Estimation of number of sources
We tested the method used to estimate the number of sources (Sect. 3.1) on our 45 synthetic data cubes. For each of them, we found the true number of sources of the mixture. These results are unambiguous because the difference between λ_{(L)} and λ_{(L + 1)} clearly appears: we can easily differentiate the two parts Ω_{S} and Ω_{E} on each curve of ordered eigenvalues. We illustrate the method for a mixture of 4 sources in Fig. 5. A “break” is clearly observed in the curve of ordered eigenvalues at index r = 5. The number of sources identified by the method is correct, L = r − 1 = 4.
Fig. 5.
Example of identification of number of sources for a synthetic mixture of four sources with SNR = 10 dB and d = 2σ_{Map}. 
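A simple gap heuristic in the spirit of this eigenvalue “break” can be sketched as follows (this is not necessarily the exact criterion of Sect. 3.1; the function name is ours):

```python
import numpy as np

def estimate_n_sources(X):
    """Order the eigenvalues of the spectral covariance matrix and place
    the break at the largest ratio between consecutive eigenvalues,
    separating the signal part from the noise part of the spectrum."""
    Xc = X - X.mean(axis=1, keepdims=True)
    lam = np.linalg.eigvalsh(Xc @ Xc.T / X.shape[1])[::-1]   # descending
    lam = np.clip(lam, 1e-300, None)                         # guard division
    ratios = lam[:-1] / lam[1:]
    return int(np.argmax(ratios)) + 1        # L: gap lies after lambda_(L)
```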
5.3. Unmixing
5.3.1. Quality measures
We now present the performance of the different BSS methods introduced in Sect. 4: MCNMF, SpaceCorr, MASS and their hybrid versions. To study the behavior of these methods, we apply them to the 45 synthetic cubes. We use two measures of error as performance criteria, one for maps and the other for spectra. The Normalized Root Mean Square Error (NRMSE) defines the error of estimated maps:
The spectral angle mapper (SAM) defines the error of the estimated spectra. This measurement, usual in hyperspectral imaging for Earth observation, is defined as the angle formed by two spectra:
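Both error measures can be implemented as follows (one standard definition of each; the paper's exact equations are not reproduced here):

```python
import numpy as np

def nrmse(s_true, s_est):
    """Normalized root mean square error between a true and an
    estimated map (or any pair of vectors)."""
    return float(np.linalg.norm(s_est - s_true) / np.linalg.norm(s_true))

def sam(a_true, a_est):
    """Spectral angle mapper: the angle, in radians, between a true and
    an estimated spectrum."""
    c = a_true @ a_est / (np.linalg.norm(a_true) * np.linalg.norm(a_est))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))
```

Note that SAM is insensitive to the scale indeterminacy of BSS, since it only measures the angle between the two spectra.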
The Monte Carlo analysis associated with the NMF makes it possible to define the spread of the solutions given by each of the K runs of the NMF. For each estimated source, we construct the envelope giving the spread of the solutions around the most probable solution according to (16). The amplitude of the envelope is normalized by the maximum intensity in order to obtain the error bars as a percentage of the maximum intensity. This normalization is arbitrary and makes it possible to express the spread of the MCNMF independently from the spectral intensity. We first denote as NMCEB_{ℓ} (for Normalized Monte Carlo Error Bar) the normalized error associated with the ℓth elementary spectrum:
where is the maximal intensity of the ℓth elementary spectrum. To quantify the total spread of MCNMF solutions for a data cube, the above parameter is then maximized along the spectral axis:
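One plausible reading of this error-bar computation, assuming the K runs have already been aligned to the same component (the names are ours):

```python
import numpy as np

def nmceb(runs, ref):
    """Normalized Monte Carlo error bar for one elementary spectrum.
    runs: (K, N) estimates of the spectrum over the K NMF runs, already
    aligned to the same component; ref: (N,) retained solution."""
    envelope = runs.max(axis=0) - runs.min(axis=0)   # spread of the K runs
    return float((envelope / ref.max()).max())       # worst case along v
```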
For clarity, we hereafter detail two examples of mixtures of four sources with SNR = 20 dB. The results for other mixtures with an SNR of 10, 20, or 30 dB lead to the same conclusions and are available in Appendix C. More specifically, some additional tests with a very low SNR (1, 3, or 5 dB) are also reported in Appendix C.8 for the two preferred methods. Their relative merits are then modified as expected, as compared to the above cases involving significantly higher SNRs.
5.3.2. Results
The first example is a case of a highly sparse mixture (d = 6σ_{Map}), whose map is shown in the leftmost part of Fig. 4. The second example concerns a weakly sparse mixture (d = 2σ_{Map}), whose map is shown in the rightmost part of Fig. 4. Again, to simplify the figures, we present only one component, but the results are similar for the remaining three components.
The results in the most sparse case are given in Figs. A.1 and A.2. The first figure illustrates the performance of the MCNMF, SpaceCorr or MASS methods used alone and the second figure shows the results of the resulting four hybrid methods. Similarly, the results in the least sparse case are given in Figs. A.3 and A.4.
In the most sparse case, the results of MCNMF are consistent with the previously highlighted drawbacks. On the one hand, we note a variability of results which leads to significant error bars. On the other hand, the estimated spectrum yields a significant error. This corresponds to an overestimation of the maximum intensity of the spectrum and an underestimation of the width of the beam. Furthermore, we observe the residual presence of close sources, visible on the map of Fig. A.1a.
The SpaceCorr and MASS methods provide excellent results (see Figs. A.1b–c), which is consistent with the theory. This first case corresponds to the optimal conditions of use of these methods, since many adjacent observed pixels are single-source.
Regarding hybrid methods, we observe a significant reduction of the error bars in agreement with the objective of these methods. However when MCNMF is initialized with the previously estimated spectra (see Figs. A.2a–c), we find on the estimated spectra the same inaccuracy as with MCNMF used alone (overestimation of the maximum intensity and underestimation of the width of the beam). Initialization with previously estimated abundance maps gives the best performance, with very similar results for the two algorithms based on this approach (Figs. A.2b–d). For the SCNMFMap and MASSNMFMap methods, there is performance improvement, as compared respectively with SpaceCorr and MASS used alone, although the latter two methods are already excellent.
In the least sparse case, MCNMF provides estimated spectra which have almost the same accuracy as in the most sparse case (see Fig. A.3a). We observe the same deformation of the estimated beam, a large spread of the solutions and a residual source on the abundance map.
This time, SpaceCorr does not provide satisfactory results. Indeed, the abundance maps seem to give a good approximation of the ground truth, but the estimated spectra are contaminated by the presence of the other spectral components (see Fig. A.3b). This contamination leads to an underestimation of the peak of intensity, a loss of the symmetry of the beam, as well as a positioning error for the maximum of intensity on the spectral axis. This perturbation is explained by the fact that there are few single-source zones in the cube. Furthermore, the detection step is sensitive to the fixed threshold for the selection of the best single-source zones. Depending on the choice of the threshold, some “quasi-single-source” zones may turn out to be used to estimate the columns of the mixing matrix A.
In this case, the MASS method yields a better estimate than SpaceCorr (see Fig. A.3c), thanks to its ability to operate with single-source pixels instead of complete single-source spatial zones. The obtained spatial source is correctly located and is circular (unlike with the SpaceCorr method, where it was slightly deformed). The estimated spectrum is better than that estimated by SpaceCorr; however, it is slightly noisy because of the sensitivity of MASS to the high noise level (see Appendix C).
Here again, all four hybrid methods significantly reduce the error bars, as compared with applying MCNMF alone. Initializations with SpaceCorr results (Figs. A.4a–b) improve the results of SpaceCorr without completely removing the residue of the other spectral components (i.e., the estimated spectrum is still somewhat asymmetric). In addition, we observe again that when MCNMF is initialized with the spectra (Figs. A.4a–c), we obtain estimated spectra with the same inaccuracy as with MCNMF used alone. The initialization with MASS results (Figs. A.4c–d) improves the results of MASS by removing the residual noise of the estimated spectrum. As an overall result, the initialization of MCNMF with the abundance maps provided by MASS (see Fig. A.4d, including a 6.08% NRMSE and a 0.034 rad SAM) gives the best performance in this difficult case of weakly sparse and highly noisy data.
5.3.3. Summary of the results
To conclude on the synthetic tests, we group in Table 2 the performances obtained by the different methods for the cube containing four sources with an SNR of 20 dB. The reported performance values are averaged over all four components and 100 noise realizations.
Performance obtained with the considered methods for a cube containing 4 sources with a 20 dB SNR.
First, the four hybrid methods presented here greatly reduce the spread of the solutions given by MCNMF used alone. This is the first benefit of using hybrid methods.
With regard to the decomposition quality achieved by the SCNMF and MASSNMF hybrid methods, the synthetic data tests show that the “Map” initialization provides better results than the “Spec” one in almost all cases, whatever the considered source sparsity. Therefore, as an overall result, the “Map” initialization is the preferred option. The only exception to this trend is that SCNMFSpec yields a better result than SCNMFMap in terms of SAM for low-sparsity sources.
The last point to be specified in these tests is the choice of the method used to initialize MCNMF with the “Map” initialization selected above, namely SpaceCorr or MASS. The results of Table 2 refine those derived above from Figs. A.1–A.4, with a much better statistical confidence, because they are here averaged over all sources and 100 noise realizations, instead of considering only one source and one noise realization as in the above figures. The results of Table 2, in terms of NRMSE and SAM, thus show that SCNMFMap yields slightly better performance than MASSNMFMap in the most sparse case, whereas MASSNMFMap provides significantly better performance in the least sparse case. This result is not surprising, because SCNMFMap sets more stringent constraints on source sparsity, but is expected to yield somewhat better performance when these constraints are met (thanks to the data averaging that it performs over analysis zones). In a “blind configuration”, i.e., when the degree of sparsity of the sources is not known, the preferred method for the overall set of data considered here is MASSNMFMap because, as compared with SCNMFMap, it is globally better in the sense that it may yield significantly better or only slightly worse performance, depending on sparsity.
It should be noted at this stage that we suspect that what favors MASSNMFMap is related to the dimensions of the data cube, i.e., that this latter method performs better here because there are, in the synthetic data, more points spatially than spectrally (i.e., 10^{4} spatial points vs. 300 spectral points). Hence, by initializing with SCA results the matrix that contains the largest number of points, MASSNMFMap provides a better solution. On the contrary, in the recent study by Foschino et al. (2019), the authors have found that MASSNMFSpec performs better. In their specific case, the (real) data contain only 31 spatial positions and 6799 spectral points. This suggests that the general recommendation is to use the version of the method (Spec or Map) that initializes the largest number of points in the NMF with SCA results.
6. Conclusion and future work
In this paper, we proposed different versions of blind source separation methods for astronomical hyperspectral data. Our approach was to combine two well-known classes of methods, namely NMF and SCA, in order to leverage their respective advantages while compensating for their disadvantages. We developed several hybrid methods based on this principle, depending on the considered SCA algorithm (SpaceCorr or MASS) and on whether that SCA algorithm is used to set the initial values of the spectra or of the abundances, which are then updated by our Monte Carlo version of NMF, called MCNMF. In particular, our MASSNMFMap hybrid method, based on initializing MCNMF with the abundance maps provided by MASS, yields a quasi-unique solution to the decomposition of a synthetic hyperspectral data cube, with an average error (summarized in Table 2) which is always better, and often much better, than that of the MCNMF, SpaceCorr, and MASS methods used separately. Our various tests on simulated data also show robustness to additive white noise. Since the initialization of NMF with SCA methods was here shown to yield encouraging results, our future work will especially aim at developing SCA methods with lower sparsity constraints, in order to further extend the domain where the resulting hybrid SCA-NMF methods apply. A first application of the MASSNMFSpec method presented in this paper to real data is presented in Foschino et al. (2019) and shows the potential of such methods for current and future hyperspectral datasets.
References
 Bally, J., Langer, W. D., Stark, A. A., & Wilson, R. W. 1987, ApJ, 312, L45
 Berné, O., Joblin, C., Deville, Y., et al. 2007, A&A, 469, 575
 Berné, O., Joblin, C., Deville, Y., et al. 2012, in SF2A-2012: Proceedings of the Annual Meeting of the French Society of Astronomy and Astrophysics, eds. S. Boissier, P. de Laverny, N. Nardetto, et al., 507
 Boulais, A., Deville, Y., & Berné, O. 2015, IEEE International Workshop ECMSM
 Cardoso, J.-F. 1998, Proc. IEEE, 86, 2009
 Cesarsky, D., Lequeux, J., Abergel, A., et al. 1996, A&A, 315, L305
 Cichocki, A., Zdunek, R., Phan, A., & Amari, S. I. 2009, Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation (John Wiley and Sons)
 Deville, Y. 2014, Blind Source Separation: Advances in Theory, Algorithms and Applications, Chapter 6, Sparse Component Analysis: a General Framework for Linear and Nonlinear Blind Source Separation and Mixture Identification (Springer), 151
 Deville, Y., & Puigt, M. 2007, Signal Proc., 87, 374
 Deville, Y., Revel, C., Briottet, X., & Achard, V. 2014, IEEE WHISPERS
 Donoho, D., & Stodden, V. 2003, When Does Non-Negative Matrix Factorization Give a Correct Decomposition into Parts? (MIT Press), 2004
 Falgarone, E., Pety, J., & Hily-Blant, P. 2009, A&A, 507, 355
 Foschino, S., Berné, O., & Joblin, C. 2019, A&A, 632, A84
 Goicoechea, J. R., Pety, J., Cuadrado, S., et al. 2016, Nature, 537, 207
 Gratier, P., Bron, E., Gerin, M., et al. 2017, A&A, 599, A100
 Gribonval, R., & Lesage, S. 2006, in ESANN'06 Proceedings - 14th European Symposium on Artificial Neural Networks (Bruges, Belgium: d-side publi.), 323
 Guilloteau, C., Oberlin, T., Berné, O., & Dobigeon, N. 2019, AJ, 160, 28
 Habart, E., Dartois, E., Abergel, A., et al. 2010, A&A, 518, L116
 Hyvärinen, A., Karhunen, J., & Oja, E. 2001, Independent Component Analysis (Wiley-Interscience), 26
 Joblin, C., Pilleri, P., Montillaud, J., et al. 2010, A&A, 521, L25
 Juvela, M., Lehtinen, K., & Paatero, P. 1996, MNRAS, 280, 616
 Kristensen, L. E., van Dishoeck, E. F., Tafalla, M., et al. 2011, A&A, 531, L1
 Langville, A., Meyer, C., & Albright, R. 2006, Initializations for the Nonnegative Matrix Factorization
 Lawson, C. L., & Hanson, R. J. 1974, Solving Least Squares Problems (Prentice-Hall)
 Lee, D. D., & Seung, H. S. 1999, Nature, 401, 788
 Lee, D. D., & Seung, H. S. 2001, Algorithms for Nonnegative Matrix Factorization, NIPS (MIT Press), 556
 Luo, J., & Zhang, Z. 2000, Signal Processing Proceedings, 1, 223
 Meganem, I., Deville, Y., & Puigt, M. 2010, in 2010 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 1334
 Miao, L., & Qi, H. 2007, IEEE Trans. Geosci. Remote Sens., 45, 765
 Miesch, M. S., & Bally, J. 1994, ApJ, 429, 645
 Neufeld, D. A., Hollenbach, D. J., Kaufman, M. J., et al. 2007, ApJ, 664, 890
 Paatero, P., & Tapper, U. 1994, Environmetrics, 5, 111
 Pabst, C. H. M., Goicoechea, J. R., Teyssier, D., et al. 2017, A&A, 606, A29
 Pilleri, P., Montillaud, J., Berné, O., & Joblin, C. 2012, A&A, 542, A69
 Silverman, B. 1998, Density Estimation for Statistics and Data Analysis (Chapman and Hall/CRC)
 Theodoridis, S., & Koutroumbas, K. 2009, Pattern Recognition (Academic Press)
 Van Kempen, T. A., Kristensen, L. E., Herczeg, G. J., et al. 2010, A&A, 518, L121
 Weilbacher, P. M., Monreal-Ibero, A., Kollatschny, W., et al. 2015, A&A, 582, A114
 Werner, M. W., Uchida, K. I., Sellgren, K., et al. 2004, ApJS, 154, 309
Appendix A: Additional figures
Fig. A.1.
Results of the decomposition for the MC-NMF, SpaceCorr, and MASS methods in the most sparse case (d = 6σ_{Map}). The estimated spectrum is in blue, the actual spectrum is in black dashes, and the red error bars give the spread of the solutions of MC-NMF. Each subfigure caption contains the name of the considered BSS method, followed by the NRMSE of the estimated abundance map and the SAM of the estimated spectrum (see Eqs. (35) and (36)). This also applies to the subsequent figures. (a) MC-NMF (20.81%, 0.107 rad). (b) SpaceCorr (2.69%, 0.019 rad). (c) MASS (2.07%, 0.019 rad).
Fig. A.2.
Results of the decomposition for the hybrid methods in the most sparse case (d = 6σ_{Map}). The estimated spectrum is in blue, the actual spectrum is in black dashes, and the red error bars give the spread of the solutions. (a) SC-NMF-Spec (21.47%, 0.102 rad). (b) SC-NMF-Map (2.13%, 0.016 rad). (c) MASS-NMF-Spec (21.32%, 0.101 rad). (d) MASS-NMF-Map (1.95%, 0.016 rad).
Fig. A.3.
Results of the decomposition for the MC-NMF, SpaceCorr, and MASS methods in the least sparse case (d = 2σ_{Map}). The estimated spectrum is in blue, the actual spectrum is in black dashes, and the red error bars give the spread of the solutions of MC-NMF. (a) MC-NMF (23.63%, 0.126 rad). (b) SpaceCorr (19.67%, 0.122 rad). (c) MASS (8.84%, 0.051 rad).
Fig. A.4.
Results of the decomposition for the hybrid methods in the least sparse case (d = 2σ_{Map}). The estimated spectrum is in blue, the actual spectrum is in black dashes, and the red error bars give the spread of the solutions. (a) SC-NMF-Spec (16.20%, 0.082 rad). (b) SC-NMF-Map (10.37%, 0.065 rad). (c) MASS-NMF-Spec (16.69%, 0.080 rad).
Appendix B: Data processing methods
B.1. K-means method
K-means is a standard unsupervised classification method (Theodoridis & Koutroumbas 2009) which aims to partition a set of N vectors x into k sets {σ_{1}, σ_{2}, …, σ_{k}}. Within each set σ_{i}, a measure of dissimilarity d between the vectors x ∈ σ_{i} and the cluster representative c_{i} is minimized. The cluster representative c_{i} (or centroid) is the mean vector of the cluster σ_{i}. In our case, the measure of dissimilarity is 1 minus the correlation between x ∈ σ_{i} and c_{i}:

d(x, c_{i}) = 1 − corr(x, c_{i}). (B.1)

Performing the clustering then amounts to minimizing the following objective function:

J(C, U) = Σ_{i=1}^{k} Σ_{j=1}^{N} U_{ij} d(x_{j}, c_{i}), (B.2)

where C is the matrix of centroids c_{i} and U is a partition matrix such that U_{ij} = 1 if x_{j} ∈ σ_{i} and 0 otherwise. Moreover, each vector belongs to a single cluster (hard clustering):

Σ_{i=1}^{k} U_{ij} = 1, ∀ j ∈ {1, …, N}. (B.3)
The K-means algorithm then alternates between two steps until the assignments no longer change: each vector is assigned to the cluster whose centroid is closest in the sense of d, and each centroid is recomputed as the mean vector of its cluster.
However, a well-known drawback of this algorithm is its sensitivity to the initialization: the final centroids depend on the choice of the initial cluster representatives c_{i}. For our two clustering steps, however, a manual initialization is possible and avoids this drawback:
– As part of the Monte-Carlo analysis of NMF (Sect. 4.1), we use as initial centroids C_{0} the L spectra estimated by the first Monte-Carlo trial.
– As part of the estimation step of SpaceCorr (Sect. 4.2), we use as initial centroids C_{0} the L farthest spectra, in the sense of the measure d (see Eq. (B.1)), among the potential columns of the matrix.
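The assignment/update iteration of K-means with the dissimilarity of Eq. (B.1) and manual initial centroids can be sketched as follows. This is a minimal illustration with a toy dataset of our own, not the pipeline code.

```python
import numpy as np

def correlation_dissimilarity(x, c):
    """d(x, c) = 1 - Pearson correlation between the two vectors (Eq. B.1)."""
    xc, cc = x - x.mean(), c - c.mean()
    return 1.0 - (xc @ cc) / (np.linalg.norm(xc) * np.linalg.norm(cc))

def kmeans_corr(X, C0, n_iter=50):
    """Hard K-means on the columns of X with the 1-correlation dissimilarity,
    started from the manual centroids C0 (one centroid per column)."""
    C = C0.copy()
    for _ in range(n_iter):
        # Assignment: each vector joins the cluster of its closest centroid.
        labels = np.array([
            np.argmin([correlation_dissimilarity(x, c) for c in C.T])
            for x in X.T])
        # Update: each centroid becomes the mean vector of its cluster.
        for i in range(C.shape[1]):
            if np.any(labels == i):
                C[:, i] = X[:, labels == i].mean(axis=1)
    return C, labels

# Toy data: 5 noisy copies of each of two dissimilar template "spectra".
rng = np.random.default_rng(1)
t1, t2 = np.sin(np.linspace(0, 3, 20)), np.cos(np.linspace(0, 3, 20))
X = np.column_stack([t1 + 0.05 * rng.standard_normal(20) for _ in range(5)]
                    + [t2 + 0.05 * rng.standard_normal(20) for _ in range(5)])
# Manual initialization: one sample from each group as initial centroid.
C, labels = kmeans_corr(X, np.column_stack([X[:, 0], X[:, 5]]))
```

Because the dissimilarity ignores scale and offset, vectors proportional to the same template end up in the same cluster, which is the behavior needed for grouping estimated spectra.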
B.2. Parzen kernel method
The estimation by Parzen kernel, or Parzen windows (Theodoridis & Koutroumbas 2009; Silverman 1998), is a nonparametric method for estimating the probability density function (pdf) of a random variable at any point of its support Ω. Let (x_{1}, x_{2}, …, x_{N}) be an independent and identically distributed sample of a random variable X with an unknown pdf f. Its kernel density estimator is:

f̂_{h}(x) = (1/(Nh)) Σ_{i=1}^{N} K((x − x_{i})/h), (B.4)

where K is a kernel, i.e., a smooth nonnegative function that integrates to one, and h > 0 is a smoothing parameter called the bandwidth (or window width). In our case, we use as the kernel the standard normalized Gaussian function (zero expectation and unit standard deviation):

K(u) = (1/√(2π)) exp(−u^{2}/2). (B.5)

The bandwidth h of the kernel is a free parameter which has a strong influence on the resulting estimate. In the case of a Gaussian kernel, it can be shown (Silverman 1998) that the optimal choice for h is:

h = 1.06 σ̂ N^{−1/5}, (B.6)

where σ̂ is the sample standard deviation. The Parzen kernel algorithm then consists in evaluating Eq. (B.4), with this kernel and this bandwidth, at every point of interest of the support Ω.
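Combining the Gaussian kernel with Silverman's rule-of-thumb bandwidth, a minimal sketch of the resulting estimator is given below; the test sample drawn from a standard normal distribution is our own arbitrary choice.

```python
import numpy as np

def parzen_kde(samples, grid):
    """Gaussian Parzen-window pdf estimate evaluated on 'grid', using
    Silverman's rule-of-thumb bandwidth h = 1.06 * sigma_hat * N**(-1/5)."""
    n = len(samples)
    h = 1.06 * np.std(samples, ddof=1) * n ** (-0.2)
    u = (grid[:, None] - samples[None, :]) / h        # all (point, sample) pairs
    k = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)  # Gaussian kernel values
    return k.sum(axis=1) / (n * h)

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)                # sample from N(0, 1)
grid = np.linspace(-4.0, 4.0, 201)
f_hat = parzen_kde(x, grid)
area = f_hat.sum() * (grid[1] - grid[0])     # ~ 1: f_hat is a valid pdf estimate
```

The estimate is a sum of small Gaussian bumps centered on the samples; a larger h smooths the estimate further, at the price of blurring genuine modes of the pdf.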
Appendix C: Additional test results
The following figures show the performance of the methods applied to all cubes. The SAM and NRMSE values presented in these figures are obtained by averaging these measurements, for each method, over all 100 noise realizations. We also provide error bars which give the standard deviation over all noise realizations. Each NMCEB spread measurement is obtained in two stages. First, for each source, the average spectrum and the average envelope are computed over all noise realizations. Second, we identify the maximum, first along the spectral axis (n) for a given source (ℓ), and then over all sources.
To summarize, we associate with each method a first figure composed of six frames arranged in two rows and three columns. The first row gives the SAM and the second the NRMSE. The first column concerns cubes containing 2 sources, the middle one cubes containing 4 sources, and the last one cubes containing 6 sources. Within each frame, the x-axis gives the distance d defining the overlap between the spatial sources, from d = 6σ_{Map} down to d = 2σ_{Map}. In Appendices C.1–C.7, inside each frame, the three curves respectively correspond to the three main noise levels tested in this study: blue for a 30 dB signal-to-noise ratio (SNR), red for 20 dB, and black for 10 dB. Similarly, in Appendix C.8, inside each frame, the three curves respectively correspond to the three additional low SNRs tested: blue for 5 dB, red for 3 dB, and black for 1 dB. For the MC-NMF method and the hybrid methods, we also provide a second figure, consisting of 3 frames giving the NMCEB maximum for each of the 45 cubes.
In conclusion, all of these tests and illustrations aim to quantify several phenomena. First, they evaluate the impact of the noise level and of the distance between the sources on the performance of the methods. Next, they quantify the contribution of the hybrid methods compared to the MC-NMF, SpaceCorr, and MASS methods used alone. Finally, the study of the error bars of the Monte-Carlo analysis associated with the NMF makes it possible to evaluate the spread of the solutions as a function of the noise level, the sparsity of the sources, and the initialization.
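The two performance criteria quoted throughout these appendices can be computed as follows. Since Eqs. (35) and (36) are not reproduced in this section, this sketch uses the standard definitions of the spectral angle mapper and of a normalized RMSE, which may differ in detail from the exact normalization used in the paper.

```python
import numpy as np

def sam(s_est, s_true):
    """Spectral angle mapper (rad): angle between the estimated and true
    spectra; insensitive to a positive scale factor on either spectrum."""
    c = (s_est @ s_true) / (np.linalg.norm(s_est) * np.linalg.norm(s_true))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def nrmse(a_est, a_true):
    """Normalized root-mean-square error between abundance maps, in percent."""
    return 100.0 * np.linalg.norm(a_est - a_true) / np.linalg.norm(a_true)

s = np.array([1.0, 2.0, 3.0])
print(sam(2.0 * s, s))    # ~ 0: rescaling does not change the angle
print(nrmse(1.1 * s, s))  # ~ 10: a uniform 10% error
```

The scale invariance of the SAM is what makes it suitable for BSS evaluation, where each source is only recovered up to a scale factor.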
C.1. Results with MC-NMF
The performance of the MC-NMF method is shown in Figs. C.1 and C.2. We first consider the cases involving four or six sources. The performance of MC-NMF then has a low sensitivity to the distance d, i.e., to the joint sparsity of the sources. Similarly, the number of sources has a relatively limited effect on the performance of MC-NMF in our simulations. In contrast, the noise level has a significant effect on the quality of the solutions given by MC-NMF (although even the noiseless case does not give ideal results). It should be noted that the amplitude of the error bars of MC-NMF depends on all the tested parameters (degree of sparsity, number of sources, and noise level). The variability of the MC-NMF solutions is often substantial, and the goal of the hybrid methods is to reduce this variability.
Fig. C.1.
Performance achieved by MC-NMF on the 45 synthetic cubes for 100 realizations of noise with an SNR of 30 dB (in blue), 20 dB (in red), and 10 dB (in black). The error bars give the standard deviation over the 100 realizations of noise.
Fig. C.2.
Spread (NMCEB) of the solutions of MC-NMF obtained on the 45 synthetic cubes with 100 realizations of noise.
We now focus on the case of 2 sources. For distances ranging from d = 6σ_{Map} down to d = 3σ_{Map} (or at least 4σ_{Map}), the same comments as those provided above for 4 or 6 sources still apply, as expected. In contrast, further decreasing d to d = 2σ_{Map} results in an unexpected behavior: the SAM and NRMSE values then strongly decrease. In other words, in this situation where the sources become more mixed, the performance improves. Several causes could lead to this result, such as the presence of noise, the symmetry of the maps, the great similarity of the spectra, or the number of iterations of the NMF, although all these features also apply to the cubes containing 4 or 6 sources. To analyze the influence of such features on the shape of the cost function to be minimized, and thus on the performance, we performed the following additional tests for 2 sources:
– No noise in the mixtures.
– Deleting the normalization of the spectra at each iteration of the NMF.
– Random switching of the columns of X.
– Avoiding the symmetry of the abundance maps by deforming a 2D Gaussian.
– Avoiding the similarity of the spectra by changing the nature of the function simulating the line (triangle or square function).
– Fixed number of iterations of the NMF.
– Deleting the observed spectra having a power lower than 90% of the maximal observed power.
Each of these tests led to the same unexpected trend as that observed in the first column of Fig. C.1. In contrast, the expected behavior was observed in the following additional test, where the sparsity of the sources was varied. The abundance maps simulated by 2D Gaussians are replaced by a matrix S of dimension 2 × 100 whose elements are drawn with a uniform distribution between 0.5 and 1. The first 25 elements of the first row are multiplied by a coefficient α. The elements 26 to 50 of the second row are also multiplied by the same coefficient α. Thus, depending on the value of α, we simulate more or less sparsity in the data. In this test, we observe a decrease of the performance of MC-NMF when α increases, i.e., when the source sparsity decreases, as expected. Finally, we performed tests for all the cubes (as well as for the hybrid methods), where we further decreased the distance d to d = 1σ_{Map}, which corresponds to very low sparsity. The SAM and NRMSE then considerably increase, which corresponds to the expected behavior of BSS methods. The above-defined unexpected behavior is therefore only observed in the specific case involving 2 sources and the intermediate distance d = 2σ_{Map}, and it could be further analyzed in future work.
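The construction of this additional test can be sketched as follows; the random seed is an arbitrary choice of ours.

```python
import numpy as np

def sparsity_test_sources(alpha, rng):
    """2 x 100 source matrix of the additional test: entries drawn uniformly
    in [0.5, 1], with elements 1-25 of row 1 and elements 26-50 of row 2
    scaled by alpha (a small alpha creates quasi-single-source zones)."""
    S = rng.uniform(0.5, 1.0, size=(2, 100))
    S[0, :25] *= alpha    # row 1 nearly vanishes where row 2 stays active
    S[1, 25:50] *= alpha  # and conversely on the next 25 elements
    return S

rng = np.random.default_rng(0)
S_sparse = sparsity_test_sources(0.01, rng)  # strongly sparse configuration
S_mixed = sparsity_test_sources(1.0, rng)    # alpha = 1: no sparsity at all
```

For small α, columns 1 to 25 are quasi-single-source for source 2 and columns 26 to 50 for source 1, so increasing α continuously removes the single-source zones that the SCA methods rely on.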
C.2. Results with SpaceCorr
The performance of the SpaceCorr method is shown in Fig. C.3. This method gives excellent results if the data are sparse enough, i.e., if there is a sufficient number of single-source zones, which here corresponds to a large enough distance d. We also note that SpaceCorr is not very sensitive to the number of sources. Its (limited) sensitivity is at least partly due to the fact that the number of sources over the considered fixed spatial area may influence the degree of source sparsity in terms of available single-source zones. Finally, we emphasize that SpaceCorr is relatively robust to noise in the data. The presence of residuals in the estimates for the least sparse cases is due to the small number of single-source zones per source in the data. In addition, the step of detection of the single-source zones is sensitive to the choice of the threshold used to select the best single-source zones. Depending on this threshold, almost-single-source zones may be used to estimate the columns of the mixing matrix, which yields contributions from other sources. However, the sensitivity of the method to the choice of this parameter is attenuated by the hybridization with MC-NMF.
Fig. C.3.
Performance achieved by SpaceCorr on the 45 synthetic cubes for 100 realizations of noise with an SNR of 30 dB (in blue), 20 dB (in red), and 10 dB (in black). The error bars give the standard deviation over the 100 realizations of noise.
C.3. Results with SC-NMF-Spec
Whatever the number of sources, the SC-NMF-Spec hybrid version (Fig. C.4) yields results with the same trend as that obtained by MC-NMF used alone for two sources (Fig. C.1, leftmost column). Moreover, when considering two sources for both methods, they yield similar estimation errors for a given sparsity and noise level. Besides, SC-NMF-Spec results in a much lower spread (Fig. C.5) than MC-NMF (Fig. C.2), especially for four or six sources. This property is the main goal of the hybridization.
Fig. C.4.
Performance achieved by SC-NMF-Spec on the 45 synthetic cubes for 100 realizations of noise with an SNR of 30 dB (in blue), 20 dB (in red), and 10 dB (in black). The error bars give the standard deviation over the 100 realizations of noise.
Fig. C.5.
Spread (NMCEB) of the solutions of SC-NMF-Spec obtained on the 45 synthetic cubes with 100 realizations of noise.
C.4. Results with SC-NMF-Map
The SC-NMF-Map hybrid version (Fig. C.6) yields results with the same trend as that obtained by SpaceCorr used alone (Fig. C.3), whatever the number of sources. Moreover, SC-NMF-Map results in significantly lower estimation errors, especially in the difficult cases. The spread of the results given by SC-NMF-Map (Fig. C.7) becomes negligible, with an improvement of an order of magnitude over the amplitude of the error bars of the SC-NMF-Spec hybrid version.
Fig. C.6.
Performance achieved by SC-NMF-Map on the 45 synthetic cubes for 100 realizations of noise with an SNR of 30 dB (in blue), 20 dB (in red), and 10 dB (in black). The error bars give the standard deviation over the 100 realizations of noise.
Fig. C.7.
Spread (NMCEB) of the solutions of SC-NMF-Map obtained on the 45 synthetic cubes with 100 realizations of noise.
C.5. Results with MASS
MASS gives excellent results in the case of sufficiently sparse data and in the presence of a reasonable noise level (20 or 30 dB SNR). We further note that, for mixtures of two or four sources, the results remain correct even under the least favorable conditions of sparsity, again with an SNR of 20 or 30 dB.
The main disadvantage highlighted by the synthetic data tests is the sensitivity of the method to high noise levels. Indeed, the performance for the mixtures with a 10 dB SNR is poor, even under optimal conditions of sparsity. This sensitivity comes from the structure of the method, which estimates the mixing matrix by directly selecting its columns from the observed vectors. The introduction of a tolerance angle in this selection has an effect at reasonable noise levels but becomes less effective at higher noise levels. Moreover, reducing the tolerance threshold would allow a greater robustness to noise, but to the detriment of the component separation: non-single-source observations would then be used in the estimation of the columns of the mixing matrix. The sensitivity of MASS to the noise level can be mitigated by the hybridization with MC-NMF.
C.6. Results with MASS-NMF-Spec
The MASS-NMF-Spec hybrid version (Fig. C.9) yields the same trends as those obtained by SC-NMF-Spec (Fig. C.4). In comparison with the MASS method used alone, the main advantage is an overall improvement of the performance criteria for the mixtures with a 10 dB SNR. The MASS method used alone in this configuration gave unsatisfactory results; they are markedly improved by the hybridization with MC-NMF. Finally, we note the major decrease in the spread of the solutions given by MASS-NMF-Spec (Fig. C.10) compared to that encountered with MC-NMF (Fig. C.2), especially for four or six sources. The error bars encountered here are of the same order of magnitude as those obtained with SC-NMF-Spec (Fig. C.5).
C.7. Results with MASS-NMF-Map
The MASS-NMF-Map hybrid version (Fig. C.11) yields the same trends as those obtained by MASS used alone (Fig. C.8). Again, there is a noticeable improvement in the performance for the mixtures with a 10 dB SNR compared to MASS used alone, as well as a clear improvement in the spread of the solutions (Fig. C.12) compared to that obtained with the MC-NMF method alone (Fig. C.2), especially for 4 or 6 sources.
Fig. C.8.
Performance achieved by MASS on the 45 synthetic cubes for 100 realizations of noise with an SNR of 30 dB (in blue), 20 dB (in red), and 10 dB (in black). The error bars give the standard deviation over the 100 realizations of noise.
Fig. C.9.
Performance achieved by MASS-NMF-Spec on the 45 synthetic cubes for 100 realizations of noise with an SNR of 30 dB (in blue), 20 dB (in red), and 10 dB (in black). The error bars give the standard deviation over the 100 realizations of noise.
Fig. C.10.
Spread (NMCEB) of the solutions of MASS-NMF-Spec obtained on the 45 synthetic cubes with 100 realizations of noise.
Fig. C.11.
Performance achieved by MASS-NMF-Map on the 45 synthetic cubes for 100 realizations of noise with an SNR of 30 dB (in blue), 20 dB (in red), and 10 dB (in black). The error bars give the standard deviation over the 100 realizations of noise.
Fig. C.12.
Spread (NMCEB) of the solutions of MASS-NMF-Map obtained on the 45 synthetic cubes with 100 realizations of noise.
C.8. Results for very low signal-to-noise ratios
Finally, we performed additional tests for the two methods which appeared to be the most attractive ones in Sect. 5.3.3, namely MASS-NMF-Map and SC-NMF-Map. These tests aim at further analyzing the behavior of these preferred methods for very low SNRs, that is, 5, 3, and 1 dB. The results thus obtained are shown in Figs. C.13 and C.14. They show that SC-NMF-Map here yields a significantly better performance than MASS-NMF-Map, as opposed to the results obtained in Sect. 5.3.3 for significantly higher SNRs. This difference is consistent with the comments that we provided in Appendix C.5: although SC-NMF-Map is constraining in terms of sparsity requirements, it has the advantage of averaging the data over an analysis zone (instead of using a single point, as in the basic version of the MASS-based methods), which reduces its sensitivity to noise. For very noisy data, such as those considered here, this feature is of utmost importance, so that SC-NMF-Map yields a better performance than MASS-NMF-Map.
Fig. C.13.
Performance achieved by SC-NMF-Map on the 45 synthetic cubes for 100 realizations of noise with an SNR of 5 dB (in blue), 3 dB (in red), and 1 dB (in black). The error bars give the standard deviation over the 100 realizations of noise.
Fig. C.14.
Performance achieved by MASS-NMF-Map on the 45 synthetic cubes for 100 realizations of noise with an SNR of 5 dB (in blue), 3 dB (in red), and 1 dB (in black). The error bars give the standard deviation over the 100 realizations of noise.
C.9. Results for asymmetric scenes
Here we test the effect of breaking the symmetry of the spatial scene presented in Fig. 4. We do this in two ways: (1) we decrease the size of some sources; (2) we displace one of the sources outside the square grid considered so far. These two cases are illustrated in Fig. C.15. We have considered other variations as well (e.g., a smaller or larger displacement). Overall, the analysis of the results shows that the methods provide results with performances similar to those obtained for the standard cases described in the core of the paper. Hence, the symmetry of the scene does not appear to be critical, as long as the important hypotheses required by the methods (nonnegativity and sparsity) are verified.
Fig. C.15.
Two cases of symmetry breaking for the maps: reduction of the size of two spatial sources (left) and displacement of one source (right). 