Publications

Most of the information presented on this page has been retrieved from RePEc with the kind authorization of Christian Zimmermann.
Work organization in social enterprises: A source of job satisfaction?
Journal article. Xavier Joutard, Francesca Petrella and Nadine Richez-Battesti, Kyklos, Volume 78, Issue 1, pp. 111-148, 2025

Many studies suggest that employees of social enterprises experience greater job satisfaction than employees of for-profit organizations, although their pay and employment contracts are usually less favorable. Based on linked employer–employee data from a French survey on employment characteristics and industrial relations and using a decomposition method developed by Gelbach (2016), this paper aims to explain this somewhat paradoxical result. Focusing on work organization variables, we show that the specific work organization of social enterprises explains a large part of the observed job satisfaction differential, both in general and, more specifically, in terms of satisfaction with access to training and working conditions. A detailed look at the components of work organization shows that the higher job satisfaction reported by employees in social enterprises stems from their greater autonomy and better access to information. In contrast to earlier studies, however, our results show that these work organization variables do not have more value for social enterprise employees than for for-profit organization employees in the case of overall job satisfaction. This result casts doubt on the widespread hypothesis that social enterprise employees attach more weight to the nonmonetary advantages of their work than their counterparts in for-profit organizations.
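
The Gelbach (2016) decomposition used in this paper rests on the omitted-variable-bias identity: the gap between the treatment coefficient in a base model and in a full model equals the sum, over added covariates, of each covariate's association with the treatment times its full-model coefficient. A minimal sketch of that identity, assuming a simple two-covariate setup (variable names and data are illustrative, not the paper's specification):

```python
import numpy as np

# Sketch of a Gelbach (2016)-style decomposition: attribute the change in
# the treatment coefficient between a base model (y on d) and a full model
# (y on d plus covariates X) to each covariate via the omitted-variable-bias
# formula. Illustrative only, not the authors' estimation code.
def gelbach_decomposition(y, d, X):
    n = len(y)
    base = np.column_stack([np.ones(n), d])
    full = np.column_stack([np.ones(n), d, X])
    beta_base = np.linalg.lstsq(base, y, rcond=None)[0][1]
    coef_full = np.linalg.lstsq(full, y, rcond=None)[0]
    beta_full = coef_full[1]
    contributions = []
    for k in range(X.shape[1]):
        # delta_k: coefficient of d in the auxiliary regression of x_k on [1, d]
        delta_k = np.linalg.lstsq(base, X[:, k], rcond=None)[0][1]
        contributions.append(delta_k * coef_full[2 + k])
    return beta_base, beta_full, contributions
```

By construction, the contributions sum exactly (in sample) to the difference between the base and full treatment coefficients, which is what makes the decomposition additive.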

Specific Sensitivity to Rare and Extreme Events: Quasi-Complete Black Swan Avoidance vs Partial Jackpot Seeking in Rat Decision-Making
Journal article. Mickael Degoulet, Louis-Matis Willem, Christelle Baunez, Stéphane Luchini and Patrick A. Pintus, eLife, Volume 13, Forthcoming

Most studies assessing animal decision-making under risk rely on probabilities that are typically larger than 10%. To study decision-making in uncertain conditions, we explore a novel experimental and modelling approach that aims at measuring the extent to which rats are sensitive - and how they respond - to outcomes that are both rare (probabilities smaller than 1%) and extreme in their consequences (deviations larger than 10 times the standard error). In a four-armed bandit task, stochastic gains (sugar pellets) and losses (time-out punishments) are such that extremely large - but rare - outcomes materialize or not depending on the chosen options. All rats feature both limited diversification, mixing two options out of four, and sensitivity to rare and extreme outcomes despite their infrequent occurrence: they combine options so as to avoid extreme losses (Black Swans) while remaining exposed to extreme gains (Jackpots). Notably, this sensitivity turns out to be one-sided for the main phenotype in our sample: it features a quasi-complete avoidance of Black Swans, so as to escape extreme losses almost completely, which contrasts with an exposure to Jackpots that is only partial. The flip side of observed choices is that they entail smaller gains and larger losses in the frequent domain compared to alternatives. We have introduced sensitivity to Black Swans and Jackpots in a new class of augmented Reinforcement Learning models and estimated their parameters using observed choices and outcomes for each rat. Adding this specific sensitivity yields a selected model that fits behavioral observations well and simulates behaviors close to them, whereas a standard Q-Learning model without such sensitivity is rejected for almost all rats. The model reproducing the main phenotype suggests that frequent outcomes are treated separately from rare and extreme ones through different weights in decision-making.
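
The idea of augmenting a reinforcement-learning model with separate weights for rare and extreme outcomes can be illustrated with a minimal sketch. Everything here (thresholds, weights, the epsilon-greedy rule) is an illustrative assumption, not the authors' estimated model:

```python
import random

# Illustrative sketch: a Q-learning update in which rare-and-extreme
# outcomes receive their own sensitivity weights, so Black Swans and
# Jackpots are valued differently from frequent, moderate outcomes.
# All parameter values are hypothetical.
def update_q(q, arm, reward, alpha=0.1,
             jackpot_threshold=10.0, blackswan_threshold=-10.0,
             w_jackpot=2.0, w_blackswan=3.0):
    """One Q-learning step; extreme outcomes get separate weights."""
    if reward >= jackpot_threshold:        # rare, extremely large gain
        effective = w_jackpot * reward
    elif reward <= blackswan_threshold:    # rare, extremely large loss
        effective = w_blackswan * reward
    else:                                  # frequent, moderate outcome
        effective = reward
    q[arm] += alpha * (effective - q[arm])
    return q

# Epsilon-greedy choice over a four-armed bandit
def choose_arm(q, eps=0.1):
    if random.random() < eps:
        return random.randrange(len(q))
    return max(range(len(q)), key=lambda a: q[a])
```

With `w_blackswan > w_jackpot`, as sketched here, extreme losses are penalized more heavily than extreme gains are rewarded, which is one way to encode the one-sided sensitivity the abstract describes.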

Financial and Oil Market’s Co-Movements by a Regime-Switching Copula
Journal article. Manel Soury, Econometrics, Volume 12, Issue 2, pp. 14, Forthcoming

Over the years, oil prices and financial stock markets have always had a complex relationship. This paper analyzes the interactions and co-movements between the oil market (WTI crude oil) and two major stock markets in Europe and the US (the Euro Stoxx 50 and the S&P 500) for the period from 1990 to 2023. For that, I use both the time-varying and the Markov copula models. The latter extends the former: the constant term of the dynamic dependence parameter is driven by a hidden two-state first-order Markov chain. It is also called the dynamic regime-switching (RS) copula model. To estimate the model, I use the inference function for margins (IFM) method together with Kim’s filter for the Markov switching process. The marginals of the returns are modeled by the GARCH and GAS models. Empirical results show that the RS copula model seems adequate to measure and evaluate the time-varying and non-linear dependence structure. Two persistent regimes of high and low dependency have been detected. There was a jump in the co-movements of both pairs during high regimes associated with instability and crises. In addition, the extreme dependence between crude oil and US/European stock markets is time-varying but also asymmetric, as indicated by the SJC copula. The correlation in the lower tail is higher than that in the upper. Hence, oil and stock returns are more closely joined and tend to co-move more closely together in bearish periods than in bullish periods. Finally, the dependence between WTI crude oil and the S&P 500 stock index seems to be more affected by exogenous shocks and instability than the oil and European stock markets.
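
The asymmetric tail dependence described here (stronger co-movement in the lower tail) can be illustrated with a simple empirical estimator on rank-transformed data. This is a generic sketch under a fixed tail threshold, not the SJC-copula estimation used in the paper:

```python
import numpy as np

# Empirical tail-dependence sketch: given pseudo-observations u, v
# (ranks rescaled to (0, 1)), estimate lower- and upper-tail co-movement
# at threshold q. Generic illustration, not the paper's copula fit.
def empirical_tail_dependence(u, v, q=0.05):
    u, v = np.asarray(u), np.asarray(v)
    lower = np.mean((u <= q) & (v <= q)) / q          # joint crashes
    upper = np.mean((u >= 1 - q) & (v >= 1 - q)) / q  # joint booms
    return lower, upper
```

A lower-tail value exceeding the upper-tail value, as the abstract reports for oil and stock returns, means the two series crash together more often than they boom together.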

Random Informative Advertising with Vertically Differentiated Products
Journal article. Rim Lahmandi-Ayed and Didier Laussel, Games, Volume 15, Issue 2, pp. 10, Forthcoming

We study a simple model in which two vertically differentiated firms compete in prices and mass advertising on an initially uninformed market. Consumers differ in their preference for quality. There is an upper bound on prices since consumers cannot spend more on the good than a fixed amount (say, their income). Depending on this income and on the ratio between the advertising cost and quality differential (relative advertising cost), either there is no equilibrium in pure strategies or there exists one of the following three types: (1) an interior equilibrium, where both firms have positive natural markets and charge prices lower than the consumer’s income; (2) a constrained interior equilibrium, where both firms have positive natural markets, and the high-quality firm charges the consumer’s income; or (3) a corner equilibrium, where the low-quality firm has no natural market, selling only to uninformed customers. We show that no corner equilibrium exists in which the high-quality firm would have a null natural market. At an equilibrium (whenever one exists), the high-quality firm always advertises more, charges a higher price and makes a higher profit than the low-quality one. As the relative advertising cost goes to infinity, prices become equal and the advertising intensities converge to zero, as do the profits. Moreover, the advertising intensities are, at least globally, increasing with the quality differential. Finally, in all cases, as the advertising parameter cost increases unboundedly, both prices converge increasingly towards the consumer’s income.

Prioritisation of infectious diseases from a public health perspective: a multi-criteria decision analysis study, France, 2024
Journal article. Dominique Ploin, Mathilde Alexandre, Bruno Ventelou, Didier Che, Bruno Coignard, Nathalie Boulanger, Christophe Burucoa, François Caron, Pierre Gallian, Yves Hansmann, et al., Eurosurveillance, Volume 29, Issue 50, pp. 2400074, Forthcoming

Background Within the International Health Regulations framework, the French High Council for Public Health was mandated in 2022 by health authorities to establish a list of priority infectious diseases for public health, surveillance and research in mainland and overseas France. Aim Our objective was to establish this list. Methods A multi-criteria decision analysis was used, as recommended by the European Centre for Disease Prevention and Control. A list of 95 entities (infectious diseases or groups of these, including the World Health Organization (WHO)-labelled ‘Disease X’) was established by 17 infectious disease experts. Ten criteria were defined to score entities: incidence rate, case fatality rate, potential for emergence and spread, impact on the individual, on society, on socially vulnerable groups, on the healthcare system, and need for new preventive tools, new curative therapies, and surveillance. Each criterion was assigned a relative weight by 77 multidisciplinary experts. For each entity, 98 physicians from various specialties rated each criterion against the entity, using a four-class Likert-type scale; the ratings were converted into numeric values with a nonlinear scale and respectively weighted to calculate the entity score. Results Fifteen entities were ranked as high-priorities, including Disease X and 14 known pathologies (e.g. haemorrhagic fevers, various respiratory viral infections, arboviral infections, multidrug-resistant bacterial infections, invasive meningococcal and pneumococcal diseases, prion diseases, rabies, and tuberculosis). Conclusion The priority entities agreed with those of the WHO in 2023; almost all were currently covered by the French surveillance and alert system. Repeating this analysis periodically would keep the list updated.
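
The scoring step of such a multi-criteria decision analysis can be sketched as follows. The Likert-to-numeric mapping and the weights below are illustrative assumptions; the actual nonlinear scale and criterion weights were set by the study's expert panels:

```python
# Illustrative MCDA scoring sketch: ratings on a four-class Likert-type
# scale are mapped to numeric values with a nonlinear scale, then combined
# with criterion weights into an entity score. The mapping and weights
# here are hypothetical, not those of the study.
LIKERT_TO_NUMERIC = {1: 0.0, 2: 1.0, 3: 4.0, 4: 9.0}  # nonlinear scale

def entity_score(ratings, weights):
    """Weighted sum of nonlinearly rescaled Likert ratings.

    ratings: dict criterion -> Likert class (1..4)
    weights: dict criterion -> relative weight (same keys)
    """
    total_weight = sum(weights.values())
    score = sum(weights[c] * LIKERT_TO_NUMERIC[ratings[c]]
                for c in ratings)
    return score / total_weight
```

A nonlinear mapping like this one makes the top Likert class count disproportionately, so entities rated "very high" on heavily weighted criteria rise to the top of the ranking.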

Does improving diagnostic accuracy increase artificial intelligence adoption? A public acceptance survey using randomized scenarios of diagnostic methods
Journal article. Yulin Hswen, Ismael Rafai, Antoine Lacombe, Bérengère Davin-Casalena, Dimitri Dubois, Thierry Blayac and Bruno Ventelou, Artificial Intelligence in Health, Volume 2, Issue 1, pp. 114-120, Forthcoming

This study examines the acceptance of artificial intelligence (AI)-based diagnostic alternatives compared to traditional biological testing through a randomized scenario experiment in the domain of neurodegenerative diseases (NDs). A total of 3225 pairwise choices of ND risk-prediction tools were offered to participants, with 1482 choices comparing AI with the biological saliva test and 1743 comparing AI+ with the saliva test (with AI+ using digital consumer data, in addition to electronic medical data). Overall, only 36.68% of responses showed preferences for AI/AI+ alternatives. Stratified by AI sensitivity levels, acceptance rates for AI/AI+ were 35.04% at 60% sensitivity and 31.63% at 70% sensitivity, and increased markedly to 48.68% at 95% sensitivity (p < 0.01). Similarly, acceptance rates by specificity were 29.68%, 28.18%, and 44.24% at 60%, 70%, and 95% specificity, respectively (p < 0.01). Notably, AI consistently garnered higher acceptance rates (45.82%) than AI+ (28.92%) at comparable sensitivity and specificity levels, except at 60% sensitivity, where no significant difference was observed. These results highlight the nuanced preferences for AI diagnostics, with higher sensitivity and specificity significantly driving acceptance of AI diagnostics.

Asymmetric Reciprocity and the Cyclical Behavior of Wages, Effort, and Job Creation
Journal article. Marco Fongoni, American Economic Journal: Macroeconomics, Volume 16, Issue 3, pp. 52-89, Forthcoming

This paper develops a search and matching framework in which workers are characterized by asymmetric reference-dependent reciprocity and firms set wages by considering the effect that these can have on workers' effort and, therefore, on output. The cyclical response of effort to wage changes can considerably amplify shocks, independently of the cyclicality of the hiring wage, which becomes irrelevant for unemployment volatility; firms' expectations of downward wage rigidity in existing jobs increase the volatility of job creation. The model is consistent with evidence on hiring and incumbents' wage cyclicality, and provides novel predictions on the dynamics of effort.

Multivariate filter methods for feature selection with the γ-metric
Journal article. Nicolas Ngo, Pierre Michel and Roch Giorgi, BMC Medical Research Methodology, Volume 24, Issue 1, pp. 307, 2024

The γ-metric value is generally used as the importance score of a feature (or a set of features) in a classification context. This study aimed to go further by creating a new methodology for multivariate feature selection for classification, whereby the γ-metric is associated with a specific search direction (and therefore a specific stopping criterion). As three search directions are used, we effectively created three distinct methods.
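
The general shape of a multivariate filter paired with one search direction (forward selection) can be sketched as below. The generic `score_fn` stands in for the γ-metric, and the stopping rule and names are illustrative assumptions, not the authors' implementation:

```python
# Sketch of a forward (sequential) multivariate filter: grow the feature
# subset greedily, keeping the candidate that most improves the score,
# and stop when no candidate improves it. `score_fn` is a stand-in for
# the gamma-metric importance score of a feature subset.
def forward_filter(features, score_fn):
    selected, best = [], float("-inf")
    remaining = list(features)
    while remaining:
        gains = [(score_fn(selected + [f]), f) for f in remaining]
        cand_score, cand = max(gains)
        if cand_score <= best:      # stopping criterion: no improvement
            break
        selected.append(cand)
        remaining.remove(cand)
        best = cand_score
    return selected, best
```

A backward variant would start from the full feature set and greedily drop features instead; the pairing of one score with one search direction is what yields a distinct method.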

The consumption-based carbon emissions effects of renewable energy and total factor productivity: The evidence from natural gas exporters
Journal article. Fakhri J. Hasanov, Rashid Sbia, Dimitrios Papadas and Ioannis Kostakis, Energy Reports, Volume 12, pp. 5974-5989, 2024

This study is the first to explore the impact of total factor productivity, renewable energy, exports, imports, and income on carbon emissions in the Gas Exporting Countries Forum (GECF) nations. To ensure that the results are sound and policy insights are well-grounded, three main issues of panel data – cross-sectional dependency, heterogeneity, and nonstationarity – are addressed using cutting-edge methods. Moreover, a theoretically justified framework is employed, offering advantages such as considering a broad set of factors that are actionable from a climate policy perspective, with the dual benefits of reducing emissions and supporting clean growth. We find that total factor productivity, renewable energy, and exports reduce carbon emissions, while income and imports have an increasing effect. Policymakers in GECF countries may consider implementing measures to support technological advancements, efficiency improvements, increased use of renewable energy, expanded exports, and lowered imports, thereby reducing emissions while promoting sustainable economic growth.

Climate pattern effects on global economic conditions
Journal article. Gilles Dufrénot, William Ginn and Marc Pourroy, Economic Modelling, Volume 141, pp. 106920, 2024

This study investigates how El Niño–Southern Oscillation (ENSO) climate patterns affect global economic conditions. Prior research suggests that ENSO phases, particularly El Niño, influence economic outcomes, but with limited consensus on their broader macroeconomic impacts. Using a novel monthly dataset from 20 economies, covering 80% of global output from 1999 to 2022, we employ a global augmented vector autoregression with local projections (GAVARLP) model. The empirical findings suggest that El Niño boosts output with minimal inflationary effects, reducing global economic policy uncertainty, while La Niña raises food inflation, which can amplify aggregate inflation as a “second-round” effect, heightening uncertainty. These findings shed light on the transmission channels of climate shocks and highlight the significant role of ENSO in shaping global economic conditions, emphasizing why climate shocks should be a concern for policymakers.