
2024 Special Issue on Responsible Human-Centered Artificial Intelligence


Special issue co-edited by:
Pierre-Majorique Léger, Ph.D., HEC Montréal
Foutse Khomh, Ph.D., Polytechnique Montréal
Sylvain Sénécal, Ph.D., HEC Montréal

Image: “Metaphor for artificial intelligence and human interaction, futuristic, high res” – Human × DALL·E

“AI is the new electricity. Just as electricity transformed almost everything 100 years ago, today I have a hard time thinking of an industry that I don’t think AI will transform in the next several years. But I think it’s important to be mindful about how we bring along various stakeholders in this journey.” – Andrew Ng, AI expert

We are pleased to announce a call for cases focusing on Responsible Human-Centered Artificial Intelligence for publication in the International Journal of Case Studies in Management (IJCSM) (ISSN 1911-2599).

Responsible Human-Centered Artificial Intelligence refers to the development and use of AI technologies that benefit individuals, society, and the environment while minimizing the risk of negative consequences. It encompasses five complementary principles: inclusive growth, sustainable development, and well-being; human-centered values and fairness; transparency and explainability; robustness, security, and safety; and accountability.[1]

In 2022, 98% of executives stated that they had responsible AI ambitions, yet only half had action plans,[2] and only 19% reported having a mature responsible artificial intelligence (RAI) practice that was delivering benefits.[3] There is a lack of consensus on what constitutes RAI. Moreover, organizational factors (such as expertise and talent, training and knowledge, and executive prioritization) and the complexity of operationalizing human-centered value principles throughout the AI lifecycle, beyond algorithm-level solutions, create additional challenges.

This special issue aims to provide the business community with teaching cases that focus on developing and implementing RAI in organizations. We invite academics and practitioners to submit case studies that not only explore the challenges and opportunities associated with the responsible development and implementation of AI, but also provide insights into the actionable strategies that organizations can use to design and implement responsible AI systems.

We welcome submissions on a wide range of topics, including but not limited to:

  • Ethical considerations in developing AI systems
  • Ensuring fairness and reducing bias in AI systems
  • Designing transparent and explainable AI systems
  • Managing the risks associated with AI technologies
  • Evaluating the social and ethical impact of AI
  • Building trust in AI technologies
  • Strategies for implementing responsible AI in organizations
  • Measuring the success of responsible AI initiatives
  • Evidence-based best practices of mature RAI industry leaders
  • RAI implementation journeys from start to current (partial) maturity
  • Holistic practices beyond technological solutions that cover leadership, processes, organizational culture, governance mechanisms, stakeholder engagement, regulatory compliance, etc.
  • Measuring the business benefits of RAI
  • AI-machine learning implementations in a rapidly evolving technology landscape
  • In-house AI development and the use of third-party AI tools

Extended abstracts (750–1,000 words) of proposed case studies must be submitted as Word documents by August 30, 2023. The abstract should include the case title, keywords, a brief outline/narrative, and key learnings. Authors will receive feedback on abstract submissions by October 30, 2023. Completed case studies and teaching notes must be submitted on the IJCSM’s submission platform by December 30, 2023. The final decision will be made in March 2024.

With its 2024 special issue, the IJCSM hopes to shed light on a topic of growing importance for organizations: developing and implementing responsible AI systems. This call for papers seeks compelling case studies accompanied by detailed teaching notes. Authors are invited to read the IJCSM’s definitions and guides before submitting.

When submitting a completed case study, please be sure to write "HCAI - Your case title" in the title and to select the special issue option in the submission form. A link to Editorial Manager, our submission system, can be found on our website's Instructions for authors page.


Please note that authors can submit a maximum of two cases for this special issue and that cases disclosing the organization's name (non-anonymized) will be prioritized.

Important dates:

  • Deadline for abstract submission: August 30, 2023
  • Author feedback on initial submission: no later than October 30, 2023
  • Deadline for submission of case and teaching notes: December 30, 2023
  • Author notification of first round of reviews: January 30, 2024
  • Final decision: March 2024
  • Special issue publication: June 2024

Thanks in part to a distribution agreement with Harvard Business Publishing, IJCSM cases are used in over 500 universities worldwide.

[1] OECD, Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449
[2] PwC’s 2022 AI Business Survey
[3] MIT Sloan Management Review and Boston Consulting Group, 2022

© HEC Montréal, 2024. All rights reserved.