Special issue co-edited by:
Pierre-Majorique Léger, Ph.D., HEC Montréal
Foutse Khomh, Ph.D., Polytechnique Montréal
Sylvain Sénécal, Ph.D., HEC Montréal
[Cover image: “Metaphor for artificial intelligence and human interaction, futuristic, high res” — Human × DALL·E]
“AI is the new electricity. Just as electricity transformed almost everything 100 years ago, today I have a hard time thinking of an industry that I don’t think AI will transform in the next several years. But I think it’s important to be mindful about how we bring along various stakeholders in this journey.” – Andrew Ng, AI expert
We are pleased to announce a call for cases focusing on Responsible Human-Centered Artificial Intelligence for publication in the International Journal of Case Studies in Management (IJCSM) (ISSN 1911-2599).
Responsible Human-Centered Artificial Intelligence refers to the development and use of AI technologies that benefit individuals, society, and the environment while minimizing the risk of negative consequences. It encompasses five complementary principles: inclusive growth, sustainable development, and well-being; human-centered values and fairness; transparency and explainability; robustness, security, and safety; and accountability.
In 2022, 98% of executives stated that they had responsible AI ambitions, yet only half had action plans, and only 19% reported having a mature responsible artificial intelligence (RAI) practice that was delivering benefits. There is still no consensus on what constitutes RAI. Moreover, organizational factors (such as expertise and talent, training and knowledge, and executive prioritization) and the complexity of operationalizing human-centered value principles throughout the AI lifecycle, beyond algorithm-level solutions, create additional challenges.
This special issue aims to provide the business community with teaching cases that focus on developing and implementing RAI in organizations. We invite academics and practitioners to submit case studies that not only explore the challenges and opportunities associated with the responsible development and implementation of AI, but also provide insights into the actionable strategies that organizations can use to design and implement responsible AI systems.
We welcome submissions on a wide range of topics, including but not limited to:
Extended abstracts (750–1,000 words) of proposed case studies must be submitted as Word documents by August 30, 2023, to firstname.lastname@example.org. The abstract should include the case title, keywords, a brief outline/narrative, and key learnings. Authors will receive feedback on their abstracts by October 30, 2023. Completed case studies and teaching notes must be submitted on the IJCSM’s submission platform by December 30, 2023. Final decisions will be made in March 2024.
With its 2024 special issue, the IJCSM hopes to shed light on a topic of growing importance for organizations: developing and implementing responsible AI systems. This call for papers seeks compelling case studies accompanied by detailed teaching notes. Authors are invited to read the IJCSM’s definitions and guides before submitting.
When submitting a completed case study, please write "HCAI - Your case title" in the title field and choose one of the two options shown in the image below when submitting for the special issue. A link to Editorial Manager, our submission system, can be found on the Instructions for Authors page of our website.
Please note that authors may submit a maximum of two cases for this special issue, and that cases disclosing the organization’s name (non-anonymized) will be prioritized.
Thanks in part to a distribution agreement with Harvard Business Publishing, IJCSM cases are used in over 500 universities worldwide.
1. OECD, Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449.
2. PwC, 2022 AI Business Survey, https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-business-survey.html#intro-section
3. MIT Sloan Management Review and Boston Consulting Group, 2022.