Shrestha, Y. R., Ben-Menahem, S. M., & Von Krogh, G. (2019). Organizational Decision-Making Structures in the Age of Artificial Intelligence. California Management Review, 61(4), 66–83. https://doi.org/10.1177/0008125619862257
Summary
In this important article published in California Management Review, Shrestha and colleagues explore how artificial intelligence (AI) is reshaping the way organizations make decisions. The paper compares AI-based decision-making with human judgment along five critical dimensions: specificity of the decision space, interpretability, size of the alternative set, decision-making speed, and replicability. Recognizing that neither AI nor human judgment is perfect on its own, the authors propose three decision-making structures that combine human and AI capabilities to varying degrees.
These structures (full delegation to AI, hybrid sequential models, and aggregated human-AI models) are designed to leverage the complementary strengths of humans and machines. For instance, AI can rapidly evaluate massive datasets and identify patterns that humans can’t easily detect. Humans, on the other hand, excel at intuition, moral reasoning, and contextual judgment, especially when information is ambiguous or objectives are poorly defined.
However, the paper warns of potential pitfalls when organizations adopt AI without proper structure or governance. It highlights risks like algorithmic bias, ethical concerns, lack of interpretability (the “black box” problem), and overreliance on automation. It calls for deliberate, thoughtful design of decision-making systems where humans retain accountability and supervise AI output, particularly in high-stakes or ethically sensitive areas.
The article closes by urging leaders to consider contextual factors—like how structured the decision space is, or how important transparency is—when deciding how to balance human and AI input.
Human vs. AI Decision-Making: Different Strengths, Different Limits
At the foundation of the article is a recognition that human decision-making and AI-based decision-making are fundamentally different—not just in terms of tools or techniques, but in how decisions are processed, justified, and executed.
Humans draw on experience, intuition, emotional reasoning, and ethical values. This allows us to deal with ambiguity, make sense of conflicting goals, and reason morally. But humans are also prone to bias, fatigue, and inconsistency, and we are limited by the amount of data we can process at once.
In contrast, artificial intelligence systems—particularly machine learning and deep learning models—excel at analyzing large datasets, identifying subtle patterns, and producing decisions at scale and speed. AI is consistent, data-driven, and tireless. But it is also limited by its training data and often lacks transparency. Many AI models operate as “black boxes,” meaning they can produce highly accurate results without offering an explanation that humans can easily interpret.
Rather than suggesting one is better than the other, the authors argue that these strengths and weaknesses are complementary. The most effective organizational decisions will come from systems that balance the logic and scale of AI with the judgment and contextual awareness of humans.
Five Dimensions That Distinguish Human and AI Decision-Making
To help organizations design better decision systems, the authors identify five core dimensions where AI and human decision-making differ. These dimensions serve as a framework to determine which tasks should be automated, which should remain human-led, and which require collaboration.
a. Specificity of the Decision Space
AI performs best when decisions have clear objectives, constraints, and measurable outcomes. For example, setting ad prices or identifying fraudulent transactions fits this model. Humans, however, excel when the problem is ambiguous or ill-defined, such as deciding whether to pivot a business model or respond to a PR crisis.
b. Interpretability
Human decision-making is often more transparent and explainable. When a manager explains a choice, others can usually follow the reasoning. In contrast, many AI models—especially deep learning algorithms—are opaque, making it hard to understand how a particular decision was reached. In regulated or high-stakes domains (e.g., hiring, lending, healthcare), interpretability is critical.
c. Size of the Alternative Set
Humans are limited in the number of options we can meaningfully evaluate at once. AI can handle millions of scenarios or combinations, making it ideal for complex optimization problems like logistics, pricing, or resource allocation.
d. Decision-Making Speed
AI makes decisions almost instantaneously, which is invaluable in contexts like high-frequency trading, cybersecurity responses, or dynamic pricing. Humans are significantly slower—though often more thoughtful—especially when consequences are severe.
e. Replicability
AI produces consistent decisions under the same conditions. Human decisions vary due to context, stress, or subjective judgment. This makes AI better suited for tasks where consistency is critical, such as quality control or compliance screening.
Understanding these differences allows organizations to assess which types of decisions are better handled by AI, humans, or a combination of both.
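To make the framework concrete, here is a minimal, hypothetical routing sketch in Python. The dimension flags, the threshold on the alternative set, and the mapping to structures are all illustrative assumptions; the paper presents the five dimensions as a conceptual lens, not an algorithm.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    """Illustrative profile of a decision along the paper's five dimensions."""
    well_specified: bool      # clear objectives, constraints, measurable outcomes
    needs_explanation: bool   # regulated or high-stakes domain
    alternatives: int         # size of the option set
    needs_speed: bool         # must be decided in (near) real time
    needs_consistency: bool   # identical inputs should yield identical outputs

def suggest_structure(d: Decision) -> str:
    """Toy routing rule: map a decision profile to one of the three structures."""
    if d.well_specified and d.needs_speed and not d.needs_explanation:
        return "full delegation to AI"
    if d.alternatives > 1_000 or d.needs_consistency:
        return "hybrid sequential (AI pre-filters, human decides)"
    return "aggregated human-AI (combine weighted judgments)"

# Example: fraud screening is well specified, time-critical, and low on
# explanation needs, so the toy rule routes it to full delegation.
print(suggest_structure(Decision(True, False, 1_000_000, True, True)))
```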
Full Delegation to AI: When Machines Can Take the Lead
The first structure the article outlines is full delegation of decision-making to AI, where machines are empowered to make decisions without human intervention. This model is appropriate when:
- The problem space is clearly defined
- Speed and volume are crucial
- Interpretability is less critical
- The risk of error is manageable or reversible
Examples include:
- Online advertising algorithms that adjust bids in real-time
- Dynamic pricing engines in e-commerce
- Fraud detection systems in banking
In these domains, AI delivers speed, scale, and accuracy that humans simply can’t match. However, there are caveats. If the AI is trained on biased data, it may perpetuate discrimination or make flawed decisions. Furthermore, even when machines are in control, humans remain responsible for monitoring, auditing, and correcting these systems.
The authors stress that full automation should never mean full abdication. Organizations must maintain oversight mechanisms to detect failure modes, assess fairness, and retain accountability—especially in ethically sensitive areas.
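As a toy illustration of oversight without intervention, the sketch below wraps a fully delegated decision in an audit log so humans can monitor, audit, and correct the system after the fact. The fraud-screening framing, the stand-in scoring function, and the threshold are assumptions for illustration, not details from the paper.

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("fraud-screen")

def fully_delegated_decision(
    score_fn: Callable[[dict], float],  # e.g., a trained model's risk score
    txn: dict,
    threshold: float = 0.9,             # illustrative cutoff
) -> bool:
    """AI decides alone, but every decision is logged for human audit."""
    score = score_fn(txn)
    blocked = score >= threshold
    # Oversight hook: humans do not see the decision in real time, but
    # auditors can replay logged scores to detect drift, bias, or failure modes.
    log.info("txn=%s score=%.3f blocked=%s", txn.get("id"), score, blocked)
    return blocked

# Usage with a hypothetical stand-in scoring function:
decision = fully_delegated_decision(
    lambda t: 0.95 if t["amount"] > 10_000 else 0.1,
    {"id": "T-1", "amount": 42_000},
)
```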
Hybrid Sequential Models: Shared Decision-Making in Stages
The second structure is hybrid decision-making, where humans and AI work together in a sequential process. There are two main configurations:
a. AI-to-Human (AI First, Human Final Decision)
In this structure, AI systems filter or recommend a subset of options, which humans then evaluate and finalize. This is common in:
- Recruiting, where AI screens resumes and humans conduct interviews
- Healthcare, where AI scans medical images and doctors confirm diagnoses
- Investment, where AI recommends portfolios and managers make the call
This model combines AI’s ability to process massive amounts of information with the human ability to consider ethical, legal, or relational factors before making final decisions.
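A minimal sketch of this AI-first sequence, assuming a hypothetical candidate-scoring model: the machine ranks and shortlists, and a person makes the final call.

```python
def ai_then_human(candidates, ai_score, human_review, shortlist_size=5):
    """AI-to-human sequence: the machine filters, a human decides."""
    # Stage 1 (AI): rank all candidates and keep only the top few.
    shortlist = sorted(candidates, key=ai_score, reverse=True)[:shortlist_size]
    # Stage 2 (human): the final decision stays with a person, who can weigh
    # ethical, legal, or relational factors the model cannot.
    return human_review(shortlist)

# Usage with stand-ins for the model and the interviewer:
hired = ai_then_human(
    candidates=[{"name": "A", "fit": 0.7}, {"name": "B", "fit": 0.9}],
    ai_score=lambda c: c["fit"],          # hypothetical model output
    human_review=lambda short: short[0],  # e.g., interviews pick from the shortlist
)
```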
b. Human-to-AI (Human First, AI Final Decision)
Here, humans narrow down options, and AI then chooses the best one based on data analysis. This is used in:
- Sports analytics, where coaches define tactics and AI selects player combinations
- Predictive maintenance, where technicians flag potential issues and AI determines optimal service timing
Sequential hybrids offer a good balance of strengths but also create risks of error propagation—flawed human input can mislead AI, or vice versa. To mitigate this, organizations must carefully design workflows and accountability checkpoints.
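The human-first configuration might look like the following sketch, which also shows one simple guard against error propagation: if the human stage filters out every option, the pipeline stops rather than letting the AI optimize over an empty set. The maintenance-scheduling framing and cost function are illustrative assumptions.

```python
def human_then_ai(options, human_filter, ai_cost):
    """Human-to-AI sequence: people narrow the options, the machine optimizes."""
    # Stage 1 (human): domain experts rule out infeasible or unacceptable options.
    feasible = [o for o in options if human_filter(o)]
    if not feasible:
        raise ValueError("Human stage removed every option; revisit the constraints.")
    # Stage 2 (AI): pick the lowest-cost option among those humans allowed.
    return min(feasible, key=ai_cost)

# Usage: technicians flag acceptable service windows, AI picks the cheapest.
best = human_then_ai(
    options=[{"day": "Mon", "cost": 300}, {"day": "Sat", "cost": 120}],
    human_filter=lambda o: o["day"] != "Sat",  # e.g., no weekend downtime
    ai_cost=lambda o: o["cost"],
)
```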
Aggregated Human–AI Decision-Making: Combining Perspectives
The third model described is aggregated decision-making, where human and AI judgments are combined simultaneously into a final outcome. This can be done through:
- Voting models, where AI and humans each have weighted votes
- Blended scoring, where human and machine scores are averaged
- Multi-source input, where AI and human insights feed into a shared recommendation system
A notable example is Deep Knowledge Ventures (DKV), a Hong Kong-based venture capital firm that gave its AI system, VITAL, a formal vote on investment decisions. VITAL analyzed scientific and financial data, and its recommendations carried the same weight as those of the human partners.
The strength of aggregated systems lies in their ability to balance intuition and data. They reduce the risk of human bias while guarding against AI’s blind spots. However, these models can be complex to implement, and the decision logic may become harder to explain or justify—especially if human and AI inputs conflict.
For aggregated systems to work well, organizations need to:
- Define clear weighting rules
- Ensure transparency in scoring
- Provide mechanisms for conflict resolution
- Regularly evaluate outcomes to recalibrate the balance
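As a minimal sketch of blended scoring with a conflict-resolution check, assuming illustrative weights and an escalation threshold (the paper describes the structure conceptually and does not prescribe any particular arithmetic):

```python
def aggregate(human_score: float, ai_score: float,
              human_weight: float = 0.5, conflict_gap: float = 0.4):
    """Weighted blend of human and AI scores on a common 0-1 scale.

    Returns (blended_score, needs_escalation): strong disagreement is
    escalated for human review rather than silently averaged away.
    """
    blended = human_weight * human_score + (1 - human_weight) * ai_score
    needs_escalation = abs(human_score - ai_score) > conflict_gap
    return blended, needs_escalation

# Usage: a partner rates a deal 0.8, the model rates it 0.3 -> escalate.
score, escalate = aggregate(0.8, 0.3)
print(f"blended={score:.2f}, escalate={escalate}")
```

Escalating large disagreements instead of averaging them away is one simple way to implement the conflict-resolution mechanism listed above.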
10 Key Practical Insights for Business Owners and Managers
1. Don’t over-automate; humans must still own decisions. Even if AI makes the call, humans remain accountable for outcomes.
2. Use AI for large-scale, fast, repetitive tasks. Delegate well-defined tasks with massive data, like pricing or fraud detection.
3. Avoid full delegation for ethical or ambiguous decisions. In HR, finance, or legal areas, combine AI suggestions with human oversight.
4. Design hybrid systems where AI supports, not replaces, humans. Let AI pre-screen options, but keep human judgment in the final selection.
5. Audit AI for bias and fairness. AI can reproduce and even amplify human biases if not carefully monitored.
6. Match structure to context. High-velocity decisions may need full delegation; complex, values-based decisions need hybrid models.
7. Invest in explainable AI tools. Transparency builds trust and allows people to challenge or refine AI output.
8. Avoid “black box” decisions in sensitive areas. If you can’t explain how a decision was made, it shouldn’t determine someone’s job, loan, or legal outcome.
9. Use aggregated models for balanced decision-making. Combine human and AI inputs in strategic, cross-functional teams.
10. Build internal capabilities to evaluate and govern AI. Train staff to understand data, question models, and act on ethical concerns.
Closing Thought
As artificial intelligence continues to transform business operations, it’s not just a question of what AI can do—it’s about what it should do. This article urges leaders to move beyond hype and adopt thoughtful, structured approaches to AI integration. The message is clear: AI can enhance decisions, but only when humans design the right framework and retain control over the consequences. Managing this balance will define the competitive edge of future-ready organizations.