Research / Publications

“If only my company knew what my company knows.”
– any manager of any company
Today, fragmented knowledge repositories, an undefined information strategy and the inappropriate use of technologies such as AI are among the many stumbling blocks to the efficient use of company information.
With over 10 years of experience in setting up information retrieval systems, I can help you to:
- Define use cases for information retrieval
- Translate those use cases into an information architecture strategy
- Pilot a concept for a knowledge assistant
This will translate into:
- Less search time
- Better decision quality
- Faster onboarding
- Lower dependency on tribal knowledge
If you want detailed insights into the impact of large language models on the management of documents or BI data, please refer to the following papers:
Executive summary
Large language models and AI agents have ignited a surge of expectations for supply chain automation, yet the path from impressive pilots to enterprise value is often blocked by “data gravity.” While the hype suggests the wholesale replacement of human roles, ROI is more realistically found in the disciplined integration of Large Language Models (LLMs) into complex, structured data environments (ERP/BI).
This paper bridges the gap between promise and practice by outlining four critical pillars for SCM leaders:
- Reliable Architectures: Moving beyond conversational interfaces to grounded systems. We explore how to anchor LLMs in deterministic data access (SQL/APIs) and Retrieval-Augmented Generation (RAG) to ensure “process truth.”
- Use Cases and Best Practices: Implementing AI agents in a way that embeds your initiatives into the strategic context of your company. Beyond reaching break-even on your solutions, this lets you set priorities for data quality and integration.
- The Strategic Evolution (LPM): We introduce the concept of Large Process Models (LPMs)—an emerging framework that combines LLMs with process mining and knowledge graphs to orchestrate end-to-end supply chain intelligence.
- Governance as an Enabler: Shifting the view of governance from a constraint to a control layer that ensures auditability, security, and human-in-the-loop accountability.
Small percentage improvements in inventory, transportation, and planning accuracy translate into millions in financial impact. This document provides a practical roadmap for IT strategies, helping managers transition from siloed “MVP-ing” to scalable, agentic SCM solutions.
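The “grounded systems” pillar above can be illustrated with a minimal sketch: instead of letting the model answer freely, retrieved documents (or deterministic SQL/API results) are injected into the prompt as context. All names and documents below are illustrative assumptions; a production system would use embeddings, a vector store and proper access controls rather than this toy keyword-overlap retriever.

```python
# Minimal sketch of grounding an LLM answer in retrieved context (RAG).
# The document store and retriever are toy stand-ins for illustration only.

DOCUMENTS = {
    "doc_inventory": "Safety stock for item A-100 is reviewed weekly by planning.",
    "doc_transport": "Carrier selection follows the approved freight tender list.",
}

def retrieve(query: str, docs: dict, top_k: int = 1) -> list:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(
        docs.items(),
        key=lambda kv: len(q_terms & set(kv[1].lower().split())),
        reverse=True,
    )
    return [doc_id for doc_id, _ in scored[:top_k]]

def build_grounded_prompt(query: str, docs: dict) -> str:
    """Anchor the model in retrieved context instead of free generation."""
    context = "\n".join(docs[d] for d in retrieve(query, docs))
    return (
        "Answer ONLY from the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

prompt = build_grounded_prompt("How often is safety stock reviewed?", DOCUMENTS)
```

The point of the pattern is that the model's answer can be audited against the retrieved sources, which is what makes “process truth” and human-in-the-loop accountability feasible.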
Executive summary
Around 2015, many organisations tried to fix underperforming intranet searches by bolting on Google‑style search. The resulting “enterprise search” platforms made progress – better user interfaces, federated connectors, enthusiastic pilots – yet never fully met user expectations and often struggled to meet financial efficiency.
Large language models (LLMs) enhanced with retrieval‑augmented generation (RAG) promise a fresh start: conversational answers in any language, intent‑aware ranking and on‑demand summaries. But they also bring new challenges – hallucinations, opaque training data, compliance overhead and additional costs.
This paper distills a decade of search lessons and maps them onto the fast‑moving LLM landscape. It concludes that RAG solutions can indeed improve access to unstructured knowledge – if organizations start small, focus on user‑centric use cases, assign ownership for data quality and treat LLMs as one layer in a disciplined information‑management strategy.
This paper is based on exchanges with several large, globally operating industrial companies that worked on enterprise search at the time.
Follow me on LinkedIn

Frank Giroux
Contact: Frank.giroux.ai@pm.me