ChatGPT: the AI standard, but not always the right fit?
Since late 2022, ChatGPT has popularized the use of artificial intelligence. Whether for personal or professional use, many of us have explored its capabilities in our daily tasks: text generation, rephrasing, summarizing...
But this rise has raised a key question for businesses: Can ChatGPT truly be used in a professional setting?
Some companies answered swiftly: no. In 2023, 82% of French companies were considering banning ChatGPT from professional devices, according to a BlackBerry study.
Today, it's becoming increasingly difficult for organizations to ignore the generative AI wave. Yet, doubts remain:
Should ChatGPT be allowed? Under what conditions? And if not, what secure alternatives exist?
What do companies really expect from an AI tool?
The initial “wow” effect is no longer enough. In recent years, many organizations have taken the time to clarify their expectations when it comes to deploying AI solutions.
Here are their top concerns:
- Data security
- Adaptation to business-specific contexts
- Avoiding hallucinations and unreliable outputs
- Integration into existing tools and workflows
- Delivering real, measurable value
The limitations of ChatGPT in the workplace
ChatGPT blazed a trail and still stands among the most powerful tools in terms of speed, ease of use, and versatility.
However, more and more professionals are reporting its limitations:
Hallucination Risks
Always eager to provide an answer, but at what cost?
It’s now widely known that AI tools, including ChatGPT, can hallucinate: fabricating facts, numbers, or even historical data.
This issue is no longer anecdotal. In October 2024, OpenAI's SimpleQA benchmark found that no tested model exceeded 50% accuracy. Another study estimated that ChatGPT made errors in 51% of programming-related answers sourced from Stack Overflow.
This represents a significant risk in enterprise environments. An employee acting on unchecked ChatGPT output could make costly mistakes, especially in industries like healthcare, manufacturing, or defense.
Generic Responses
Without a well-structured, specific prompt, ChatGPT draws on the entirety of its training data, in effect a large slice of the public internet.
While this broad knowledge can be useful, it becomes problematic when answering domain-specific or context-sensitive questions, especially those involving internal acronyms or company-specific terminology.
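To make this concrete, here is a minimal sketch using the OpenAI Python client. The model name, the glossary, and the question are illustrative assumptions, not a recommended setup: the point is only that the same question yields a generic answer on its own, and a grounded one when internal context is supplied.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical internal glossary; in practice this would come from a
# vetted company knowledge base, not be hard-coded in the script.
GLOSSARY = "RSO: Radiation Safety Officer. PM: the site's Preventive Maintenance plan."

question = "What does our PM cover for pump P-101?"

# Without context, the model can only fall back on generic web knowledge.
generic = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": question}],
)

# With context, the same question is grounded in company terminology.
grounded = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": f"Answer using this internal glossary only:\n{GLOSSARY}"},
        {"role": "user", "content": question},
    ],
)

print(generic.choices[0].message.content)
print(grounded.choices[0].message.content)
```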
Lack of Human Interaction
ChatGPT is fast and self-serve, but also isolating. Users are left alone with their screen, with no easy way to:
- Verify the accuracy of answers (especially without subject-matter expertise)
- Get help when the response is irrelevant or incomplete
This lack of human touch poses three main problems:
- No safeguards or oversight to ensure factual accuracy
- No space for peer collaboration, a key driver of innovation
- No access to tacit knowledge that isn’t documented
Data Security and Sovereignty
Developed by U.S.-based OpenAI, ChatGPT raises concerns among European companies that prioritize local data control.
Due to the U.S. CLOUD Act, data processed by ChatGPT could in theory be accessed by U.S. authorities, a red flag for sectors like defense, healthcare, and energy.
There are also persistent questions about compliance, especially after Italy's data-protection authority fined OpenAI €15 million in December 2024 for GDPR breaches.
What Are the Real Alternatives to ChatGPT?
The AI market is expanding rapidly. Several alternatives now exist:
- Mistral AI and its Le Chat assistant: a French-made alternative
- Industry-specific or use-case-based solutions now integrating AI
- Custom internal solutions using open-source or proprietary models
- SaaS tools that wrap large models like GPT in secure, controlled environments
Ask for the Moon’s Approach: AI Built for Critical Expertise
Reliability: Assistants That Say “I Don’t Know”
In the industrial world, reliability trumps speed. There’s no room for hallucinated answers.
Our Assistants are constrained to respond only using vetted sources selected by in-house experts.
If no valid answer is found, they say “I don’t know” rather than generating misleading or vague content.
“As an industrial company, we cannot afford vague or inaccurate answers. Ask for the Moon provided us with new assistants that impressed us with their precision and relevance compared to generic AIs. We fast-tracked deployment to meet our reliability standards.”
— Olivier Duhoux, National Technical Support Manager, DEF
Precision: Hyper-Specialized Assistants
Ask for the Moon’s AI Assistants are built by and for internal subject-matter experts.
They draw only from authorized sources, not an entire document repository or the public web.
Example:
If you ask about a decontamination calculation method, our Assistant will reference the official ASN guide selected by your internal radiation safety lead, not a forgotten internship report on SharePoint or an outdated blog post from the web.
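Ask for the Moon has not published its implementation, so the following is only a schematic sketch of the general pattern: retrieval restricted to vetted sources, with an explicit abstention threshold. The corpus, threshold, and TF-IDF scoring below are assumptions made purely for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical vetted corpus: only documents approved by in-house experts.
VETTED_SOURCES = [
    "Decontamination factors are computed per the ASN guide, section 4.2 ...",
    "Pump P-101 preventive maintenance interval is six months ...",
]

THRESHOLD = 0.35  # assumed cut-off; tuned per deployment in practice

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(VETTED_SOURCES)

def answer(question: str) -> str:
    """Return the best vetted source, or abstain when nothing matches."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_matrix)[0]
    best = scores.argmax()
    if scores[best] < THRESHOLD:
        # Abstain rather than risk a fabricated answer.
        return "I don't know"
    return VETTED_SOURCES[best]

print(answer("How are decontamination factors computed?"))  # matches the ASN guide
print(answer("Who won the 2018 World Cup?"))                # abstains
```

In a full system, the matched passage would be handed to a language model with instructions to answer from it alone; the key design choice here is the abstention branch, which trades coverage for reliability.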
Humans at the Core
Our Assistants are built collaboratively with internal experts to ensure human validation of all knowledge used.
But we go further:
Because we believe in the power of peer interaction, our Assistants are designed to connect employees with one another when the AI doesn’t have an answer. These exchanges are then fed back into the Assistant to improve future responses.
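Again, the internal design is not public; as a rough illustration of this escalate-then-capture loop (every name below is hypothetical):

```python
def ask_expert(question: str) -> str:
    """Hypothetical stand-in for routing a question to a subject-matter
    expert (e.g. via chat or ticketing) and collecting their reply."""
    return input(f"[routed to expert] {question}\n> ")

def handle(question: str, knowledge_base: dict[str, str]) -> str:
    """Answer from vetted knowledge, or escalate and capture the exchange."""
    if question in knowledge_base:
        return knowledge_base[question]
    # No vetted answer: connect the employee with a peer expert
    # instead of guessing.
    expert_answer = ask_expert(question)
    # Feed the validated exchange back so the next person asking the
    # same question gets an immediate, human-vetted answer.
    knowledge_base[question] = expert_answer
    return expert_answer
```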
Security and Sovereignty
Ask for the Moon is a French company with European investors, ISO 27001 certified, and trusted by industry leaders like Airbus and Framatome.
We offer flexible data hosting options, tailored to each organization’s needs.
Simple Deployment
- No code
- No complex IT integration
- No model training required
An expert can be trained on the tool and build their first Assistant in under two hours.
In conclusion, ChatGPT opened the door. But companies are now looking for more tailored, secure, and business-aligned solutions.
The market is evolving—and contextual, specialized AI is taking center stage.
🚀 Exploring alternatives to ChatGPT? Discover Ask for the Moon’s AI Assistants, built by and for your internal experts.