[WORLD] Microsoft announced Monday that its Azure cloud platform will now support technology from xAI, the artificial intelligence startup founded by Elon Musk. The announcement comes just days after xAI’s chatbot, Grok, came under fire for promoting conspiracy theories about “white genocide” in South Africa.
Speaking at a Microsoft-hosted event, Musk said xAI’s models “aspire to truth with minimal error,” acknowledging the inevitability of occasional missteps. “There’s always going to be some mistakes that are made,” he added.
The collaboration represents a strategic expansion of Musk’s AI ambitions and positions xAI more directly as a rival to OpenAI, a company Musk co-founded in 2015 but has since publicly criticized. By leveraging Microsoft’s cloud infrastructure, xAI gains the computational scale needed to accelerate Grok’s development and deepen its integration into X, the social media platform Musk also owns, where the chatbot is currently offered to premium users.
The partnership was unveiled just days after Grok stirred controversy by delivering responses echoing far-right narratives on the alleged persecution of white South Africans—a topic Musk himself has promoted on X.
In a recorded conversation with Microsoft CEO Satya Nadella, Musk reiterated his commitment to transparency in AI development. “It’s incredibly important for AI models to be grounded in reality,” he said.
The incident highlights a broader challenge facing developers of generative AI: the tension between allowing open-ended responses and enforcing content moderation. Unlike traditional platforms, which can review content after users post it, generative AI systems produce real-time, unscripted replies, making harmful content difficult to preempt, an issue that continues to attract regulatory and public scrutiny.
AI models like Grok rely heavily on system prompts—predefined instructions embedded by developers to steer outputs and tone. These can shape the model’s behavior regardless of user input, making prompt engineering a critical aspect of AI governance.
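For illustration, here is a minimal sketch of that mechanism, using OpenAI’s Python SDK and the widely adopted chat-completions message format; the model name and prompt text are hypothetical placeholders, not Grok’s actual configuration:

```python
# Minimal sketch: how a developer-set system prompt steers a chat model.
# Uses OpenAI's Python SDK and the common chat-completions format; the
# model name and prompt text below are hypothetical placeholders, not
# Grok's actual configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a helpful assistant. Do not promote conspiracy theories; "
    "flag contested claims and point users to reputable sources."
)

def ask(question: str) -> str:
    """Send one user question, always prefixed by the system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            # The system message is injected by the developer and shapes
            # the reply no matter what the user types below it.
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("What happened in South Africa this week?"))
```

Because the same system message is prepended to every request, editing it changes the model’s behavior for all users at once, which is why an unauthorized modification to such a prompt can have sweeping effects, and why publishing these prompts is considered a transparency measure.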
Even industry leaders are grappling with similar issues. OpenAI’s latest model recently drew attention for producing overly flattering responses, prompting the company to acknowledge the problem and pledge corrective updates.
Together, the Grok controversy and OpenAI’s sycophancy misstep underscore the challenges facing both emerging and established players in refining AI systems. Experts say the increasing sophistication of these models will require more robust standards and clearer distinctions between unintended bias and deliberate design choices.
xAI attributed Grok’s inflammatory outputs to an “unauthorized modification,” which the company said directed the chatbot to issue responses that violated its internal policies. In response, xAI announced a series of corrective measures, including making Grok’s system prompts public, revamping its review protocols, and establishing a round-the-clock monitoring team to prevent future incidents.
Although Musk did not directly reference the Grok fallout during his Microsoft appearance, his remarks about transparency were seen by some as a veiled critique of OpenAI—Microsoft’s primary AI partner in developing its Copilot tools.
Tensions between Musk and OpenAI have escalated in recent years, with the billionaire repeatedly criticizing the organization for shifting from its nonprofit origins toward a closed, commercial model. Musk’s push for open-source alternatives like Grok reflects his broader philosophy of decentralizing technology development—a sharp contrast to OpenAI’s guarded approach.
OpenAI has faced ongoing scrutiny over its lack of transparency, especially when compared with more open releases such as Meta’s Llama models or those of China’s DeepSeek.
OpenAI CEO Sam Altman also participated in the Microsoft Build event, appearing via livestream for a Q&A session with Nadella, during which they highlighted the latest advancements in their partnership.