Fujitsu picks model-maker Cohere as its partner for the rapid LLM-development dance

Will become exclusive route to market for joint projects


Fujitsu has made a "significant investment" in Toronto-based Cohere, a developer of large language models and associated tech, and will bring the five-year-old startup's wares to the world.

The relationship has four elements, one of which will see the two work on a Japanese-language LLM that's been given the working title Takane. Fujitsu will offer Takane to its Japanese clients. Takane will be based on Cohere's latest LLM, Command R+, which we're told features "enhanced retrieval-augmented generation capabilities to mitigate hallucinations."

The duo will also build models "to serve the needs of global businesses."

The third element of the relationship will see Fujitsu appointed the exclusive provider of jointly developed services. The pair envisage those services as private cloud deployments "to serve organizations in highly regulated industries including financial institutions, the public sector, and R&D units."

The fourth and final element of the deal will see Takane integrated with Fujitsu's generative AI amalgamation technology – a service that selects, and if necessary combines, models to get the best tools for particular jobs.

It's 2024, so no IT services provider can afford not to be developing generative AI assets and partnerships. To do otherwise is to risk missing out on the chance of winning business in the hottest new enterprise workload for years, and thereby forgetting the time-honored enterprise sales tactic of "land and expand." At worst – if things go pear-shaped – those generative AI deployments end up as siloed apps that become legacy tech and can be milked for years.

This deal is notable, given the likes of OpenAI, Mistral AI, and Anthropic are seen as the LLM market leaders worthy of ring-kissing by global tech players.

By partnering with Canadian Cohere, Fujitsu has taken a different path – and perhaps differentiated itself.

Cohere is not, however, a totally left-field choice. Nvidia and Cisco have invested in the biz, and its models are sufficiently well regarded and in demand that AWS, Microsoft, and Hugging Face have all included its wares in their model marts. ®
