C4AI Command R+: Discussions

We enjoy hearing AI enthusiasts compare various models, and today, we're exploring discussions about C4AI Command R+ from CohereForAI 😊.

Model Summary: C4AI Command R+ is a 104-billion-parameter open-weights model from Cohere and Cohere For AI. It excels at automating complex tasks using Retrieval-Augmented Generation (RAG) and multi-step tool use, and is optimized for reasoning, summarization, and question answering. The model supports 10 languages, including English, Spanish, and Chinese.

  • Size: 104 billion parameters
  • Multilingual: 10 languages (English, French, Spanish, Italian, German, Brazilian Portuguese, Japanese, Korean, Simplified Chinese, and Arabic)
  • Uses: Reasoning, summarization, Q&A
  • License: CC-BY-NC (with C4AI's Acceptable Use Policy)

Many users are consistently impressed by the detailed, in-depth, and sensible responses of Command R+; some even compare it favorably to early GPT-4 for its mature, deep understanding of subjects. The community notes that Command R+ excels at managing long contexts: it does not just pick out the required quotes but also generalizes over the content and grasps its nuances. Despite not being specifically trained on Norwegian 🇳🇴, Command R+ produces intelligent summaries of Norwegian texts, showcasing its text understanding across languages.

Command R+ vs. Llama-3-70B

Comparisons between LLMs in these ‘discussion battles’ are valuable because they reveal real nuances and business cases. In recent comparisons between Command R+ and Llama-3-70B, users have shared mixed opinions. Some tried the Command R+ demo on HuggingFace but preferred the output of Llama-3-70B, reserving Command R+ for cases where a 100,000-token context window is needed. One user called the Llama-3 70B Q6_K quant fantastic, while a smaller quant of Command R+ tended to ramble and did not match its quality of responses. On the other hand, Command R+ at Q4 (22 GB) was highlighted as the only locally run LLM that consistently gave good replies to logic puzzles, and despite Llama-3 70B's impressive performance, it falls short on multilingual tasks.

Interestingly, Command R+ excels in business research contexts, producing precise, detailed results grounded in statistical data, while for general inquiries Llama-3 70B Instruct tends to deliver more accurate and informative responses. Testing through the OpenRouter API and the direct API confirmed that Command R+ performed better for business use cases. Llama-3 70B Instruct, however, is far more cost-effective: roughly $0.80 per million tokens versus $15 per million for Command R+. Weighing both price and results, many users find Llama-3 the better choice.
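The price gap compounds quickly at scale. A minimal back-of-envelope sketch, using the per-million-token prices quoted above (which reflect this discussion's snapshot and change frequently; the model keys and the 20-million-token workload are illustrative assumptions):

```python
# Per-million-token API prices as quoted in the discussion above.
# These are a snapshot, not current pricing; model keys are illustrative.
PRICE_PER_MILLION = {
    "llama-3-70b-instruct": 0.80,   # USD per 1M tokens
    "command-r-plus": 15.00,        # USD per 1M tokens
}

def token_cost(model: str, tokens: int) -> float:
    """Estimate the spend in USD for a given token volume."""
    return PRICE_PER_MILLION[model] * tokens / 1_000_000

# A hypothetical workload of 20 million tokens per month:
for model, price in PRICE_PER_MILLION.items():
    print(f"{model}: ${token_cost(model, 20_000_000):.2f}")
# llama-3-70b-instruct comes to $16.00, command-r-plus to $300.00
```

At this volume the quoted prices put Command R+ at nearly 19x the cost of Llama-3 70B Instruct, which explains why many users treat it as a special-purpose tool rather than a default.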

NSFW and Censorship

When it comes to handling NSFW content and censorship, users have noted significant differences between Command R+ and Llama-3-70B. Llama-3-70B is reported to be extremely censored, making it difficult to get consistent responses for NSFW topics. While some users have found ways to bypass these restrictions, the solutions are often temporary, and subsequent requests might fail. In contrast, Command R+ handles NSFW storytelling without much resistance, providing responses more readily in these contexts. This reduced censorship is seen as a distinct advantage for users requiring such content.

Additionally, Command R+ is less censored compared to GPT-4, which is heavily filtered and often restricted on both API and web versions.

Lack of Emotion

Users have noted that while Command R+ is a very intelligent AI, it often lacks emotional depth, which makes it feel dry and boring to some, particularly compared with other ~100-billion-parameter models like Goliath or Midnight. There are positive aspects, though: Command R+ can write GML, the scripting language of GameMaker Studio (GMS), which is beneficial for game development, and its multilingual capabilities are strong. Role-playing (RP) and erotic role-playing (ERP) can work well, but they consume a lot of tokens and require a precise setup with system prompts.

So, despite being designed for RAG rather than role-play, Command R+ can become suitable for RP when given a substantial system prompt.
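The "substantial system prompt" users describe can be sketched as an ordinary chat-message list. This is an illustrative assumption, not an official Cohere template: the persona text, field names, and the `build_rp_messages` helper are all invented for the example (Command R+ also ships its own chat template on HuggingFace, which would be applied on top of messages like these).

```python
# Illustrative sketch of an RP setup via a detailed system prompt.
# The persona and instructions are invented examples, not Cohere's
# recommended prompt; real setups are typically much longer.

def build_rp_messages(persona: str, scenario: str, user_turn: str) -> list[dict]:
    """Assemble a chat history whose first message is a substantial system prompt."""
    system_prompt = (
        f"You are {persona}. Stay in character at all times.\n"
        f"Scenario: {scenario}\n"
        "Write vivid, multi-paragraph replies and never break the fourth wall."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_turn},
    ]

messages = build_rp_messages(
    persona="a sardonic ship's AI aboard a derelict freighter",
    scenario="the crew has just woken from cryosleep",
    user_turn="Status report, please.",
)
print(messages[0]["role"])  # system
```

The point of the pattern is simply that the character, scenario, and style constraints all live in the system message, so every later turn is interpreted through that frame.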

License Restrictions

The CC-BY-NC license restricts Command R+ to non-commercial use. This is a significant drawback for users who create valuable content and wish to monetize it: unlike models under Apache 2.0 or Meta's more permissive licenses, Command R+ cannot be used commercially. Some see this restrictive license as a reason for the model's relatively low popularity, since it limits the potential for broader applications and monetization.

However, there's a different perspective. Some users argue that the license still allows meaningful use, such as open-source projects and personal endeavors, and you can freely share brilliant outputs from the model online for others to enjoy. Since the license permits personal use and research, it remains well suited to non-commercial projects.

Original data from HuggingFace, OpenCompass and various public git repos.
Release v20241124