LLM News and Articles

Thursday, 2024-07-25
18:16 OpenAI announces SearchGPT, its AI-powered search engine
18:15 OpenAI Announces SearchGPT
18:04 GenAI — Best Practices for Using Vector DB Collections
18:02 Generative AI in a Nutshell
18:01 Why Is Llama 3.1 Such a Big Deal?
16:45 What does the Transformer Architecture Tell Us?
16:43 Key Figures Shaping AI LLM Research in 2024
16:17 Understanding Meta LLM Compiler Made-Easy
16:02 Applying Jamba-Instruct to Long Context Use Cases in Snowflake Cortex AI
16:02 Porting LLMs to Local Machines, Physical Intelligence, Software Engineering & AI, and 70% Off ODSC…
16:02 5 AI Real-World Projects To Set Foot in The Door
15:24 Understanding Gemini Configuration Parameters
15:02 #33 Is LoRA the Right Alternative to Full Fine-Tuning?
15:01 Have You Met… Your Coworker?
14:14 Introduction to Text-to-Speech Using LLMs
13:40 Top 10 Use Cases of LLMs in Aviation
13:36 Jais and Jais-chat
13:07 Sam Altman: AI's future must be democratic
13:02 Optimizing NLP Models: Fine-Tuning Strategies with MonsterAPI
12:57 Why GPT-4o-mini is a Game-Changer for AI and a Much Bigger Deal Than You Thought
12:42 Mistral Large 2: the LLM Focused on Multilinguality and Coding
12:35 Why Current LLM Evaluations Are Rigged: The Case for Better Benchmarks
12:17 Llama 3.1 is available for home AI clusters! Run 8B model with the full context
12:00 Mistral Large 2 — OpenAI is sweating now.
11:52 Testing the new Mistral Large 2 LLM
11:24 Meta's Llama 3.1
11:04 Power of DSPy: A Python Library That Will Change How You Analyze Data
11:00 This AI Paper Introduces Long-form RobustQA Dataset and RAG-QA Arena for Cross-Domain Evaluation of Retrieval-Augmented Generation Systems
10:43 Llama 3.1 405B Inference Service Deployment: Beginner’s Guide
10:31 Building enterprise solutions with LLMs
10:31 Build a Secure Local RAG Application with Chat History using LangChain, llama3, Qdrant, Redis &…
10:24 LangChain — II: Basic Chatbot Unpacked
09:38 ModelBox Now Offers Gemma 2 27B Inference
09:37 Document Parsing Using Large Language Models — With Code
09:31 10 Chatbot Best Practices for Successful Automation
09:27 Small Language Models: Efficient Solutions for Next-Gen AI Challenges
09:21 Mainframe vs Standalone LLMs — Data Privacy and Limitations
09:14 Coding with Llama 3.1, New DeepSeek Coder and Mistral Large
09:09 Creating ChatGPT based data analyst: first steps
09:01 Unlock Image Creation with OpenAI DALL-E | Create, Edit & Manipulate
08:37 Next Token Prediction Task with
08:33 Llama 3.1: Zuckerberg's Open Source Model from Meta
07:57 LLM Agents: Everything You Need to Know
07:52 Fine Tuning Meta LLAMA 3 with custom data
07:48 Fine-Tuning LLM: Enhancing Efficiency with PEFT
07:33 Building a Custom Mixture of Experts Model for our Darija: From Tokenization to Text Generation
07:20 Revolutionising Unit Test Generation with LLMs
07:19 Affordable LLMs: The Smart Choice for Production
07:17 Exploring Meta’s LLaMA 3: A Leap Forward in Large Language Models
07:15 Top Large Language Models LLMs Courses
07:02 LangChain Search AI Agent Using GPT-4o-mini
06:18 What is Retrieval Augmented Generation (RAG) and how does it work?
05:32 Discovering Gemini Chat Model
05:07 The Secret Dials of AI Creativity: Mastering top_p and Temperature
04:40 Mistral Large 2
04:13 AI Decodes Project 2025 To Expose Its Real Intent
03:36 Creating Gradient Palettes: Fine-Tuning granite-7b for Analogous Gradient Palettes
03:34 Language Models (LLMs)
03:28 Fine-Tuning Llama 3.1 405B on a Single Node
03:20 What Ifs in AI and Media: ‘Left, Right & Centre’
01:49 Why ChatGPT is a lousy assistant for data engineers
01:39 Day 855
01:38 Tokens: The Electricity Powering the LLM Revolution
01:24 Large Language Models and Their Feasibility for Natural Language Processing
00:24 How to Generate BERT Embeddings Faster and More Reliably for Large Datasets
00:23 Affordable Platforms for Fine-Tuning & Training Large Language Models (LLMs)
00:05 Understanding Anthropic Claude: The Future of AI Interaction
00:01 Unlocking the Power of Knowledge Graphs: Advanced Uses and Integration with LangChain
00:00 LAVE: Zero-shot VQA Evaluation on Docmatix with LLMs - Do We Still Need Fine-Tuning?
Wednesday, 2024-07-24
23:00 Nvidia AI Proposes ChatQA 2: A Llama3-based Model for Enhanced Long-Context Understanding and RAG Capabilities
22:59 OpenAI is set to lose $5B this year
22:44 Data Science Spotlight: Cracking the SQL Interview at Instacart (LLM Edition)
22:30 Exploring the Impact of OpenAI on Modern Technology
22:14 GenAI — LLM-Based Agents: Architecture, Best Practices, and Frameworks
22:09 GenAI — LLM Monitoring and Its Importance
21:43 NEW Named Entity Recognition — GLiNER
21:32 A Conversation with Meta Llama 3.1-405B
21:27 GraphRAG: Unlocking Contextual Understanding
20:55 Exploring the Benefits of Chat GPT for Effective Communication
20:47 A Visual Guide to Quantization
20:42 Llama 3 versus Llama 3.1 License Terms
20:07 Llama Llama-3-405B?
20:02 Important LLMs Papers for the Week from 15/07 to 21/07
19:39 LLMs for data extraction
19:23 Getting Started with LLaMA 3.1
19:12 Agents as a Microservice | Beginner's Guide to llama-agents
19:09 Announcing Llama 3.1: Meta’s Largest and Most Advanced AI Model
19:02 Meet the Fellow: Shauli Ravfogel
19:00 Understanding LLM: The Future of Language Models
18:59 iFixit CEO Kyle Wiens calls out Anthropic for disruptive crawling
18:53 Effective Methods to Mitigate Hallucinations in Large Language Models (LLMs)
18:44 Show HN: Pairing LLM code generation with traditional templates
18:30 Unlocking Generative AI Power in Spring Boot Applications
18:17 The Guide to Evaluating Retrieval-Augmented Generation (RAG) systems
18:09 Retrieval-Augmented Generation (RAG)
17:57 Data Engineer 3.0
17:49 Pushing the Boundaries of Decentralized AI: SolderAI Partners with Netmind
Original data from HuggingFace, OpenCompass and various public git repos.
Release v2024072803