Ollama, a library for running large language models such as Llama 2 locally, now supports AMD graphics cards



Ollama is a library that makes it relatively easy to run large language models (LLMs) such as Llama 2, Mistral, Vicuna, and LLaVA locally. Ollama now supports AMD graphics cards.

Ollama now supports AMD graphics cards · Ollama Blog
https://ollama.com/blog/amd-preview

Ollama is available for Windows, macOS, and Linux, and an official Docker image is also distributed. The following article provides detailed instructions on how to install Docker on Debian, start the official Ollama Docker image, and run an LLM.

Official Docker image of 'Ollama', an application that allows various chat AIs to be easily run in a local environment, has been released - GIGAZINE
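As a rough sketch, the Docker-based setup described above comes down to two commands from Ollama's Docker documentation (the container name and volume name here are arbitrary choices):

```shell
# Start the official Ollama image in the background,
# persisting downloaded models in a named volume and
# exposing the API on the default port 11434
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Open an interactive chat with Llama 2 inside the running container
docker exec -it ollama ollama run llama2
```

The first command only needs to be run once; after that, any model available in the Ollama library can be started with `docker exec -it ollama ollama run <model>`.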



Ollama supports both CPU and GPU processing. While GPU acceleration was previously limited to NVIDIA graphics cards, support for AMD graphics cards was announced on March 14, 2024. The following graphics cards are supported at the time of writing:

◆AMD Radeon RX series
・AMD Radeon RX 7900 XTX
・AMD Radeon RX 7900 XT
・AMD Radeon RX 7900 GRE
・AMD Radeon RX 7800 XT
・AMD Radeon RX 7700 XT
・AMD Radeon RX 7600 XT
・AMD Radeon RX 7600
・AMD Radeon RX 6950 XT
・AMD Radeon RX 6900 XTX
・AMD Radeon RX 6900 XT
・AMD Radeon RX 6800 XT
・AMD Radeon RX 6800
・AMD Radeon RX Vega 64
・AMD Radeon RX Vega 56

◆AMD Radeon PRO series
・AMD Radeon PRO W7900
・AMD Radeon PRO W7800
・AMD Radeon PRO W7700
・AMD Radeon PRO W7600
・AMD Radeon PRO W7500
・AMD Radeon PRO W6900X
・AMD Radeon PRO W6800X Duo
・AMD Radeon PRO W6800X
・AMD Radeon PRO W6800
・AMD Radeon PRO V620
・AMD Radeon PRO V420
・AMD Radeon PRO V340
・AMD Radeon PRO V320
・AMD Radeon PRO Vega II Duo
・AMD Radeon PRO Vega II
・AMD Radeon PRO VII
・AMD Radeon PRO SSG

◆AMD Instinct series
・AMD Instinct MI300X
・AMD Instinct MI300A
・AMD Instinct MI300
・AMD Instinct MI250X
・AMD Instinct MI250
・AMD Instinct MI210
・AMD Instinct MI200
・AMD Instinct MI100
・AMD Instinct MI60
・AMD Instinct MI50

The number of compatible graphics cards is expected to increase in the future.
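On Linux, Ollama's AMD support is built on ROCm. With the Docker setup, this means passing the AMD kernel driver devices through to the container and using the ROCm variant of the image, as sketched below based on Ollama's Docker documentation (container and volume names are arbitrary):

```shell
# Start Ollama with AMD GPU (ROCm) acceleration:
# /dev/kfd and /dev/dri are the AMD compute and render devices
docker run -d --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama ollama/ollama:rocm
```

Apart from the `--device` flags and the `:rocm` image tag, usage is the same as the standard image; models are still run with `docker exec -it ollama ollama run <model>`.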

The Ollama source code is available at the following link:

ollama/ollama: Get up and running with Llama 2, Mistral, Gemma, and other large language models.
https://github.com/ollama/ollama




in AI, Hardware, Software, Posted by log1o_hf