Artificial intelligence

Running local AI models on an Apple M4 chip with 24 GB of memory

May 11, 2026, 04:08 · 20 views · 5 min read

Artificial intelligence (AI) workloads are increasingly moving toward running locally, on users' own devices. With Apple's new M4 chip configured with 24 GB of memory, this becomes considerably more practical. This article covers the main steps for running local AI models on the M4, techniques for improving performance, and practical examples.

Key advantages of the M4 chip

The M4 is built on the Apple Silicon architecture and ships with high-performance CPU and GPU cores. Its 24 GB of unified memory makes it possible to keep the parameters of a large model loaded all at once. Together, this combination provides the following capabilities:

  • High-speed inference – GPU-core acceleration makes text generation and image recognition noticeably faster (a quick check is sketched below).
  • Multitasking – several models can run at the same time, for example an LLM (large language model) together with an embedding model.
  • Energy savings – Apple Silicon chips hold an edge over competitors in energy efficiency.
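
As a minimal sketch of tapping that GPU acceleration from Python, the check below uses PyTorch's MPS (Metal Performance Shaders) backend; the tensor sizes are illustrative only:

import torch

# MPS is PyTorch's backend for the Apple Silicon GPU
if torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")  # fallback if MPS is unavailable

# Move a tensor to the selected device and run a quick matmul
x = torch.randn(1024, 1024, device=device)
y = x @ x.T
print(device, y.shape)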

Choosing and loading a local model

When choosing a local model, the first things to consider are the model's size and its memory requirements. With 24 GB of memory, the following classes of model fit comfortably (a rough memory estimate follows the list):

  • LLMs with 7B–13B parameters (e.g., LLaMA 7B, Mistral 7B).
  • Embedding models (Sentence-Transformers, MiniLM).
  • Vision models (CLIP, Stable Diffusion 1.5 – the smaller variants).
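
As a back-of-envelope estimate (my own arithmetic, not a benchmark from the article): weight memory is roughly parameter count times bytes per weight, so a 7B model needs about 14 GB in float16 and about 3.5 GB at 4-bit, leaving headroom in 24 GB for the OS, KV cache, or a second model:

def model_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight memory only; ignores KV cache and activations."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"7B @ {bits}-bit: ~{model_memory_gb(7, bits):.1f} GB")
# 7B @ 16-bit: ~14.0 GB
# 7B @  8-bit: ~7.0 GB
# 7B @  4-bit: ~3.5 GB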

Models are loaded with the Hugging Face libraries or from ggml-format files. On the M4, it is recommended to install builds of the torch and transformers libraries that are optimized for the ARM64 architecture.
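
A minimal loading sketch with transformers, assuming a 7B checkpoint you have access to (the model id below is only an example; substitute your own):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # example id; replace with your model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # float16 halves memory vs. float32 (~14 GB for 7B)
).to("mps")                     # run on the Apple GPU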


Optimization techniques

Several techniques are available to improve a model's inference performance:

  • Quantization – converting the model to an 8-bit or 4-bit format cuts memory use and increases speed. Libraries such as GPTQ simplify the process (bitsandbytes is often mentioned as well, but it targets CUDA GPUs rather than Apple Silicon).
  • Core ML – converting the model with Core ML lets it take maximal advantage of GPU acceleration in the Apple ecosystem (a conversion sketch follows this list).
  • Batching – processing several requests at once balances the CPU/GPU load.
  • Lazy loading – loading only the layers that are needed reduces memory pressure.
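
A minimal Core ML conversion sketch with coremltools. It uses a tiny stand-in module, because converting a full LLM requires flexible shapes and KV-cache handling that are out of scope here; the names, shapes, and file name are illustrative:

import numpy as np
import torch
import coremltools as ct

# Stand-in for a real model: any torch.nn.Module in eval mode
model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU()).eval()
example = torch.randn(1, 64)

# Core ML conversion takes a traced (TorchScript) module
traced = torch.jit.trace(model, example)
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="input", shape=(1, 64), dtype=np.float32)],
)
mlmodel.save("model.mlpackage")  # executed by Core ML on CPU/GPU/Neural Engine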

Worked example: running LLaMA 7B on the M4

The following steps will get the LLaMA 7B model up and running on the M4:

  1. Create a suitable environment: conda create -n m4ai python=3.10, then conda activate m4ai.
  2. Install the required libraries: pip install torch torchvision torchaudio and pip install transformers (on Apple Silicon the default PyTorch wheels already include MPS support).
  3. Quantize the model: produce a 4-bit version, for example with GPTQ or a ggml-format conversion.
  4. Convert to Core ML: use the coremltools.convert function to turn the model into a .mlmodel/.mlpackage file.
  5. Write the inference script: send prompts and collect results through torch or coremltools, as in the sketch below.
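
A minimal inference sketch for step 5, reusing the loading pattern shown earlier; the model id, prompt, and generation settings are illustrative:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huggyllama/llama-7b"  # example id; use the checkpoint you prepared
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16
).to("mps")

prompt = "Explain what unified memory is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to("mps")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))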

The result: several requests can be processed within a second, with memory use staying around 20 GB.

Conclusion

With 24 GB of memory, the Apple M4 chip is a convenient platform for running local AI models. By choosing the right model and applying quantization and Core ML optimization, high performance is achievable. This opens new possibilities for building self-hosted AI solutions, not only for developers but also for researchers and small startups.

Source: Hacker News
#AI #Apple M4 #Local models #Machine learning #Inference
Discuss on Telegram