
Artificial intelligence (AI) workloads are increasingly moving onto users' local devices. Apple's new M4 chip, configured with 24 GB of memory, widens this possibility considerably. This article walks through the main steps for running local AI models on the M4, techniques for improving performance, and practical examples.
The M4 is built on the Apple Silicon architecture, with high-performance CPU and GPU cores, and its 24 GB of LPDDR5X unified memory allows a large set of model parameters to be held in memory at once. One way to put this to work is to serve a model locally from LM Studio and point an OpenAI-compatible client at it. A provider configuration for such a local endpoint might look like this:
{
  "providers": {
    "lmstudio": {
      "baseUrl": "http://localhost:1234/v1",
      "api": "openai-completions",
      "apiKey": "lm-studio",
      "models": [
        {
          "id": "qwen3.5-9b@q4_k_s",
          "reasoning": true,
          "compat": { "thinkingFormat": "qwen-chat-template" }
        }
      ]
    }
  }
}
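Because the local server speaks the OpenAI wire format, any HTTP client can talk to it. The sketch below builds a chat-completion request with only the standard library; the model id and base URL mirror the config above, and the `send_chat` helper name is our own, not part of any library.

```python
import json
import urllib.request

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    # Request body in the OpenAI chat-completions shape.
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

def send_chat(base_url: str, api_key: str, body: dict) -> dict:
    # POSTs to /chat/completions; only works while the local server is running.
    req = urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

body = build_chat_request("qwen3.5-9b@q4_k_s",
                          "Summarize GGUF quantization in one line.")
```

To actually run a query, call `send_chat("http://localhost:1234/v1", "lm-studio", body)` while LM Studio is serving the model.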
When choosing a local model, the first considerations are its size and memory requirements; with 24 GB available, mid-sized quantized models (roughly 7-14B parameters at 4-bit) fit comfortably. The chosen model is then registered in the opencode configuration:
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "lmstudio": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "LM Studio (local)",
      "options": {
        "baseURL": "http://127.0.0.1:1234/v1"
      },
      "models": {
        "qwen3.5-9b@q4_k_s": {
          "name": "Qwen 3.5 9B Q4_K_S",
          "tools": true,
          "context_length": 131072,
          "max_tokens": 32768
        }
      }
    }
  },
  "model": "lmstudio/qwen3.5-9b@q4_k_s"
}
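As a rough check on whether a given model fits the 24 GB budget, one can estimate the weight footprint from the parameter count and the quantization width. A minimal sketch; the 1.2x overhead factor for runtime buffers and KV cache is an assumption, not a measured value:

```python
def estimate_model_memory_gb(n_params: float, bits_per_weight: float,
                             overhead: float = 1.2) -> float:
    """Rough resident-memory estimate for a quantized model.

    n_params:        total parameter count (e.g. 9e9 for a 9B model)
    bits_per_weight: average bits per weight after quantization
                     (Q4_K_S averages roughly 4.5 bits)
    overhead:        assumed multiplier for runtime buffers and KV cache
    """
    weight_bytes = n_params * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 9B model at ~4.5 bits/weight lands around 6 GB, well inside 24 GB.
print(round(estimate_model_memory_gb(9e9, 4.5), 1))
```

The same arithmetic shows why quantization matters: a 7B model at full fp16 (16 bits/weight) already approaches 17 GB, leaving little headroom for the OS and other applications.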
Model weights can be fetched with the Hugging Face libraries or downloaded directly as GGML/GGUF files. On the M4, it is recommended to install the ARM64-optimized builds of torch and transformers.
Once wired up, the local model can handle everyday coding tasks. A sample session:

me:
run mix credo --strict and suggest how to solve it, don't edit
Qwen:
Credo Warnings
There are 4 warnings in your test files. The issue is using length/1 to check if lists are non-empty. Credo recommends comparing against an empty list instead.
Warnings:
1. test/jola_dev_web/controllers/blog_controller_test.exs:11
- assert length(conn.assigns.posts) > 0
2. test/jola_dev_web/controllers/blog_controller_test.exs:22
- assert length(conn.assigns.posts) > 0
3. test/jola_dev/blog_test.exs:10
- assert length(posts) > 0
4. test/jola_dev/blog_test.exs:23
- assert length(tags) > 0
How to Fix
Replace each length(list) > 0 with list != []:
blog_controller_test.exs:
# Line 11
assert conn.assigns.posts != []
# Line 22
assert conn.assigns.posts != []
blog_test.exs:
# Line 10
assert posts != []
# Line 23
assert tags != []
This is more idiomatic in Elixir and avoids the overhead of computing the list length.
Several techniques exist for speeding up the model's inference; before getting to those, here is another example of the local model at work, this time resolving a rebase conflict:
I can see the conflict. Here's what's happening:
HEAD (your current branch):
- sentry: 13.0.1
- tailwind: 0.3.1
Other side (commit being rebased):
- sentry: 12.0.3
- tailwind: 0.4.1
You have divergent dependency bumps. Here are your options:
Option 1: Take both updates (recommended)
- Keep sentry 13.0.1 from HEAD
- Keep tailwind 0.4.1 from the other side
- This gives you the latest of both deps
Option 2: Take only sentry update
- Keep sentry 13.0.1
- Keep tailwind 0.3.1
Option 3: Take only tailwind update
- Keep sentry 12.0.3
- Keep tailwind 0.4.1
Which approach do you want to take?
Libraries such as bitsandbytes or GPTQ simplify this quantization process. The following steps get a LLaMA 7B model running quickly on the M4:
> Thinking: The user wants to take both updates - sentry 13.0.1 and tailwind 0.4.1. I need to resolve the conflict by keeping the HEAD version for sentry and the other side version for tailwind. I'll use git add to stage the resolved file.
----
# Stage resolved mix.lock and continue rebase
$ git add mix.lock && git rebase --continue
Vim: Warning: Output is not to a terminal
Vim: Warning: Input is not from a terminal
"~/workspace/mixtli/.git/COMMIT_EDITMSG" 32L, 1047B
Bump tailwind from 0.3.1 to 0.4.1
Bumps [tailwind](https://github.com/phoenixframework/tailwind) from 0.3.1 to 0.4.1.
- [Changelog](https://github.com/phoenixframework/tailwind/blob/main/CHANGELOG.md)
- [Commits](https://github.com/phoenixframework/tailwind/compare/v0.3.1...v0.4.1)
1. Create an environment: conda create -n m4ai python=3.10, then conda activate m4ai.
2. Install the libraries: pip install torch torchvision torchaudio, then pip install transformers bitsandbytes. (On Apple Silicon the default PyPI wheels already target ARM64, so the Linux-only +cpu extra index is unnecessary.)
3. Build a 4-bit version of the model with bitsandbytes.
4. Convert the model to .mlmodel format with the coremltools.convert function.
5. Send prompts and collect results through torch or coremltools.
As a result, several requests can be processed within a second, with memory use staying around 20 GB.
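The steps above also explain why memory, not raw compute, dominates local-LLM speed: each generated token must stream essentially all the weights through memory, so decode throughput is bounded by bandwidth divided by model footprint. A back-of-envelope sketch; the ~120 GB/s bandwidth figure for the base M4 and the 6 GB model size are illustrative assumptions:

```python
def max_tokens_per_second(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Upper bound on decode speed: every token reads all weights once,
    so token rate <= memory bandwidth / model footprint."""
    return bandwidth_gb_s / model_size_gb

# Assumed numbers: ~120 GB/s bandwidth on the base M4, ~6 GB for a
# 9B model quantized to ~4.5 bits/weight.
print(round(max_tokens_per_second(120, 6.0)))  # -> 20
```

This is why halving the bits per weight roughly doubles the achievable token rate on the same hardware.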
The Apple M4 with 24 GB of memory is a convenient platform for running local AI models. By choosing the right model and applying quantization and CoreML optimization techniques, good performance is achievable. This opens new possibilities for building self-hosted AI solutions, not only for developers but also for researchers and small startups.