Is it uncensored? I would like to use it for ethical hacking
#42 opened 4 days ago
by ilovetogotomaine
Add a new language: Persian (Farsi)
#41 opened 2 months ago
by M-sh2025
Finetuning LoRA and Merging
#40 opened 5 months ago
by JVal123
Assertion Error for Pixtral-12B-2409
1
#39 opened 8 months ago
by tigercao2022
Text generation from tokens obtained from an image
#38 opened 8 months ago
by jrtorrez31337
JSON Output Correction
1
#37 opened 10 months ago
by guidolx
Request: DOI
#36 opened 10 months ago
by arashinokage
Pixtral Capabilities Regarding Input Bounding Boxes
#35 opened 11 months ago
by maurovitale
Access to model mistralai/Pixtral-12B-2409 is restricted.
1
#34 opened 11 months ago
by WANNTING
Fine tuning scripts for pixtral-12b
#33 opened about 1 year ago
by 2U1
Can the model batch infer with vLLM?
2
#30 opened about 1 year ago
by BITDDD
Save VLLM model to local disk?
1
#29 opened about 1 year ago
by narai
OverflowError: out of range integral type conversion attempted
1
#28 opened about 1 year ago
by yangqingyou37
Different results between the raw model and the demo
2
#27 opened about 1 year ago
by bluebluebluedd
Client Error: Can't load the model (missing config file)
1
#26 opened about 1 year ago
by benhachem
Update README.md
#24 opened about 1 year ago
by narai
Ollama not supported
1
#23 opened about 1 year ago
by nilzzz
Cannot run model with vLLM library - missing config.json file
5
#22 opened about 1 year ago
by JBod
Add EXL2, INT8, and/or INT4 version of the model, PLEASE!
3
#21 opened about 1 year ago
by Abdelhak
Can't run the Pixtral example inside the README because of library conflicts
2
#20 opened about 1 year ago
by Valadaro
cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
1
#19 opened about 1 year ago
by d3vnu77
Where is the GGUF format?
1
#18 opened about 1 year ago
by RameshRajamani
How many languages are supported?
2
#16 opened about 1 year ago
by xingwang1234
I am trying HF to GGUF conversion but there is no config
3
#15 opened about 1 year ago
by Batubatu
Updated README.md
1
#13 opened about 1 year ago
by drocks
Updated README.md
#12 opened about 1 year ago
by riaz
Use a local image and quantise the model for low GPU usage (with solution)
3
#11 opened about 1 year ago
by faizan4458
Quantized Versions?
21
#9 opened about 1 year ago
by StopLockingDarkmode
Help
1
#8 opened about 1 year ago
by satvikahuja
Fix llm chat function call in README
#7 opened about 1 year ago
by ananddtyagi
Passing local images to chat (workaround).
1
#6 opened about 1 year ago
by averoo
MLX / MPS users out of luck and can't use this model with vLLM
2
#4 opened about 1 year ago
by kronosprime
Update README.md
#3 opened about 1 year ago
by pranay-ar