Furkan Gözükara

MonsterMMORPG

AI & ML interests

Check out my YouTube channel SECourses for Stable Diffusion tutorials. They will help you tremendously with every topic.

Posts

Post
Detailed Comparison of JoyCaption Alpha One vs JoyCaption Pre-Alpha with 10 Amazing Images in Different Styles. I think JoyCaption Alpha One is currently the very best image captioning model for model training. It works very fast and requires as little as 8.5 GB of VRAM.

Where To Download And Install

You can download our app from here: https://www.patreon.com/posts/110613301

1-click installers are provided for Windows, RunPod and Massed Compute.
The official app, where you can try it, is here: fancyfeast/joy-caption-alpha-one

Our App Has The Following Features

Auto downloads meta-llama/Meta-Llama-3.1-8B into your Hugging Face cache folder, and other necessary models into the installation folder

Use 4-bit quantization: uses 8.5 GB VRAM in total

Overwrite existing caption file

Append new caption to existing caption

Remove newlines from generated captions

Cut off at last complete sentence

Discard repeating sentences

Don’t save processed image

Caption Prefix

Caption Suffix

Custom System Prompt (Optional)

Input Folder for Batch Processing

Output Folder for Batch Processing (Optional)

Fully supported multi-GPU captioning: set GPU IDs (comma-separated, e.g., 0,1,2)

Batch size for batch captioning. (A minimal sketch of how some of these options could be implemented is shown right after this feature list.)
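As a rough illustration of the 4-bit quantization option, this is what 4-bit loading of the language model typically looks like with Hugging Face Transformers and bitsandbytes. This is a minimal sketch under those assumptions, not our app's actual code, and the full JoyCaption pipeline also involves image-side components that are not shown here.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Minimal sketch (not the app's real code): load the LLM in 4-bit (NF4) to cut VRAM usage.
# from_pretrained() downloads the weights into the Hugging Face cache folder on first use,
# similar to the auto-download behaviour described above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B",
    quantization_config=bnb_config,
    device_map="auto",
)
```

And here is a minimal, hypothetical sketch of how a few of the caption post-processing options listed above (prefix, suffix, newline removal, cutting at the last complete sentence, and discarding repeating sentences) could be implemented. All function and parameter names are made up for the example; they are not the app's real API.

```python
import re

def postprocess_caption(
    caption: str,
    prefix: str = "",
    suffix: str = "",
    remove_newlines: bool = True,
    cut_at_last_sentence: bool = True,
    discard_repeats: bool = True,
) -> str:
    """Hypothetical example of simple post-processing for a generated caption."""
    text = caption.strip()

    # Remove newlines so the caption becomes a single line
    if remove_newlines:
        text = " ".join(text.split())

    # Split into sentences on ., ! or ? followed by whitespace
    sentences = re.split(r"(?<=[.!?])\s+", text)

    # Discard sentences that repeat an earlier sentence verbatim
    if discard_repeats:
        seen, unique = set(), []
        for s in sentences:
            key = s.strip().lower()
            if key and key not in seen:
                seen.add(key)
                unique.append(s.strip())
        sentences = unique

    # Cut off at the last complete sentence (drop a trailing fragment)
    if cut_at_last_sentence and sentences and not sentences[-1].rstrip().endswith((".", "!", "?")):
        sentences = sentences[:-1]

    text = " ".join(sentences)

    # Add the optional caption prefix and suffix
    return f"{prefix}{text}{suffix}"


# Example usage
print(postprocess_caption(
    "A photo of a cat. The cat sits on a sofa. The cat sits on a sofa. It is",
    prefix="photo of ohwx cat, ",
))
```

For batch processing, a function like this would simply be applied to every caption generated from the input folder before the result is written to the output folder.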
Post
I have done extensive multi-GPU FLUX Full Fine-Tuning / DreamBooth training experiments on RunPod using 2x A100 80 GB GPUs (PCIe), since this was commonly asked of me.

Full article here: https://medium.com/@furkangozukara/multi-gpu-flux-fu

Image 1
Image 1 shows that just the first part of the Kohya GUI installation took 30 minutes on such a powerful machine, on a very expensive Secure Cloud pod (3.28 USD per hour).
There was also a part 2, so the installation alone took a very long time.
On Massed Compute, it would take around 2-3 minutes.
This is why I suggest you use Massed Compute over RunPod: RunPod machines have terrible hard disk speeds, and getting a good one is a lottery.



Images 2, 3 and 4
Image 2 shows the speed of our very best FLUX Fine-Tuning config, shared below, when doing 2x multi-GPU training.
https://www.patreon.com/posts/kohya-flux-fine-112099700
The used config name is: Quality_1_27500MB_6_26_Second_IT.json
Image 3 shows the VRAM usage of this config when doing 2x multi-GPU training.
Image 4 shows the GPUs of the pod.
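For context on how a 2x multi-GPU run like this is typically launched under the hood, here is a minimal, hypothetical sketch using Hugging Face Accelerate from Python. The training script name (flux_train.py from kohya-ss/sd-scripts) and the idea that the GUI turns the saved JSON config into such a command are assumptions for illustration; treat it only as a sketch of the multi-GPU launch pattern, not as the exact command the Kohya GUI runs.

```python
import subprocess

# Hypothetical sketch: how a 2x multi-GPU training job is typically launched
# with Hugging Face Accelerate. The training script name is an assumption;
# the Kohya GUI builds the real command (and all script arguments) from the
# saved Quality_1_27500MB_6_26_Second_IT.json config.
cmd = [
    "accelerate", "launch",
    "--multi_gpu",              # enable distributed (DDP) launching
    "--num_processes", "2",     # one process per GPU -> 2x A100
    "--gpu_ids", "0,1",         # which GPUs to use, comma-separated
    "--mixed_precision", "bf16",
    "flux_train.py",            # assumed FLUX fine-tuning script (kohya-ss/sd-scripts)
    # ... training arguments taken from the saved config would go here ...
]
subprocess.run(cmd, check=True)
```

For a single-GPU run like the one in Images 5 and 6 below, the same kind of launch would simply drop --multi_gpu and use --num_processes 1.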


Images 5 and 6
Image 5 shows the speed of our very best FLUX Fine-Tuning config, shared below, when doing single-GPU training.
https://www.patreon.com/posts/kohya-flux-fine-112099700
The used config name is: Quality_1_27500MB_6_26_Second_IT.json
Image 6 shows how much VRAM this setup used.


Images 7 and 8
Image 7 shows the speed of our very best FLUX Fine-Tuning config, shared below, when doing single-GPU training with Gradient Checkpointing disabled.
https://www.patreon.com/posts/kohya-flux-fine-112099700
The used config name is: Quality_1_27500MB_6_26_Second_IT.json
Image 8 shows how much VRAM this setup used.
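Gradient Checkpointing trades compute for memory: intermediate activations are recomputed during the backward pass instead of being kept in VRAM, so disabling it makes each step faster but increases VRAM usage, which is what Images 7 and 8 compare against the earlier runs. Here is a minimal, generic PyTorch sketch of that trade-off; it is not the Kohya / FLUX training code, and the layer sizes and tensor shapes are placeholders.

```python
import torch
from torch.utils.checkpoint import checkpoint

# Generic illustration of gradient checkpointing (not Kohya / FLUX code).
block = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
)
x = torch.randn(8, 1024, requires_grad=True)

# Checkpointing disabled: activations inside the block are stored for backward,
# which is faster per step but uses more VRAM.
y_no_ckpt = block(x)
y_no_ckpt.sum().backward()

# Checkpointing enabled: activations are recomputed during backward,
# which saves VRAM at the cost of extra compute (slower steps).
x.grad = None
y_ckpt = checkpoint(block, x, use_reentrant=False)
y_ckpt.sum().backward()
```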


....