A scientific paper, “Output Manipulation via LoRA for Generative AI” by I. Culafic et al., was presented at the 23rd International Symposium INFOTEH-JAHORINA, 20-22 March 2024. Training the prediction models took around six hours on an NVIDIA RTX 4090 GPU with 24 GB of VRAM. This research will serve as a basis for future experiments on HPC resources. The paper is published on IEEE Xplore at: https://ieeexplore.ieee.org/document/10495995
ABSTRACT – Generative Artificial Intelligence has witnessed a surge in popularity in recent years, characterized by the emergence of groundbreaking models like DALL-E 2,
Midjourney, and Stable Diffusion, which have spearheaded advancements in this technological domain. This research aims to harness the potential of Stable Diffusion and its extensions to train a LoRA (Low-Rank Adaptation) model that generates images closely resembling the original subject matter from a predetermined amount of example data. The primary objective of this research is to demonstrate the capabilities of Stable Diffusion and generative AI in a broader context: exploring the possibilities offered by open-source frameworks, highlighting the challenges associated with poorly organized training data and the advantages of properly organized and edited datasets, conducting a comparative analysis of diverse diffusion models, and examining various LoRA strength settings. This research also aims to compare the results of larger training parameters on both small and relatively large training datasets, in order to determine whether overfitting (over-training on one specific subject) is more prevalent with smaller or larger datasets.
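As a rough illustration of the LoRA mechanism and the strength parameter mentioned in the abstract, the following Python sketch shows how a low-rank update is blended into a frozen weight matrix. The dimensions, rank, and scaling convention here are illustrative assumptions, not the configuration used in the paper.

import numpy as np

# Minimal LoRA sketch: instead of fine-tuning the full weight matrix W,
# two small matrices A (r x d_in) and B (d_out x r) are trained, and
# their product forms a low-rank update to W.
d_out, d_in, r = 768, 768, 4          # hypothetical layer size and LoRA rank
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))     # frozen pretrained weights
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection (zero-init)

def effective_weights(strength, alpha=4.0):
    # `strength` plays the role of the user-facing LoRA strength:
    # 0.0 reproduces the base model, 1.0 applies the full learned
    # adaptation, and intermediate values blend between the two.
    return W + strength * (alpha / r) * (B @ A)

# At strength 0 the adapted layer is identical to the pretrained one.
assert np.allclose(effective_weights(0.0), W)

Sweeping `strength` between 0 and 1 is how the varying LoRA strength examples described above are typically produced: the same trained adapter is merged into the base model at different scales to trade off fidelity to the trained subject against the base model's general behavior.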