A simple and convenient interface for using various neural network models. You can communicate with LLMs using text, voice and image input; use StableDiffusion, Kandinsky, Flux, HunyuanDiT, Lumina-T2X, Kolors, AuraFlow, Würstchen, DeepFloydIF, PixArt, CogView3-Plus and PlaygroundV2.5 to generate images; ModelScope, ZeroScope 2, CogVideoX and Latte to generate videos; StableFast3D, Shap-E and Zero123Plus to generate 3D objects; StableAudioOpen, AudioCraft and AudioLDM 2 to generate music and audio; CoquiTTS, MMS and SunoBark for text-to-speech; OpenAI-Whisper and MMS for speech-to-text; Wav2Lip for lip-sync; LivePortrait to animate an image; Roop for face swapping; Rembg to remove backgrounds; CodeFormer for face restoration; PixelOE for image pixelization; DDColor for image colorization; LibreTranslate and SeamlessM4Tv2 for text translation; Demucs and UVR for audio file separation; RVC for voice conversion. You can also view files from the outputs directory in a gallery, download the LLM and StableDiffusion models, change the application settings inside the interface and check system sensors.
The goal of the project is to create the easiest possible application for using neural network models.
## Install and Update

### Windows

1) First install all RequiredDependencies
2) Git clone https://github.com/Dartvauder/NeuroSandboxWebUI.git to any location
3) Run the Install.bat, select your version and wait for installation
4) After installation, run Start.bat and go through the initial setup
5) Wait for the application to launch and follow the link from the terminal
6) Now you can start generating. Enjoy!
* To update the application, run Update.bat
* To work with the virtual environment through the terminal, run Venv.bat
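For reference, the Windows steps above can also be run as one terminal session. This is a minimal sketch, assuming Git is installed and that you use a shell such as Git Bash that can launch .bat files:

```sh
# Minimal sketch of the Windows install steps (assumes Git Bash or similar)
git clone https://github.com/Dartvauder/NeuroSandboxWebUI.git
cd NeuroSandboxWebUI
./Install.bat   # select your version and wait for installation
./Start.bat     # then follow the link printed in the terminal
```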
### Linux

1) First install all RequiredDependencies
2) Git clone https://github.com/Dartvauder/NeuroSandboxWebUI.git to any location
3) Run the ./Install.sh, select your version and wait for installation
4) After installation, run ./Start.sh and go through the initial setup
5) Wait for the application to launch and follow the link from the terminal
6) Now you can start generating. Enjoy!
* To update the application, run ./Update.sh
* To work with the virtual environment through the terminal, run ./Venv.sh
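Likewise, the Linux steps condense to a short terminal session; a minimal sketch, assuming git and a POSIX shell are available:

```sh
# Minimal sketch of the Linux install steps (assumes git is installed)
git clone https://github.com/Dartvauder/NeuroSandboxWebUI.git
cd NeuroSandboxWebUI
./Install.sh    # select your version and wait for installation
./Start.sh      # then follow the link printed in the terminal
```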
## Acknowledgment

First of all, I want to thank the developers of PyCharm and GitHub. With the help of their applications, I was able to create and share my code.
Third-party:

* `gradio` - https://github.com/gradio-app/gradio
* `transformers` - https://github.com/huggingface/transformers
* `auto-gptq` - https://github.com/AutoGPTQ/AutoGPTQ
* `autoawq` - https://github.com/casper-hansen/AutoAWQ
* `exllamav2` - https://github.com/turboderp/exllamav2
* `coqui-tts` - https://github.com/idiap/coqui-ai-TTS
* `openai-whisper` - https://github.com/openai/whisper
* `torch` - https://github.com/pytorch/pytorch
* `cuda-python` - https://github.com/NVIDIA/cuda-python
* `gitpython` - https://github.com/gitpython-developers/GitPython
* `diffusers` - https://github.com/huggingface/diffusers
* `llama.cpp-python` - https://github.com/abetlen/llama-cpp-python
* `stable-diffusion-cpp-python` - https://github.com/william-murray1204/stable-diffusion-cpp-python
* `audiocraft` - https://github.com/facebookresearch/audiocraft
* `xformers` - https://github.com/facebookresearch/xformers
* `demucs` - https://github.com/facebookresearch/demucs
* `libretranslatepy` - https://github.com/argosopentech/LibreTranslate-py
* `rembg` - https://github.com/danielgatis/rembg
* `suno-bark` - https://github.com/suno-ai/bark
* `IP-Adapter` - https://github.com/tencent-ailab/IP-Adapter
* `PyNanoInstantMeshes` - https://github.com/vork/PyNanoInstantMeshes
* `CLIP` - https://github.com/openai/CLIP
* `rvc-python` - https://github.com/daswer123/rvc-python
* `audio-separator` - https://github.com/nomadkaraoke/python-audio-separator
* `pixeloe` - https://github.com/KohakuBlueleaf/PixelOE
* `k-diffusion` - https://github.com/crowsonkb/k-diffusion
* `open-parse` - https://github.com/Filimoa/open-parse
* `AudioSR` - https://github.com/haoheliu/versatile_audio_super_resolution
* `sd_embed` - https://github.com/xhinker/sd_embed
* `triton` - https://github.com/triton-lang/triton/