author     Alphara <42233094+xAlpharax@users.noreply.github.com>  2023-07-04 23:57:11 +0300
committer  GitHub <noreply@github.com>  2023-07-04 23:57:11 +0300
commit     86630d6715ec2b6396c1e4335e28c4ad30f9aef8 (patch)
tree       1ef8c7c570613852177a3ac6ad488901d3f39902
parent     fe7d87de108fae98c93381a1db4c93ec310b0ca2 (diff)
Update README.md
-rw-r--r--  README.md  12
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/README.md b/README.md
index db6bf3a..f8c5367 100644
--- a/README.md
+++ b/README.md
@@ -4,15 +4,15 @@ Neural Style Transfer done from the CLI using a VGG backbone and presented as an
Weights can be downloaded from [here](https://m1.afileditch.ch/ajjMsHrRhnikrrCiUXgY.pth). The downloaded file should be placed in `./weights/`; anything placed there is ignored when pushing, as specified in `./.gitignore`. Update: alternatively, if the `./weights/` directory is empty, `./neuralart.py` will automatically download publicly available VGG19 weights for the user.
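For the curious, here is a minimal sketch of what that fallback might look like, assuming PyTorch and torchvision are installed; the directory check and the `vgg19.pth` file name are illustrative, and the actual logic in `./neuralart.py` may differ:
```python
# Hypothetical sketch of the weights fallback, not the exact code in ./neuralart.py.
import os
import torch
from torchvision.models import vgg19, VGG19_Weights

WEIGHTS_DIR = "./weights"
os.makedirs(WEIGHTS_DIR, exist_ok=True)

if not os.listdir(WEIGHTS_DIR):
    # Download the publicly available ImageNet-pretrained VGG19 weights
    # via torchvision and cache them locally for later runs.
    model = vgg19(weights=VGG19_Weights.IMAGENET1K_V1)
    torch.save(model.state_dict(), os.path.join(WEIGHTS_DIR, "vgg19.pth"))
```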
-More in depth information about Neural Style Transfer (NST) can be found in this great [paper](https://arxiv.org/abs/1705.04058). Make sure to check [Requirements](#requirements) and [Usage](#usage).
+More in-depth information about Neural Style Transfer (NST) can be found in this great [paper](https://arxiv.org/abs/1705.04058). Make sure to check [Requirements](#requirements) and [Usage](#usage).
### Why use this in 2023?
Because Style Transfer hasn't changed drastically in terms of actual results in recent years. I personally find a certain beauty in inputting a style image and a content image rather than a well-curated prompt with a dozen switches. Consider this repo as a quick and simple ***just works*** solution that can run effectively on both CPU and GPU.
-I developed this tool as a means to obtain fancy images and visuals for me and my friends. It somehow grew into something bigger that is actually usable, so much so that I got to integrate it in a workflow in conjunction with [Stable Diffusion](https://github.com/CompVis/stable-diffusion) (see also [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui)).
+I developed this tool as a means to obtain fancy images and visuals for me and my friends. It somehow grew into something bigger that is actually usable, so much so that I got to integrate it into a workflow in conjunction with [Stable Diffusion](https://github.com/CompVis/stable-diffusion) (see also [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui)).
-### Requirements
+## Requirements
Clone the repository:
@@ -54,11 +54,11 @@ A helper script is also available to run `./stylize.sh` for each distinct pair o
./all.sh
```
-Moreover, `./all.sh` is aware of the a;ready rendered mp4 files in the current working directory and will skip stylizing the combinations that are already present.
+Moreover, `./all.sh` is aware of the already rendered mp4 files and will skip stylizing the combinations that are already present.
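Purely for illustration, the skip check could be expressed in Python roughly as below (the real `./all.sh` is a shell script); the `Content/` and `Style/` input directories, the stem-based output naming, and the assumption that `./stylize.sh` takes a content path and a style path as arguments are guesses, not the script's actual interface:
```python
# Illustrative Python version of the skip check; the actual ./all.sh is a shell script.
from itertools import product
from pathlib import Path
import subprocess

# Hypothetical input locations; adjust to wherever the content/style images live.
contents = sorted(Path("Content").glob("*.png"))
styles = sorted(Path("Style").glob("*.png"))

for content, style in product(contents, styles):
    out = Path(f"{content.stem}_in_{style.stem}.mp4")
    if out.exists():
        continue  # this combination is already rendered, skip it
    # Assumes ./stylize.sh accepts a content image and a style image as arguments.
    subprocess.run(["./stylize.sh", str(content), str(style)], check=True)
```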
### Output videos/images and temporary files
-If, at any point, curious of the individual frames that comprise the generated `./content_in_style.mp4` check `./Output/` for PNG images with exactly that. Keep in mind that these files get removed and overwritten each time ./stylize.sh is called (this is also why running multiple instances of the script in `./stylize.sh` is advised against; if you need something batched/automated, try `./all.sh`)
+If, at any point, you are curious about the individual frames that make up the generated `./content_in_style.mp4`, check `./Output/` for PNG images of exactly that. Keep in mind that these files get removed and overwritten each time `./stylize.sh` is called (this is also why running multiple instances of `./stylize.sh` is advised against; if you need to batch/automate the process, try `./all.sh`).
The `./images.npy` file contains raw numpy array data generated by `./neuralart.py`; `./renderer.py` processes it to produce the PNG images in the `./Output` directory.
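As a rough sketch of that step, assuming the array is shaped `(N, H, W, 3)` and already holds displayable 0-255 values; the actual `./renderer.py` may store and process the data differently:
```python
# Rough sketch of turning ./images.npy into PNG frames; assumes an (N, H, W, 3)
# array with values already in the 0-255 range. The real ./renderer.py may differ.
import numpy as np
from pathlib import Path
from PIL import Image

frames = np.load("images.npy")
out_dir = Path("Output")
out_dir.mkdir(exist_ok=True)

for i, frame in enumerate(frames):
    Image.fromarray(frame.astype(np.uint8)).save(out_dir / f"frame_{i:04d}.png")
```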
@@ -66,6 +66,6 @@ Considering this workflow, `./clear_dir.sh` removes temporary files each time a
## Contributing
-Any sort of help, especially regarding the QoS of the project, is appreciated. Feel free to open an issue in the **Issues** tab and discuss the possible changes there. As of now, *neural-art* would be in great need of a clean and friendly arguments handler (i.e. like the ones the `argparse` python package provides) in order to provide a cleaner interface for working with `./neuralart.py` and/or `./stylize.sh`.
+Any sort of help, especially regarding the QoS (Quality of Service) of the project, is appreciated. Feel free to open an issue in the **Issues** tab and discuss the possible changes there. As of now, **neural-art** is in great need of a clean and friendly arguments handler (e.g. the one the `argparse` python package provides) in order to offer a cleaner interface for `./neuralart.py` and/or `./stylize.sh`.
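To make that suggestion concrete, here is a rough sketch of what an `argparse`-based front end for `./neuralart.py` could look like; every flag name and default below is hypothetical and only meant to outline the shape of the interface:
```python
# Hypothetical argparse front end; flag names are illustrative, not the current CLI.
import argparse

def parse_args():
    parser = argparse.ArgumentParser(
        description="Neural Style Transfer from the CLI using a VGG backbone."
    )
    parser.add_argument("--content", required=True, help="path to the content image")
    parser.add_argument("--style", required=True, help="path to the style image")
    parser.add_argument("--output", default="images.npy", help="where to write the raw frames")
    parser.add_argument("--steps", type=int, default=500, help="number of optimization steps")
    parser.add_argument("--device", default="cuda", choices=["cpu", "cuda"], help="compute device")
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    print(args)
```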
Thank you. Happy neural-art-ing!