Prompt Params Panel

This panel handles the prompt parameters used to generate an image. A “prompt” here refers to the entire set of data used to generate an image.

This panel is located under the Texturaizer Tab of the 3D viewport. If the tab is not visible, try pressing “N” while hovering over the 3D viewport.



Simple UI

The Simplified UI contains everything you need to get started with Texturaizer.

See Advanced UI

Simplified Prompts Panel

Select Preset

A dropdown to select from preset model types. Selecting a preset will configure certain settings (most of which are only accessible in the Advanced UI). The selected preset is displayed next to the button.

Workflow and Model

ComfyUI uses “workflows”: node-based generation graphs that determine how an image is generated. These workflows are customizable within the ComfyUI Web UI node interface. The plugin comes with a set of default workflows. Currently, custom workflows cannot be uploaded into Blender, but they can utilize Texturaizer data through the Web UI (look out for an update on this soon!).

Workflow

Select the workflow to use for generation. Some workflows require additional ComfyUI dependencies. See the ‘Workflows’ page to learn more. The “SD Starter” workflow does not require additional dependencies and can be used with Stable Diffusion 1.5 and SDXL models. Workflows are defined as .json files and can be found under resources/workflows of the add-on directory.
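Since workflows are plain JSON files, the bundled ones can be inspected or listed directly. The snippet below is only an illustrative sketch; the path assumes the resources/workflows layout described above, resolved relative to wherever the add-on is installed on your system.

```python
import json
from pathlib import Path

# Illustrative sketch: list the workflow .json files bundled with the add-on.
# Adjust ADDON_DIR to wherever the add-on is installed on your system.
ADDON_DIR = Path(".")  # placeholder path
workflow_dir = ADDON_DIR / "resources" / "workflows"

for wf_path in sorted(workflow_dir.glob("*.json")):
    with open(wf_path, "r", encoding="utf-8") as f:
        workflow = json.load(f)
    print(f"{wf_path.stem}: {len(workflow)} top-level entries")
```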

Model

This is the AI model to use for the generation, typically large .ckpt or .safetensors files found in the models/checkpoints folder of ComfyUI. To access models, you must connect to the ComfyUI server. For more information on models and downloads, see the ‘Models’ page.
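As background on why the server connection is required: the model list is read from ComfyUI itself. The sketch below (not Texturaizer's actual code) shows one way to ask a running ComfyUI server which checkpoints it can see via its /object_info endpoint; the address is ComfyUI's default and may differ on your machine.

```python
import json
import urllib.request

# Illustrative sketch: query a running ComfyUI server for its available checkpoints.
SERVER = "http://127.0.0.1:8188"  # ComfyUI's default address; change if yours differs

with urllib.request.urlopen(f"{SERVER}/object_info") as resp:
    info = json.load(resp)

# The first entry of the "ckpt_name" input lists the checkpoint files ComfyUI found
# in its models/checkpoints folder.
checkpoints = info["CheckpointLoaderSimple"]["input"]["required"]["ckpt_name"][0]
print("\n".join(checkpoints))
```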

FLUX

If a Flux preset mode is selected (Flux Dev or Flux Schnell), three additional models are required: Clip 1, Clip 2, and VAE.

  • Clip 1 and 2: Flux models do not have the text encoder baked in, so two additional text encoder models (often referred to as “Clip” models) are required for text conditioning.
  • VAE (Variational Autoencoder): The VAE, used to encode and decode images to and from latent space, is also not embedded within Flux models, so it needs to be specified separately during the generation process.

NOTE: Make sure to switch your workflow over to “Texturaizer Flux”.

NOTE: To use the Texturaizer Flux workflow, you must install this node pack into your ComfyUI custom_nodes folder: https://github.com/city96/ComfyUI-GGUF, or install it as “ComfyUI-GGUF” via the ComfyUI Manager.


Prompts

Prompts guide the generation process. For more information on creating effective prompts, see the ‘Effective Prompting’ page.

To the right of each prompt is an Expand Prompt Editor button that opens a custom Texturaizer text editor pop-up, providing a better typing experience with shortcuts for advanced prompt editing.

Positive Prompt

Describe your image. Separate terms or phrases with a comma “,”.

Negative Prompt

Excludes certain concepts from the generation. It’s recommended to generate without a negative prompt first and add one if necessary (e.g., for “a car driving along a cliffside,” a negative prompt could be “blue”).


Seed Row

  • Seed: An integer used to generate the deterministic random noise at the start of the diffusion process. Changing the seed produces different images (see the sketch below).
  • Randomize Seed: Generates a new random seed value.
  • Auto: Randomizes the seed after each generation. Combined with Auto Export, this creates continuous generations.
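To make “deterministic random noise” concrete, the toy PyTorch sketch below shows that the same seed always reproduces the same starting noise, while a different seed does not. It is a simplified illustration, not the plugin's actual sampling code.

```python
import torch

def starting_noise(seed: int, shape=(1, 4, 64, 64)) -> torch.Tensor:
    # Seed a dedicated generator so the noise is fully reproducible.
    gen = torch.Generator().manual_seed(seed)
    return torch.randn(shape, generator=gen)

print(torch.equal(starting_noise(11), starting_noise(11)))  # True  -> same seed, same image
print(torch.equal(starting_noise(11), starting_noise(12)))  # False -> new seed, new image
```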

Denoise

Base Image Denoise

Determines how much the starting image is used, with a value of 0 preserving the base image and 1 fully changing it. Note: Values below 0.5 may not significantly alter the image.
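As a rough mental model (a simplification, not the exact scheduler math ComfyUI uses), the denoise value can be read as the fraction of the sampling steps spent re-working the base image:

```python
# Rough mental model only: denoise ~ fraction of the sampling steps
# spent re-working the base image rather than keeping it.
total_steps = 30
denoise = 0.6
effective_steps = round(total_steps * denoise)  # ~18 steps of refinement
print(effective_steps)
```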

Use Empty Latent Image

Ignores the denoise value and base image, generating a fully random starting image.


Style Prompt

The Style Prompt is a preset text prompt that gets combined with the main positive and negative prompts. It makes switching between different image styles quicker.

(Example images: with watercolor style vs. without style.)

ControlNet

ControlNet uses additional data to guide generation, affecting composition while keeping color and lighting. It enhances segmented prompts.

Add ControlNet

Choose from available types, which store preset models and preprocessed images for guidance.

ControlNet Entry
  • Enable/Disable Button: Determines whether the ControlNet is used.
  • Strength: A value between 0 and 1 (values higher than 1 can be typed in manually).
  • Remove ControlNet Button: Permanently removes the ControlNet from the panel.
  • ControlNet Model: The trained model to use with the given ControlNet. Once a model is selected for a given CN type, it becomes the default when adding a new CN of the same type.

Depth ControlNet

  • Additional option to Calculate Depth From Scene. This calculates the scene’s mist pass start and end distances. If the scene has more than 500,000 vertices, the Calculate Depth option is replaced by the start and end distance values in the simplified UI.

Canny Edge ControlNet

  • Additional options to set the low and high thresholds for the canny edge detection.

NOTE: At the moment, only the canny ControlNet type is processed further within ComfyUI. The selected preprocessed image will be used for the canny edge detection.
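For context on what the two thresholds control, the canny step ComfyUI performs is conceptually the classic canny edge detection shown below with OpenCV (the exact implementation and value ranges inside ComfyUI may differ; the file name here is only a placeholder):

```python
import cv2

# Illustrative only: canny edge detection with a low and a high threshold.
# Edges weaker than the low threshold are discarded; edges stronger than the
# high threshold are kept; edges in between are kept only if connected to strong ones.
image = cv2.imread("preprocessed_render.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
edges = cv2.Canny(image, threshold1=100, threshold2=200)
cv2.imwrite("canny_edges.png", edges)
```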

LoRA

LoRA (Low-Rank Adaptation) is a method for fine-tuning large AI models by making small, targeted changes to specific parts of the model. This approach is efficient and allows for customizing how images are generated without having to retrain the entire model. A LoRA model may be trained on specific styles, characters, objects, or themes to achieve desired effects in image generation.
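In concrete terms, a LoRA stores two small matrices whose product is added onto an existing weight of the base model, scaled by the strength value described below. The NumPy sketch is purely illustrative of that idea, not how Texturaizer or ComfyUI actually load LoRAs.

```python
import numpy as np

# Illustrative only: the low-rank update at the heart of LoRA.
d, k, r = 768, 768, 8           # base weight is d x k; the LoRA rank r is much smaller
W = np.random.randn(d, k)       # frozen weight from the base model
B = np.random.randn(d, r)       # small trained LoRA matrices (only these are stored)
A = np.random.randn(r, k)

strength = 0.8                  # the per-LoRA strength exposed in the panel
W_patched = W + strength * (B @ A)
```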

Add LoRA

Choose from available LoRAs.

LoRA Entry
  • Enable/Disable: The check mark next to each LoRA temporarily enables or disables it.
  • Strength: Each LoRA has a strength value that controls how much impact it has on the generation process. Strength values higher than 1 can be entered manually.
  • Remove LoRA: The X to the right of each LoRA permanently removes it.


Advanced UI

The Advanced UI contains additional options for finer control over results.

To save UI space, the different categories of PROMPT PARAMS will open and close automatically. To keep a category open, use the pin icon next to each title.

See Simple UI

Advanced Prompts Panel

Workflow and Model

ComfyUI uses “workflows”: node-based generation graphs that determine how an image is generated. These workflows are customizable within the ComfyUI Web UI node interface. The plugin comes with a set of default workflows. Currently, custom workflows cannot be uploaded into Blender, but they can utilize Texturaizer data through the Web UI.

Workflow

Select the workflow to use for generation. Some workflows require additional ComfyUI dependencies. See the ‘Workflows’ page to learn more. The “SD Starter” workflow does not require additional dependencies and can be used with Stable Diffusion 1.5 and SDXL models. Workflows are defined as .json files and can be found under resources/workflows of the add-on directory.

Model

This is the AI model to use for the generation, typically large .ckpt or .safetensors files found in the models/checkpoints folder of ComfyUI. To access models, you must connect to the ComfyUI server. For more information on models and downloads, see the ‘Models’ page.

Architecture

Select the model architecture, which will affect the types of models and workflows that can be used. For more information, see the ‘Stable Diffusion or Flux?’ page.

FLUX

If the “FLUX” architecture is selected, three additional models are required: Clip 1, Clip 2, and VAE.

  • Clip 1 and 2: Flux models do not have the text encoder baked in, so two additional text encoder models (often referred to as “Clip” models) are required for text conditioning.
  • VAE (Variational Autoencoder): The VAE, used to encode and decode images to and from latent space, is also not embedded within Flux models, so it needs to be specified separately during the generation process.

Prompts

Prompts guide the generation process. For more information on creating effective prompts, visit the ‘Effective Prompting’ page.

To the right of each prompt is an Expand Prompt Editor button that opens a custom Texturaizer text editor pop-up, providing a better typing experience with shortcuts for advanced prompt editing.

Positive Prompt G

Describe your image. For simple prompting, use this. For SDXL, complete sentences may yield better results.

Positive Prompt L

SDXL uses two positive prompts, G and L. While a separate L prompt is not required, it is typically used for attributes (e.g., good lighting, render, high quality).

Negative Prompt

Excludes certain concepts from the generation. It’s recommended to generate without a negative prompt first and add one if necessary (e.g., for “a car driving along a cliffside,” a negative prompt could be “blue”).


Sampler

The sampler (KSampler) is usually the final step in the generation process, affecting quality and speed. For preset options, see the Parameters button next to Sampler.

Presets

A dropdown to select from preset sampler settings for different model types. Selecting “Apply ___” will configure certain settings (most of which are only accessible in the Advanced UI). Clicking on the preset name will show the settings that will be applied.

Seed Row
  • Seed: An integer used for generating deterministic random noise at the start of the diffusion process. Changing the seed produces different images.
  • Randomize Seed: Generates a new random seed value.
  • Auto: Randomizes the seed after each generation. Combined with Auto Export, this creates continuous generations.
Guidance
  • CFG: Classifier-Free Guidance scale. Higher values follow the prompt more closely.
  • Flux Guidance: Replaces CFG when using the FLUX architecture.
Steps

The number of steps used for generating an image, affecting quality and time.

Batch Size

Number of images generated in parallel. Seeds are incremental (e.g., Seed 11 with Batch 4 produces seeds 11, 12, 13, 14).

Denoise
  • Determines how much the starting image is used, with a value of 0 preserving the base image and 1 fully changing it. Note: Values below 0.5 may not significantly alter the image.
  • Use Empty Latent Image: Ignores the denoise value and base image, generating a fully random starting image.
Sampler and Scheduler
  • The Sampler is the algorithm responsible for refining the image during the generation process. It determines how noise is gradually removed from the latent image, affecting both the quality and speed of the generation.
  • The Scheduler guides the progression of noise removal over the generation steps. It determines the rate and manner in which the noise is reduced, influencing the overall generation process.

ControlNet

ControlNet is a tool that uses extra information to guide how an image is generated. It helps shape the overall layout and composition of the image while keeping the original colors and lighting. This makes it especially useful for improving segmented prompts, ensuring that specific parts of the image are generated as intended.

Add ControlNet

Choose from available types, which store preset models and preprocessed images for guidance.

ControlNet Entry
  • Enable/Disable Button: Determines whether the ControlNet is used.
  • Strength: A value between 0 and 1.

NOTE: Values higher than 1 can be typed in manually.

  • Start Factor: A value between 0 and 1 that maps to the generation step at which the ControlNet starts being applied (e.g., with 30 steps, a start factor of 0.5 begins applying it at step 15).
  • End Factor: A value between 0 and 1 that maps to the generation step at which the ControlNet stops being applied.
  • Remove ControlNet Button: Permanently removes the ControlNet from the panel.
  • Preprocessed Image: The base render layer to use with the ControlNet. This image is used as-is, except for the canny CN, which applies canny edge detection to the image.
  • ControlNet Model: The trained model to use with the given ControlNet. Once a model is selected for a given CN type, it becomes the default when adding a new CN of the same type.

Depth ControlNet

  • Option to invert the depth pass. This is ON by default.
  • Option to Calculate Depth From Scene. This calculates the scene’s mist pass start and end distances (a rough sketch of this calculation is shown below).
  • Start and end distance values. These determine the range within which depth is calculated.
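A rough sketch of what calculating depth from the scene amounts to is shown below, using standard Blender Python. It is an illustration of the idea (camera-to-geometry distances mapped onto the mist range), not Texturaizer's actual implementation; note that Blender stores the range as a start value plus a depth, so "end" corresponds to start + depth.

```python
import bpy
from mathutils import Vector

# Illustrative sketch: derive a mist start/end range from the distances between the
# active camera and the bounding-box corners of every mesh object in the scene.
scene = bpy.context.scene
cam_loc = scene.camera.matrix_world.translation

distances = []
for obj in scene.objects:
    if obj.type != 'MESH':
        continue
    for corner in obj.bound_box:  # 8 bounding-box corners in object space
        world_corner = obj.matrix_world @ Vector(corner)
        distances.append((world_corner - cam_loc).length)

if distances:
    start, end = min(distances), max(distances)
    mist = scene.world.mist_settings
    mist.start = start
    mist.depth = end - start  # Blender's mist "end" is start + depth
```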

Canny Edge ControlNet

  • Additional options to set the low and high thresholds for the canny edge detection.

NOTE: At the moment, only the canny ControlNet type is processed further within ComfyUI. The selected preprocessed image will be used for the canny edge detection.


LoRA

LoRA (Low-Rank Adaptation) is a method for fine-tuning large AI models by making small, targeted changes to specific parts of the model. This approach is efficient and allows for customizing how images are generated without having to retrain the entire model. A LoRA model may be trained on specific styles, characters, objects, or themes to achieve desired effects in image generation.

Add LoRA

Choose from available LoRAs.

LoRA Entry
  • Enable/Disable: The check mark next to each LoRA temporarily enables or disables it.
  • Strength: Each LoRA has a strength value that controls how much impact it has on the generation process. Strength values higher than 1 can be entered manually.
  • Remove LoRA: The X to the right of each LoRA permanently removes it.

Style Prompt

The Style Prompt is a preset text prompt that gets combined with the main positive and negative prompts. It makes switching between different image styles quicker.

Selecting a style preset sets the positive and negative style prompts, which can then be modified separately. The style prompts modify the scene prompts: the scene prompt is inserted at the “{prompt}” keyword, or appended to the end of the style prompt if the keyword is not present. For SDXL, if a style prompt contains a “.”, anything after the “.” is applied to Positive Prompt L (see the sketch below).

  • Positive Prompt: Positive prompt that modifies the scene positive prompt.
  • Negative Prompt: Negative prompt that modifies the scene negative prompt.
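The combination logic described above can be sketched roughly as follows; this is an illustration of the described behavior (with a made-up watercolor style), not the add-on's actual code.

```python
def combine_style_and_scene(style_prompt: str, scene_prompt: str):
    # For SDXL, anything after the first "." in the style prompt goes to Positive Prompt L.
    style_g, _, style_l = style_prompt.partition(".")

    # The scene prompt is inserted at "{prompt}", or appended if the keyword is absent.
    if "{prompt}" in style_g:
        prompt_g = style_g.replace("{prompt}", scene_prompt)
    else:
        prompt_g = f"{style_g}, {scene_prompt}" if style_g.strip() else scene_prompt

    return prompt_g.strip(), style_l.strip()

g, l = combine_style_and_scene(
    "watercolor painting of {prompt}. soft lighting, paper texture",  # hypothetical style
    "a car driving along a cliffside",
)
print(g)  # watercolor painting of a car driving along a cliffside
print(l)  # soft lighting, paper texture
```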
(Example images: with watercolor style vs. without style.)

Image Prompt (IP Adapter)

IP Adapter is a tool that adjusts the AI’s image generation based on specific reference images. It helps the AI match certain styles, details, or textures from the reference, guiding the overall look while maintaining the original structure and colors of the generated image.

  • Currently supports up to two input images.
  • Encoder Weight: Each image has an encoder weight that determines the relative strength of the images to each other.
  • Loader: This setting defines how the reference images are loaded into the IP Adapter. It specifies the processing method used to extract features from the reference images.
  • Combine: This option determines how the two reference images are combined.
  • Embed Weight: This parameter adjusts how strongly the combined reference features influence the generated image overall. Higher weights result in a closer match to the reference styles/subjects.
  • Start: This indicates the initial step where the IP Adapter starts to influence the image generation process. Setting this at an early step will have a stronger impact on the initial composition.
  • End: This sets the step at which the IP Adapter stops influencing the image. Ending its effect earlier in the process allows the model to apply more of its default characteristics later in the generation.
  • Type: Determines the impact of the IP Adapter over the course of the generation.
  • Scaling: Embeds Scaling determines how the reference images influence the generated image. There are four options:
    • V Only: The original pre-2.0 implementation, focusing solely on vector scaling. It offers strong adherence to the reference image but may have less alignment with the prompt.
    • K+V: Incorporates both key and vector scaling, allowing for better prompt adherence, while still reflecting the reference images.
    • K+V w/c Penalty: Adds a “channel penalty,” providing a sharper effect compared to K+V alone. It enhances prompt adherence but with a slight reduction in resemblance to the reference images.
    • K+(mean)V w/c Penalty: Further refines the balance between the reference images and prompt adherence by incorporating the mean of the vector scaling with the channel penalty. It offers an even sharper focus while retaining better alignment with the prompt.

NOTE: Image Prompt is not available for all workflows. Try switching to the “SD Complete” workflow.
It also requires additional ComfyUI nodes (https://github.com/cubiq/ComfyUI_IPAdapter_plus).