How to Set Up RunPod for Texturaizer (Cloud Set Up)

11 Jan 2025

Imagine creating stunning AI-generated images and textures using Texturaizer, regardless of your computer’s specifications. Whether you’re working on a lightweight laptop while traveling or using a desktop setup that needs more performance, running Texturaizer with RunPod lets you leverage powerful cloud GPUs from anywhere with an internet connection. Here’s a comprehensive guide to setting up RunPod for Texturaizer.


Disclaimer
Texturaizer is NOT affiliated with RunPod, and the setup described here is one example of how to utilize cloud GPUs for Texturaizer. Texturaizer will work with other cloud services; however, the setup and performance may vary depending on the service selected. If you know of other cloud services that may be beneficial for Texturaizer users, please contact us so we can look into them and share the information with others! Contact us on Discord.


1. Sign Up and Log In to RunPod

  • Create an Account: Visit RunPod and sign up for a new account if you haven’t already.
  • Log In: Access your account by logging in with your credentials.

2. Add Funds to Your Account

  • Navigate to Billing: In the RunPod navigation panel, go to the “Billing” section.
  • Add Funds: Use the payment options to add funds to your account. As of this writing, $10 is the minimum amount that can be added.

3. (Optional) Utilize a Network Volume for Persistent Storage

A Network Volume on RunPod allows multiple pods to share the same storage, making it particularly useful for saving your ComfyUI configuration, including models and custom nodes. By storing these files on a Network Volume, you can reuse them across sessions without the hassle of re-installing them each time you start a new pod. This can save both time and costs if you use RunPod frequently, as the faster boot times reduce resource expenses spent reinstalling everything. It also allows you to start and stop a pod as needed, so you can take “short breaks” without paying for downtime.

Keep in mind, Network Volumes are only available with Secure Cloud pods. It’s important to note that GPU pricing is higher on Secure Cloud compared to Community Cloud, and not all GPU options are available—particularly lower-cost, entry-level GPUs.

Using a Network Volume or Secure Cloud is optional but can be a significant time saver, especially if you frequently use ComfyUI on RunPod. HOWEVER, as stated above, it may not be the most cost-effective option. Be sure to research and evaluate the options to find the setup that best fits your needs and budget! You may want to try first on a cheaper Community Cloud GPU, and if you are satisfied with the setup and plan to use it regularly, then upgrade to a Network Volume setup.

  • Create a Network Volume: In the RunPod navigation panel, go to the “Storage” section and create a new Network Volume. I recommend allocating at least 25GB of storage for basic use. If you want more flexibility to download additional models or plan to use it with Flux, consider opting for 50GB or more.
  • Ensure that the Data Center you choose for your Network Volume has the GPU you would like to use available (for instance, an RTX 3080 or RTX 4090).

4. Deploy a New Pod Using the Texturaizer ComfyUI Template

  • Navigate to Pods: In the RunPod navigation panel, click on the “Pods” section.
  • Select Pod: Click on “Deploy a Pod” to initiate a new deployment.
  • Select Network Volume: If using a Network Volume, choose it from the corresponding dropdown menu.
  • Select High Internet Speed: If using Community Cloud, set the internet speed to “Extreme” ideally, or at least “High”, depending on the GPUs available. This filters to GPU options available on the fastest networks, and in my testing it can cut setup time in half! (Plus, increasing the network speed does not raise the price of a given GPU, so there is really no reason not to do this.)
  • Configure Resources: Choose a GPU instance that meets your performance needs. For initial experimentation, a GPU with at least 10GB of VRAM is recommended. For more intensive generation workflows (such as FLUX), consider a more powerful GPU. For reference, I am using an RTX 4090 (24GB of VRAM). A 4090 on Community Cloud costs around $0.34 per hour, and a 3080 costs around $0.17 per hour. A 3080 will be more than sufficient for most SDXL workflows. And a 3090 at $0.22 per hour will run essentially everything a 4090 can (just a tad slower) at about 65% of the cost!
  • Choose a ComfyUI Template: Click “Change Template” and Search “Texturaizer” or “ComfyUI Texturaizer”. There are a few templates available.
    • The “ComfyUI Texturaizer” template comes with the ComfyUI Manager and Texturaizer nodes preinstalled, which makes the rest of the installation quicker. This template does not come with any models installed.
    • The “ComfyUI Texturaizer – SDXL” template also installs this SDXL checkpoint and ControlNet model. This makes setup a lot easier but increases the automated portion of the setup time.
    • A Flux Template will be available soon.
  • NOTE: If you do not see the templates when searching, please click on the direct access links of the template names above.

  • Switch ComfyUI Port From HTTP to TCP: If you are not using one of the Texturaizer Templates, THIS STEP IS CRITICAL! If using one of the ComfyUI Texturaizer Templates, this is already done for you so you can skip this step.
    • After selecting a template, click “Edit Template”. Go down to the ports options, and you may see something like:
      HTTP ports: 8888,8188 TCP Ports: 22
    • You must switch port 8188 from HTTP to TCP!
      * Other templates may use different port values, what is important is that the port that exposes ComfyUI is under TCP and not HTTP.
    • In this example it should look like this:
      HTTP ports: 8888 TCP Ports: 22,8188
  • Deploy the Pod: After configuring, click “Deploy On-Demand” to launch your pod.
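To get a feel for the hourly rates quoted above, here is a quick back-of-envelope cost sketch you can run in any terminal. The rates are the example prices from this article and will fluctuate, so treat the numbers as illustrations only.

```shell
# Estimate a session's cost from an hourly rate and a session length.
# Rates below are the example Community Cloud prices quoted in this guide;
# actual RunPod prices change over time.
session_cost() {  # usage: session_cost RATE_PER_HOUR HOURS
  awk -v r="$1" -v h="$2" 'BEGIN { printf "$%.2f\n", r * h }'
}

session_cost 0.22 4   # RTX 3090 for a 4-hour session -> $0.88
session_cost 0.34 4   # RTX 4090 for the same session -> $1.36
```

Remember that a pod bills while it is running, even when idle, so stop it when you take longer breaks.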

5. Access and Configure Your Pod

  • The pod will take 5-10 minutes to start up, depending on network speed and the template selected. If using a Network Volume, this time is only spent on the first startup; subsequent ones will be much faster.
  • Access the Pod: Once the pod is running, click on the “Connect” button. In the resulting popup, select “Jupyter Lab” (highlighted here in red). Also take note of the Direct TCP port at the bottom of the popup (highlighted in blue), specifically the one linked to the internal ComfyUI port (8188 here): in this case 213.144.200.241:12768. We will use this address to access ComfyUI once the server is running.

If you see “Not Ready”, the template has not finished installing. You may want to reload the page every minute or so, as it doesn’t seem to always update on its own.

  • Your JupyterLab should look something like this. Select Terminal:
  • In the resulting terminal type ./run_gpu.sh and press enter.

In around 30 seconds or less, you should see this line near the bottom of the terminal. Unlike when running locally, the server address shown here is not what you will use to access ComfyUI. Instead, we will use the TCP address we took note of earlier, from our connection window in RunPod.
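If you want to confirm from the same terminal that ComfyUI is actually up before trying the external address, you can probe its /system_stats endpoint. This is a sketch: port 8188 assumes the template's default internal ComfyUI port, so adjust it if yours differs.

```shell
# Build the local health-check URL for ComfyUI running inside the pod.
# 8188 is the internal port used by the Texturaizer templates in this guide.
COMFY_PORT=8188
STATS_URL="http://127.0.0.1:${COMFY_PORT}/system_stats"
echo "$STATS_URL"

# Uncomment once ./run_gpu.sh reports the server is listening; any JSON
# response means ComfyUI is up, and the external TCP address should work too:
# curl -s "$STATS_URL"
```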


6. Start ComfyUI Web Interface

  • Open WebUI: In your browser, paste the Direct TCP port we took note of earlier. If you need it again, it can be found in your pod’s Connect settings. Again, in this case my TCP address is 213.144.200.241:12768 (the one pointing to 8188). If all steps were done right, you should see something like this:
  • Test Run: If using the “ComfyUI Texturaizer – SDXL” template, you will already have a model installed and can test your setup by pressing “Queue”. It should generate an image like this:
  • If using a template without preinstalled models, you will need to install a model before proceeding.

7. Install Custom Nodes and Models

  • Not using one of the Texturaizer templates? If you are using a different template, you will need to ensure you have the Texturaizer nodes installed before proceeding.
  • Open the ComfyUI Manager: Click “Manager” at the top of the interface and then select “Custom Nodes Manager”
  • Install ComfyUI-Texturaizer: Search for Texturaizer and click install. Once installed, you will need to restart the server. To do so, return to the Manager home page and click “Restart”.
  • Install Custom Models: Whether you are using a template with preinstalled models or not, you may want to install custom models. There are a few ways to do this. The ComfyUI Manager does have a built-in model browser, but its selection of models is very limited. Instead, you can download a model to your local machine and then drag and drop it into the appropriate folder in JupyterLab. For some recommended models, see the recommended models page.
  • To install a model from your local machine, go to your JupyterLab workspace and, in the side panel, navigate to ComfyUI/models/checkpoints. You will not be able to access the checkpoints folder directly, but it can be accessed through the terminal; this video clip shows how you can do that. For models other than checkpoint models, select the appropriate folder (e.g. loras) and drag and drop a model downloaded to your local machine into that folder.
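If you prefer the pod's terminal to drag-and-drop, you can also download a model straight onto the pod, which avoids uploading a multi-gigabyte file from your local machine. A minimal sketch, assuming the /workspace/ComfyUI layout used by the templates in this guide (the helper function and the example URL are illustrative, not part of the templates):

```shell
# Hypothetical helper: print the ComfyUI folder for a given model type,
# assuming the standard /workspace/ComfyUI layout used on the pod.
model_dir() {
  case "$1" in
    checkpoint) echo "/workspace/ComfyUI/models/checkpoints" ;;
    lora)       echo "/workspace/ComfyUI/models/loras" ;;
    controlnet) echo "/workspace/ComfyUI/models/controlnet" ;;
    *)          echo "/workspace/ComfyUI/models/$1" ;;
  esac
}

model_dir checkpoint   # -> /workspace/ComfyUI/models/checkpoints

# Example: download a model directly on the pod (replace the placeholder
# URL with your model's direct download link before running):
# wget -P "$(model_dir checkpoint)" "https://example.com/model.safetensors"
```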

8. Connecting to Texturaizer/Blender

Once you have your pod running with the appropriate models and nodes installed, it is time to connect it to Texturaizer!

  • Install Texturaizer: You can follow this guide to get Texturaizer installed. Ensure you have the latest available version of Texturaizer.
  • Set the Server Address: In your Texturaizer panel, navigate to the actions panel, then click on the gear icon at the top right. This will open your actions settings in a popup panel.
    • Disable “Use Local ComfyUI Install” (It should be grey).
    • Ensure “Embed Data” is enabled (Blue check).
    • Then Either:
      • Fill in the server host and port with the corresponding parts of the TCP address. If your TCP address is 213.144.200.241:12768, the Host is everything before the colon, and the Port is the number after it.
      • Alternatively, check “Override Server” and put the full address into the “Server Override” field. For this you must include the scheme, e.g.: http://213.144.200.241:12768/
    • Note: when starting a new pod, this server address will change, so you will have to re-enter the address each time.
  • Once filled out, click the reload button next to the gear icon. If the server is running and connects, the “Connect to Server” button will change to “TEXTURAIZE”. Clicking “TEXTURAIZE” will make it turn blue and say “TEXTURAIZING”. At this point you are ready to generate!
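The host/port split described above can be sketched in any terminal. The address below is the example one from this guide; yours will differ each time you start a new pod.

```shell
# Split RunPod's direct TCP address into the Host and Port fields
# Texturaizer expects (example address from this guide).
TCP_ADDR="213.144.200.241:12768"
HOST="${TCP_ADDR%:*}"    # everything before the colon
PORT="${TCP_ADDR##*:}"   # everything after the colon

echo "Host:            $HOST"
echo "Port:            $PORT"
echo "Server Override: http://${TCP_ADDR}/"
```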

For more information on creating Images, please see this guide or watch the getting started video!


By following these steps, you can effectively set up and utilize Texturaizer on RunPod, unlocking the full potential of cloud-based GPUs for your AI image generation projects.

Have Questions?

Join our Discord Community to ask questions, share your generations, and interact with other users!