Getting started with TensorLay

From installation to your first AI generation in under 10 minutes.

System Requirements

Your PC (Windows)

  • Windows 10 or Windows 11
  • NVIDIA GPU with CUDA support (GTX 1060+ recommended)
  • At least 8 GB VRAM (16 GB recommended)
  • Internet connection
  • ~500 MB disk space for the app itself; downloaded models need additional space (several GB each — see Downloading Models)

Your VPS (Linux)

  • Any Linux distribution (Ubuntu 20.04+ recommended)
  • Python 3.8 or newer
  • Root or sudo access
  • Open SSH port (22)
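
The Python and access requirements above can be checked before you run the installer. A minimal sketch (the messages and structure are illustrative, not part of TensorLay):

```shell
# Prerequisite check for the VPS: confirms python3 exists and is 3.8+.
command -v python3 >/dev/null || { echo "python3 not found" >&2; exit 1; }
python3 -c 'import sys; raise SystemExit(0 if sys.version_info >= (3, 8) else 1)' \
  || { echo "Python 3.8+ required" >&2; exit 1; }
echo "Prerequisites look OK"
```

If this prints "Prerequisites look OK", the relay installer should run without Python-related errors.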

Installation

  1. Download the installer

    Get the latest TensorLay-Setup.exe from the download section.

  2. Run the installer

    Double-click the installer and follow the prompts. TensorLay will be installed to C:\Program Files\TensorLay.

  3. Launch the app

    Open TensorLay from the desktop shortcut or Start Menu. The app will open and show your GPU information.

VPS Setup

Before pairing, you need to install the TensorLay relay on your VPS. This is a lightweight FastAPI service that handles the connection.

Run on your VPS
curl -sL https://tensorlay.com/install.sh | sudo bash

This command will:

  • Install the TensorLay relay service
  • Generate an 8-character pairing code
  • Start the relay on port 8090

Save the pairing code — you'll need it in the next step.
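
For illustration, an 8-character pairing code of the kind the installer generates can be produced like this (a sketch only — the actual relay.py may use a different character set or scheme):

```shell
# Generate a random 8-character code from uppercase letters and digits.
# /dev/urandom supplies random bytes; tr filters them to the allowed set.
code=$(tr -dc 'A-Z0-9' < /dev/urandom | head -c 8)
echo "Pairing code: $code"
```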

Pairing

  1. Click "Connect" in the app

    In the bottom-left corner, click the "Connect" button next to "SSH Tunnel".

  2. Enter your VPS IP address

    Type the IP address of your VPS (e.g., 185.70.184.239).

  3. Enter the pairing code

    Type the 8-character code from the VPS setup step.

  4. Click "Connect"

    SSH keys are exchanged automatically. The tunnel is then established, and your local services become accessible from the VPS.
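
What the app automates here is, roughly, a reverse SSH tunnel. A hand-rolled equivalent might look like the sketch below — the username is hypothetical, and SD Forge's port 7860 is used as the example; the app manages the actual ports and credentials for you:

```shell
# Reverse tunnel: expose the PC's local port 7860 as port 7860 on the VPS.
# -N means "no remote command, just forward ports".
# "tensorlay" and the IP are placeholders for your own VPS credentials.
ssh -N -R 7860:localhost:7860 tensorlay@185.70.184.239
```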

Managing Services

On the Home page, you'll see all available AI services with their status.

Service actions

  • Install — Downloads and sets up the service (git clone + pip install). This may take several minutes.
  • Start — Launches the service. It will become accessible on the VPS through the tunnel.
  • Stop — Stops the running service.
  • Uninstall — Removes the service from your PC.

Once a service is started and the tunnel is connected, AI agents on your VPS can access it via localhost:PORT (e.g., localhost:7860 for SD Forge).
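
From the VPS side, a quick reachability check looks like this (SD Forge's port from the example above; whether a service answers on `/` depends on the service, so treat the path as an assumption):

```shell
# Run on the VPS: succeeds only if the tunnel is up and the service
# behind it is listening on the forwarded port.
curl -s http://localhost:7860/ >/dev/null && echo "Service reachable"
```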

Downloading Models

Navigate to the Models page to browse and download AI models. Models are downloaded directly to the correct service directory with progress tracking.

Model types

  • Checkpoints — Full Stable Diffusion models (2-7 GB each)
  • LoRAs — Fine-tuned style adapters (10-200 MB each)
  • LLMs — Language models for Ollama (1-40 GB each)
  • TTS voices — Voice models for AllTalk

Troubleshooting

Tunnel won't connect

  • Make sure the relay is running on your VPS: systemctl status tensorlay-relay
  • Check that port 22 (SSH) is open on the VPS
  • Try generating a new pairing code: python3 /opt/tensorlay-relay/relay.py --new-code

Service won't start

  • Check the Logs page for error details
  • Make sure you have enough VRAM for the service
  • Try uninstalling and reinstalling the service

App crashes on startup

  • Run .\TensorLay.exe 2>&1 in PowerShell to see the error
  • Make sure you have .NET 8 runtime installed
  • Try reinstalling the app