Gensyn-ai

  • CPU: Minimum 16GB RAM (more RAM recommended for larger models or datasets).

OR

  • GPU (Optional): Supported CUDA devices for enhanced performance:

    • RTX 3090

    • RTX 4090

    • A100

    • H100

    We recommend GPUs with >=24GB VRAM.

  • Note: You can run the node without a GPU using CPU-only mode.

Create an Ngrok account here: https://dashboard.ngrok.com/get-started/setup/linux and copy your authtoken.
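Once the ngrok CLI is installed per that page, you can register your token with ngrok's standard config command (a minimal sketch; YOUR_TOKEN is a placeholder for the token shown in your dashboard):

ngrok config add-authtoken YOUR_TOKEN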

1) Install Node

curl -o install_gensyn.sh https://raw.githubusercontent.com/lukmanc405/testnet/refs/heads/main/gensyn/install_gensyn.sh && chmod +x install_gensyn.sh && ./install_gensyn.sh


2) Login

Create a screen session named swarm

screen -S swarm

Run this command in your terminal

cd $HOME/rl-swarm && python3 -m venv .venv && source .venv/bin/activate && ./run_rl_swarm.sh

Select your model

For an RTX 3080/3090 or below, Math A 0.5 or 1.5 is recommended.

For an RTX 4090 or above, you can choose Math A 7.

1- Wait until you see Waiting for userData.json to be created... in the log

2- Detach and go back to /root with CTRL + A, then D

Find the login link, then click it or copy and paste it into your browser

3- Log in with your email

  • After login, the installation continues in your terminal.

4- Optional: Push models to Hugging Face

  • Enter the Hugging Face access token you created when prompted.

  • This needs about 2GB of upload bandwidth for each model you train; you can decline the prompt if you'd rather skip it.
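Optionally, you can sanity-check the token before the run asks for it. A minimal sketch, assuming pip is available (the huggingface_hub package provides the huggingface-cli tool):

pip install -U huggingface_hub
huggingface-cli login    # paste your access token when prompted
huggingface-cli whoami   # prints your Hugging Face username if the token works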


Node Name

  • Your node is now running. Find your node name after the text INFO:hivemind_exp.trainer.hivemind_grpo_trainer: (mine is aquatic monstrous peacock, as in the image below). You can use CTRL+SHIFT+F to search for INFO:hivemind_exp.trainer.hivemind_grpo_trainer: in the terminal.
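If you also want to search from the shell instead of the terminal scrollback, a hedged sketch (the logs/ path is an assumption; your run may write output elsewhere or only to the screen session):

grep "hivemind_grpo_trainer" $HOME/rl-swarm/logs/*.log | tail -n 20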


Screen commands

  • Minimize: CTRL + A + D

  • Return: screen -r swarm

  • Stop and Kill: screen -XS swarm quit


Backup

You need to back up swarm.pem.

VPS:

Connect to your VPS using the MobaXterm client so you can move files to your local system. Back up this file:

  • /root/rl-swarm/swarm.pem
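If you prefer the command line over MobaXterm, an scp download run from your local machine looks like this (a sketch; YOUR_VPS_IP is a placeholder, and adjust the user if you don't log in as root):

scp root@YOUR_VPS_IP:/root/rl-swarm/swarm.pem .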

WSL:

Search \\wsl.localhost in your Windows Explorer to see your Ubuntu directory. Your main directories are as follows:

  • If installed via a username: \\wsl.localhost\Ubuntu\home\<your_username>

  • If installed via root: \\wsl.localhost\Ubuntu\root

  • Look for rl-swarm/swarm.pem
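Alternatively, from inside the WSL shell you can copy the key straight to your Windows user folder (a sketch; <pc-username> is your Windows username):

cp ~/rl-swarm/swarm.pem /mnt/c/Users/<pc-username>/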

GPU servers (e.g., Hyperbolic):

1- Connect to your GPU server by entering this command in a Windows PowerShell terminal

sftp -P PORT ubuntu@HOSTNAME
  • Replace ubuntu@HOSTNAME with the user and hostname of your GPU server

  • Replace PORT with your server port (both are in your server's SSH connection command)

  • ubuntu is the user on my Hyperbolic GPU; yours may differ (for a VPS it is usually root)

Once connected, you’ll see the SFTP prompt:

sftp>

2- Navigate to the Directory Containing the Files

  • After connecting, you’ll start in your home directory on the server. Use the cd command to move to the directory of your files:

cd /home/ubuntu/rl-swarm

3- Download Files

  • Use the get command to download the files to your local system. They’ll save to your current local directory unless you specify otherwise:

get swarm.pem
  • The downloaded file lands in the local directory from which you ran the sftp command.

    • If you ran sftp in PowerShell, swarm.pem is typically in C:\Users\<pc-username>.

  • You can now type exit to close the connection.
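As an alternative to the interactive SFTP session, a single scp command from your local machine performs the same download (a sketch using the same PORT, user, and HOSTNAME placeholders as above):

scp -P PORT ubuntu@HOSTNAME:/home/ubuntu/rl-swarm/swarm.pem .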


Recovering Backup file (upload)

Use this when you need to upload files from your local machine back to the server.

  • WSL & VPS: Drag & Drop option.

GPU servers (e.g., Hyperbolic):

1- Connect to your GPU server using SFTP (as above)

2- Upload Files Using the put Command:

In SFTP, the put command uploads files from your local machine to the server.

put swarm.pem /home/ubuntu/rl-swarm/swarm.pem
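Equivalently, you can upload without opening an SFTP session by running scp from your local machine (a sketch with the same placeholders as above):

scp -P PORT swarm.pem ubuntu@HOSTNAME:/home/ubuntu/rl-swarm/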

Run on Hyperbolic GPUs

  • To install the node on Hyperbolic, check this guide: Rent & Connect to GPU

  • Add the flag -L 3000:localhost:3000 to your Hyperbolic SSH command; this lets you access the login page from your local system (see the example below)
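The full SSH command with the forwarding flag might look like this (a sketch; the user, HOSTNAME, and PORT placeholders come from your Hyperbolic connection details):

ssh -L 3000:localhost:3000 ubuntu@HOSTNAME -p PORT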


Run on Vast.ai GPUs

  • 1- Register at Vast.ai

  • 2- Create an SSH key on your local system (if you don't have one already) with this Guide: steps 1-5

  • 3- Paste your SSH public key into Settings > SSH Keys here

  • 4- Select the Pytorch (Vast) template here

  • 5- Choose a supported GPU (I recommend >=24GB per-GPU VRAM)

  • 6- Increase the Disk Space slider to 200GB

  • 7- Top up with credits and rent the instance.

  • 8- Go to Instances, refresh the page, and click the key button

  • 9- Create an SSH key

  • 10- Copy the SSH command and set its port-forwarding flag to -L 3000:localhost:3000 (see the example after this list)

  • 11- Enter the command in Windows PowerShell and run it
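A sketch of what the final command can look like (sshN.vast.ai and PORT are placeholders; copy the exact host, port, and user from your instance's SSH command):

ssh -p PORT root@sshN.vast.ai -L 3000:localhost:3000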


Node Health

Official Dashboard

Telegram Bot

Check your Node ID with the /check command here: https://t.me/gensyntrackbot

  • Your Node ID is shown near your node name

  • ⚠️ If you receive EVM Wallet: 0x0000000000000000000000000000000000000000, your on-chain participation is not being tracked; you have to reinstall with a new email and delete the old swarm.pem


Troubleshooting:

⚠️ Upgrade viem & Node version in Login Page

1- Modify: package.json

cd rl-swarm
nano modal-login/package.json
  • Update: "viem": to "2.25.0"

2- Upgrade

cd rl-swarm
cd modal-login
yarn install

yarn upgrade && yarn add next@latest && yarn add viem@latest

cd ..

⚠️ CPU-only Users: Ran out of input

Navigate:

cd rl-swarm

Edit:

nano hivemind_exp/configs/mac/grpo-qwen-2.5-0.5b-deepseek-r1.yaml
  • Lower max_steps to 5
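If you prefer a one-liner, sed can make the same edit (a sketch, assuming the file has a line of the form max_steps: <number>; verify the result afterwards):

sed -i -E 's/^(\s*max_steps:).*/\1 5/' hivemind_exp/configs/mac/grpo-qwen-2.5-0.5b-deepseek-r1.yaml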
