Turn your Raycast Pro subscription into an OpenAI-API-compatible server.
You can run it with Docker or directly from source.

If you have Docker installed, run this command to start the server:
```shell
docker run -dit --name raychat \
    -p 8080:8080 \
    -e EMAIL='your_email' \
    -e PASSWORD='your_password' \
    -e CLIENT_ID='your_client_id' \
    -e CLIENT_SECRET='your_client_secret' \
    -e EXTERNAL_TOKEN='your_fake_openai_token' \
    --restart always \
    vaalacat/raychat:latest
```

`EXTERNAL_TOKEN` accepts multiple tokens separated by commas, e.g. `token_a,token_b`.

Or, if you already have a token, run this command instead:
```shell
docker run -dit --name raychat \
    -p 8080:8080 \
    -e TOKEN='your_token' \
    -e EXTERNAL_TOKEN='your_fake_openai_token' \
    --restart always \
    vaalacat/raychat:latest
```

Again, `EXTERNAL_TOKEN` accepts multiple comma-separated tokens.

Then you can use `http://localhost:8080/v1/chat/completions` to test your server. Both arm and amd64 images are supported.
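Once the container is up, the server speaks the standard OpenAI chat-completions wire format, so any OpenAI client should work against it. A minimal sketch using only Python's standard library (the model name and token values are placeholders, not confirmed by this project):

```python
import json
import urllib.request


def build_chat_request(base_url, token, messages, model="gpt-3.5-turbo"):
    """Build an OpenAI-style chat-completions request (not yet sent)."""
    body = json.dumps({"model": model, "messages": messages}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/v1/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            # Must match one of the EXTERNAL_TOKEN values you configured
            "Authorization": "Bearer " + token,
        },
        method="POST",
    )


req = build_chat_request(
    "http://localhost:8080",
    "your_fake_openai_token",
    [{"role": "user", "content": "Hello!"}],
)

# To actually send it (requires the server to be running):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```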
To run it directly from source:

- Clone this repo and `cd` into it
- Install Raycast
- Set up Fiddler to capture traffic and obtain the `ClientID` and `ClientSecret`
- Put the `ClientID` and `ClientSecret` into a `.env` file, and fill in your Email and Password
- Run `go run main.go`; your server will start at `http://localhost:8080`
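The `.env` file referenced above might look like this; the values are placeholders, and the variable names are assumed to match the `-e` options of the Docker command:

```env
EMAIL=your_email
PASSWORD=your_password
CLIENT_ID=your_client_id
CLIENT_SECRET=your_client_secret
EXTERNAL_TOKEN=your_fake_openai_token
```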
You can then use `http://localhost:8080/v1/chat/completions` to test your server.