.cortexrc

Most of Cortex's functionality can be configured through a configuration file. During installation, a .cortexrc file is generated with sensible defaults. Using this file, you can change the location and name of the data folder, the Cortex API server host and port, and more.

File Location

The configuration file is stored in the following locations:

  • Windows: C:\Users\<username>\.cortexrc
  • Linux: /home/<username>/.cortexrc
  • macOS: /Users/<username>/.cortexrc
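
Because the file always sits directly in the user's home directory, it can be located the same way on every platform. The file uses plain YAML key: value syntax, so it is also easy to inspect programmatically. Below is a minimal Python sketch, assuming the PyYAML package is installed (not something Cortex itself requires):

from pathlib import Path

import yaml  # pip install pyyaml

# .cortexrc lives in the home directory on Windows, Linux, and macOS alike,
# so Path.home() resolves it on all three platforms.
config_path = Path.home() / ".cortexrc"
config = yaml.safe_load(config_path.read_text())

print(config["dataFolderPath"])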

Configuration Parameters

You can configure the following parameters in the .cortexrc file:

Parameter          | Description                                        | Default Value
------------------ | -------------------------------------------------- | ------------------
dataFolderPath     | Path to the folder where .cortexrc is located.     | User's home folder
apiServerHost      | Host address for the Cortex.cpp API server.        | 127.0.0.1
apiServerPort      | Port number for the Cortex.cpp API server.         | 39281
logFolderPath      | Path to the folder where logs are located.         | User's home folder
logLlamaCppPath    | The llama-cpp engine log file path.                | ./logs/cortex.log
logOnnxPath        | The onnxruntime engine log file path.              | ./logs/cortex.log
maxLogLines        | The maximum number of log lines written to a file. | 100000
checkedForUpdateAt | Timestamp of the last update check.                | 0
latestRelease      | The latest release version.                        | Empty string
huggingFaceToken   | HuggingFace token.                                 | Empty string

In the future, every parameter will be editable from the Cortex CLI; at present, only a select few are.
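
Two of these parameters, apiServerHost and apiServerPort, determine where the API server listens, so they are what a client needs in order to reach it. As an illustration, the Python sketch below builds the server's base URL from the loaded config and probes it; the /healthz endpoint is an assumption here, so substitute whichever health-check route your Cortex version exposes:

from pathlib import Path
from urllib.request import urlopen

import yaml  # pip install pyyaml

config = yaml.safe_load((Path.home() / ".cortexrc").read_text())

# Build the server's base URL from the configured host and port.
base_url = f"http://{config['apiServerHost']}:{config['apiServerPort']}"

# /healthz is an assumed endpoint; adjust to your Cortex version.
with urlopen(f"{base_url}/healthz") as resp:
    print(resp.status)  # 200 when the server is up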

Example of the .cortexrc file:


logFolderPath: /home/<user>/cortexcpp
logLlamaCppPath: ./logs/cortex.log
logTensorrtLLMPath: ./logs/cortex.log
logOnnxPath: ./logs/cortex.log
dataFolderPath: /home/<user>/cortexcpp
maxLogLines: 100000
apiServerHost: 127.0.0.1
apiServerPort: 39281
checkedForUpdateAt: 1737636738
checkedForLlamacppUpdateAt: 1737636592699
latestRelease: v1.0.8
latestLlamacppRelease: v0.1.49
huggingFaceToken: ""
gitHubUserAgent: ""
gitHubToken: ""
llamacppVariant: linux-amd64-avx2-cuda-12-0
llamacppVersion: v0.1.49
enableCors: true
allowedOrigins:
- http://localhost:39281
- http://127.0.0.1:39281
- http://0.0.0.0:39281
proxyUrl: ""
verifyProxySsl: true
verifyProxyHostSsl: true
proxyUsername: ""
proxyPassword: ""
noProxy: example.com,::1,localhost,127.0.0.1
verifyPeerSsl: true
verifyHostSsl: true
sslCertPath: ""
sslKeyPath: ""
supportedEngines:
- llama-cpp
- onnxruntime
- tensorrt-llm
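
Until CLI coverage is complete, any of these values can also be changed by editing the file directly. Here is a small Python sketch that bumps apiServerPort, again assuming PyYAML. Note that Cortex appears to update this file itself (e.g., the checkedForUpdateAt field), so stopping the server before editing avoids your change being overwritten:

from pathlib import Path

import yaml  # pip install pyyaml

config_path = Path.home() / ".cortexrc"
config = yaml.safe_load(config_path.read_text())

# Change the API server port, then write the file back.
# Stop the Cortex server first so it does not overwrite the edit.
config["apiServerPort"] = 39282
config_path.write_text(yaml.safe_dump(config))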