Selecting Models
Choose the right models to maximize your earnings
How to Earn More Points
Points are updated in real time on the Dria Edge AI Dashboard.
Check Active Models
Visit dria.co/edge-ai and check all the models listed under “Tasks Completed by Models (7 Days)” and the live “Data Generation Logs”. The currently supported models are also listed below.
Identify High-Demand Models
From these two sections, identify which models have received the most tasks recently.
Test Your Hardware
For locally hosted models, run `dkn-compute-launcher measure` to test them on your device.
Check Performance
If a model achieves an Eval TPS score higher than 15, your device can likely run that model effectively for the network.
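This rule of thumb can be sketched as a simple threshold check. A minimal illustration, assuming the 15 Eval TPS guideline from the step above (the function name and structure here are ours, not part of the launcher):

```python
# Rule of thumb from the step above: a device can likely run a model
# for the network if its measured Eval TPS exceeds 15.
EVAL_TPS_THRESHOLD = 15.0

def can_run_model(eval_tps: float, threshold: float = EVAL_TPS_THRESHOLD) -> bool:
    """Return True if the measured Eval TPS clears the threshold."""
    return eval_tps > threshold

# Illustrative readings from `dkn-compute-launcher measure`:
print(can_run_model(22.4))  # True  -> worth running this model
print(can_run_model(9.1))   # False -> pick a smaller model instead
```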
Start Running the Model
Configure your settings with `dkn-compute-launcher settings` to run high-demand models that your hardware can support.
Supported Models
Here is the list of currently supported models you can run.
API-Based Models
These models do not require local hardware testing as their performance depends on the API provider. You will need valid API keys to run them.
- Claude 3.7 Sonnet
- Claude 3.5 Sonnet
- Gemini 2.5 Pro Experimental
- Gemini 2.0 Flash
- GPT-4o-mini
- GPT-4o
Locally-Hosted Models (Ollama)
You should test these models on your hardware using the `dkn-compute-launcher measure` command to ensure they meet the performance requirements. The model names for use with the launcher are:
- `gemma3:4b`
- `gemma3:12b`
- `gemma3:27b`
- `llama3.3:70b-instruct`
- `llama3.1:8b-instruct`
- `llama3.2:1b-instruct`
- `mistral-nemo:12b`
API-based models (like those from Gemini, OpenAI, and Claude) do not require local measurement with the `measure` command, as their performance depends on the API provider, not your local hardware. You only need to measure locally hosted Ollama models.
Changing Models
Run `dkn-compute-launcher settings` and select models from the menu.
What is TPS?
TPS stands for Tokens Per Second. It’s a measure of how fast the AI model can process text. A higher TPS generally means better performance. For Dria, the Eval TPS measured by the launcher is the key metric for local models.
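Concretely, TPS is just the number of tokens generated divided by the elapsed wall-clock seconds. A minimal sketch with illustrative numbers:

```python
# TPS (tokens per second) = tokens generated / elapsed seconds.
def tokens_per_second(tokens_generated: int, elapsed_seconds: float) -> float:
    """Compute a TPS figure from a generation run."""
    if elapsed_seconds <= 0:
        raise ValueError("elapsed_seconds must be positive")
    return tokens_generated / elapsed_seconds

# Example: 480 tokens generated in 30 seconds gives 16.0 TPS,
# which clears the 15 Eval TPS guideline mentioned above.
print(tokens_per_second(480, 30.0))  # 16.0
```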
Hardware Performance Benchmarks
Below are some performance benchmarks for running supported Ollama models on various cloud GPU configurations.
In addition to these locally hosted models, you can run models from any API-based provider (Gemini, OpenRouter, OpenAI) regardless of your local hardware, though you will need valid API keys.