How to Use the Ollama VPS Template

Getting started with the Ollama VPS template at Hostinger


Ollama is a versatile platform designed for running and fine-tuning machine learning models, including advanced language models like Llama3. The Ubuntu 24.04 with Ollama VPS template on Hostinger comes pre-installed with Ollama, the Llama3 model, and Open WebUI, providing an efficient way to manage and run these models. This guide will walk you through accessing and setting up Ollama.

If you don't have a VPS yet, check the available options here: VPS Hosting 🚀

Accessing Ollama

Open your web browser and navigate to:


Replace [your-vps-ip] with the IP address of your VPS.

The first time you access it, you'll be prompted to create an account:

These will be your login credentials for subsequent access.
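If the page doesn't load, you can check from an SSH session that the Ollama server itself is up. A minimal sketch, assuming root SSH access and Ollama's default API port, 11434:

```shell
# Connect to the VPS (replace with your actual IP)
ssh root@[your-vps-ip]

# Ask the Ollama server for its version over its default local API port (11434)
curl -s http://localhost:11434/api/version

# List the models already installed (the template ships with llama3)
ollama list
```

If `curl` returns a JSON version string and `ollama list` shows `llama3`, the backend is healthy and the issue is limited to reaching Open WebUI from your browser.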

Getting to Know Open WebUI

Once logged in, explore the Open WebUI dashboard, where you can:

  • Monitor active models

  • Upload new datasets and models

  • Track model training and inference tasks

The pre-installed Llama3 model can be fine-tuned and managed through the interface. You can also add more models in the settings by clicking on the gear icon and selecting Models:
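Besides the settings dialog, additional models can be pulled directly with the `ollama` CLI over SSH; anything downloaded this way also appears in Open WebUI. A short sketch, using `mistral` purely as an example model name:

```shell
# Download an additional model from the Ollama library
ollama pull mistral

# Confirm it is now installed alongside llama3
ollama list
```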

Running Inference with Llama3

You can use the pre-trained Llama3 model for inference directly through the Open WebUI interface: input custom prompts or datasets to see how the model responds.

Adjust the hyperparameters and experiment with fine-tuning on custom data to improve performance:
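The same inference is also available programmatically: Ollama exposes a REST API on port 11434 (its default). A sketch of a one-shot generation request, with `temperature` shown as one example of an adjustable option:

```shell
# Send a single generation request to the local Ollama API.
# "stream": false returns one complete JSON object instead of a token stream.
curl -s http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Explain what a VPS is in one sentence.",
  "stream": false,
  "options": { "temperature": 0.7 }
}'
```

The response includes the generated text in the `response` field, plus timing fields you can use to compare settings.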

Note that response speed depends heavily on the specific language model used and the number of CPU cores available; larger models with more parameters require more computational resources. If you plan to expand your machine-learning project, consider upgrading your VPS to get more CPU cores for optimal performance.
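To see how many cores your current plan provides, you can check directly on the VPS with standard Linux tools (nothing template-specific):

```shell
# Number of CPU cores available to the system
nproc

# More detail: CPU model, total core count, and threads per core
lscpu | grep -E '^(Model name|CPU\(s\)|Thread)'
```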

Additional Resources

  • For more detailed information and advanced configurations, refer to the official Ollama documentation
