Spring AI - Ollama

Overview

Spring AI - Ollama (Chat Model)

GitHub: https://github.com/gitorko/project09

Spring AI

Ollama is a platform that lets developers run large language models (LLMs) locally. Some well-known LLMs include (a quick command-line example follows the list):

  1. GPT-4
  2. GPT-3
  3. LLaMA (Large Language Model Meta AI) - a series of models released by Meta (Facebook). It includes models like LLaMA-2 and LLaMA-3, available in multiple sizes.
  4. Alpaca
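To make "run locally" concrete, here is a minimal sketch of pulling and prompting a model with the ollama CLI (this assumes the CLI is installed natively on the host; the Setup section below runs it through Docker instead):

```bash
# Download the Llama 3 model weights to the local machine
ollama pull llama3
# Ask a one-off question directly from the command line
ollama run llama3 "Name one famous pirate."
```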

Code

```java
package com.demo.project09;

import java.util.Scanner;

import org.springframework.ai.chat.model.ChatModel;
import org.springframework.ai.chat.model.ChatResponse;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class Main {

    public static void main(String[] args) {
        SpringApplication.run(Main.class, args);
    }

    @Bean
    public CommandLineRunner runner(ChatModel chatModel) {
        // The ChatModel bean is auto-configured by the Spring AI Ollama starter
        return args -> {
            System.out.println("Starting..");
            ChatResponse response = chatModel.call(
                    new Prompt("Generate the names of 5 famous pirates."));
            System.out.println(response.getResults());

            // Simple loop: read a question from the console and send it to the model
            Scanner scanner = new Scanner(System.in);
            while (true) {
                System.out.print("Ask Question: ");
                String inputString = scanner.nextLine();
                System.out.println("You Asked: " + inputString);
                response = chatModel.call(new Prompt(inputString));
                System.out.println("LLM Response: ");
                System.out.println(response.getResults());
            }
        };
    }
}
```
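The listing above calls the low-level ChatModel directly. Spring AI also ships a fluent ChatClient API on top of it; the sketch below (my own addition, with a hypothetical ChatClientExample class, assuming the auto-configured ChatClient.Builder provided by the starter) shows the same prompt in that style. The Ollama starter points at http://localhost:11434 by default, and the model can typically be selected with the spring.ai.ollama.chat.options.model property.

```java
package com.demo.project09;

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ChatClientExample {

    @Bean
    public CommandLineRunner chatClientRunner(ChatClient.Builder builder) {
        // ChatClient.Builder is auto-configured on top of the Ollama-backed ChatModel
        ChatClient chatClient = builder.build();
        return args -> {
            // The fluent API returns the response text directly
            String answer = chatClient.prompt()
                    .user("Generate the names of 5 famous pirates.")
                    .call()
                    .content();
            System.out.println(answer);
        };
    }
}
```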

Setup

# project09

Spring AI with Ollama

### Version

Check version

```bash
$ java --version
openjdk 21
```

### Ollama

Start the ollama container

```bash
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
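Once the container is up, a quick sanity check (my own addition; it only assumes port 11434 is reachable from the host) confirms the server is responding:

```bash
# Ollama answers with a plain "Ollama is running" message on its root endpoint
curl http://localhost:11434
```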
Configure certificates so the container trusts the hosts that serve model downloads (the Ollama registry and its Cloudflare-backed storage):

```bash
openssl s_client -showcerts -connect registry.ollama.ai:443 </dev/null | openssl x509 -outform PEM > ollama-registry.crt
openssl s_client -showcerts -connect developers.cloudflare.com:443 </dev/null | openssl x509 -outform PEM > cloudflare.crt
openssl s_client -showcerts -connect r2.cloudflarestorage.com:443 </dev/null | openssl x509 -outform PEM > cloudflarestorage.crt

docker cp ollama-registry.crt ollama:/etc/ssl/certs/ollama-registry.crt
docker cp cloudflare.crt ollama:/etc/ssl/certs/cloudflare.crt
docker cp cloudflarestorage.crt ollama:/etc/ssl/certs/cloudflarestorage.crt

docker exec -it ollama bash
apt-get update && apt-get install -y ca-certificates
update-ca-certificates
exit
docker restart ollama
```
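Optionally, confirm that the certificates landed in the container's certificate directory (this check is my own addition, not part of the original README):

```bash
docker exec ollama ls /etc/ssl/certs | grep -i -e ollama -e cloudflare
```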
Run the llama3 model in Ollama

Model: Meta Llama 3 (4.7 GB size)

```bash
docker exec -it ollama ollama run llama3
```

Model: Meta Llama 3 70B (40 GB size)

```bash
docker exec -it ollama ollama run llama3:70b
```

Model: Meta Llama 3.3 70B (43 GB size)

```bash
docker exec -it ollama ollama run llama3.3
```
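With a model pulled, the Ollama REST API can be exercised directly before starting the Spring application (the model name below assumes llama3, matching the first option above):

```bash
# Send a one-off, non-streaming prompt to the local Ollama server
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Name one famous pirate.",
  "stream": false
}'
```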

References

https://spring.io/projects/spring-ai/

https://docs.spring.io/spring-ai/reference/api/chat/ollama-chat.html

https://hub.docker.com/r/ollama/ollama

https://ollama.com/
