Swift examples

Follow these examples if you're using the AIProxySwift package.

AIProxySwift is a young project. It includes a small client for OpenAI that routes all requests through AIProxy, so you can add AI to your apps without building your own backend. Three levels of security are applied to keep your API key secure and your AI bill predictable: 1) certificate pinning, 2) DeviceCheck verification, and 3) split key encryption.
Step 1

Add the AIProxySwift package to your Xcode project

  1. Open your Xcode project
  2. Select File > Add Package Dependencies
  3. Paste github.com/lzell/aiproxyswift into the package URL bar
  4. Click Add Package
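
If you're adding AIProxySwift to a Swift package rather than an Xcode app target, you can declare the dependency in Package.swift instead. The sketch below tracks the main branch and assumes the product name AIProxy (taken from the import AIProxy statement used throughout these examples); pin to a release tag once you know which version you want.

// swift-tools-version:5.9
// Package.swift (sketch)
import PackageDescription

let package = Package(
    name: "MyApp",  // hypothetical package name
    platforms: [.iOS(.v15), .macOS(.v13)],
    dependencies: [
        // Track main, or pin to a release tag once you know which version you want
        .package(url: "https://github.com/lzell/aiproxyswift", branch: "main")
    ],
    targets: [
        .target(
            name: "MyApp",
            dependencies: [
                // The product name "AIProxy" is assumed from the `import AIProxy`
                // statement used in the examples below
                .product(name: "AIProxy", package: "aiproxyswift")
            ]
        )
    ]
)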
Step 2

Initialize

Import AIProxy and initialize using the following code:

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "hardcode_partial_key_here",
    serviceURL: "hardcode_service_url_here"
)
Note to existing customers: if you previously used AIProxy.swift from our dashboard, or integrated with SwiftOpenAI, you will find that we initialize the AIProxy service slightly differently here. We no longer accept a deviceCheckBypass argument in the service initializer, because it was too easy to accidentally leak the constant. Instead, you add the DeviceCheck bypass as an environment variable. Please follow the steps in the next section to add the environment variable to your project.
Step 3

Setup DeviceCheck

Add the AIPROXY_DEVICE_CHECK_BYPASS env variable to your Xcode project:

  • Type cmd-shift-comma to open up the "Edit Schemes" menu
  • Select Run in the sidebar
  • In the "Environment Variables" section (not the "Arguments Passed on Launch" section), add an env variable named AIPROXY_DEVICE_CHECK_BYPASS with the value we provided you in the AIProxy dashboard.
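
The bypass is read from the environment at runtime, so a quick sanity check while debugging (an optional sketch, not part of the AIProxy API; assertDeviceCheckBypassIsConfigured is a hypothetical helper name) is to assert that the variable made it into your scheme:

import Foundation

// Debug-only sanity check (optional): confirm the scheme's environment
// variable is visible to the app when running from Xcode. Call this from
// somewhere early, e.g. your App initializer, and delete it once verified.
func assertDeviceCheckBypassIsConfigured() {
    #if DEBUG
    assert(
        ProcessInfo.processInfo.environment["AIPROXY_DEVICE_CHECK_BYPASS"] != nil,
        "AIPROXY_DEVICE_CHECK_BYPASS is not set in the current scheme"
    )
    #endif
}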

Get a chat completion from OpenAI:

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "hardcode_partial_key_here",
    serviceURL: "hardcode_service_url_here"
)
do {
    let response = try await openAIService.chatCompletionRequest(body: .init(
        model: "gpt-4o",
        messages: [.init(role: "system", content: .text("hello world"))]
    ))
    print(response.choices.first?.message.content)
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}

Send a multi-modal chat completion request to OpenAI:

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "hardcode_partial_key_here",
    serviceURL: "hardcode_service_url_here"
)
// Get a local URL of your image; see OpenAIServiceTests.swift for an example
let imageURL = URL(fileURLWithPath: "path/to/your/image.png")  // placeholder path
do {
    let response = try await openAIService.chatCompletionRequest(body: .init(
        model: "gpt-4o",
        messages: [
            .init(
                role: "system",
                content: .text("Tell me what you see")
            ),
            .init(
                role: "user",
                content: .parts(
                    [
                        .text("What do you see?"),
                        .imageURL(imageURL)
                    ]
                )
            )
        ]
    ))
    print(response.choices.first?.message.content)
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}

How to ensure OpenAI returns JSON as the chat message content

Use responseFormat and specify in the prompt that OpenAI should return JSON only:

import AIProxy

let openAIService = AIProxy.openAIService(
    partialKey: "hardcode_partial_key_here",
    serviceURL: "hardcode_service_url_here"
)
do {
    let response = try await openAIService.chatCompletionRequest(body: .init(
        model: "gpt-4o",
        messages: [
            .init(
                role: "system",
                content: .text("Return valid JSON only")
            ),
            .init(
                role: "user",
                content: .text("Return alice and bob in a list of names")
            )
        ],
        responseFormat: .type("json_object")
    ))
    print(response.choices.first?.message.content)
} catch AIProxyError.unsuccessfulRequest(let statusCode, let responseBody) {
    print("Received non-200 status code: \(statusCode) with response body: \(responseBody)")
} catch {
    print(error.localizedDescription)
}
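
Once the request above succeeds, the JSON arrives as a plain string in the message content (as the print statements above suggest). Here is a minimal sketch of parsing it inside the do block, assuming the model followed the prompt; NamesPayload is a hypothetical type that you should adjust to whatever JSON shape you asked for:

import Foundation

struct NamesPayload: Decodable {
    let names: [String]
}

// Inside the `do` block above, after `response` is available:
if let content = response.choices.first?.message.content,
   let data = content.data(using: .utf8),
   let payload = try? JSONDecoder().decode(NamesPayload.self, from: data) {
    print(payload.names)  // e.g. ["alice", "bob"]
}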

Specify your own clientID to annotate requests

If your app already has client or user IDs, pass one here so requests can be attributed to a specific user or installation:

let openAIService = AIProxy.openAIService(
    partialKey: "hardcode_partial_key_here",
    serviceURL: "hardcode_service_url_here",
    clientID: "hardcode_client_id_here"
)
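
If your app doesn't already have an identifier to pass, one approach (a sketch, not an AIProxy requirement; stableClientID and the defaults key are hypothetical) is to persist a UUID on first launch and reuse it:

import Foundation
import AIProxy

// Hypothetical helper: persist a per-install identifier and reuse it.
// Any stable string your app already has (e.g. a user ID) works just as well.
func stableClientID() -> String {
    let key = "aiproxy.clientID"  // hypothetical UserDefaults key
    if let existing = UserDefaults.standard.string(forKey: key) {
        return existing
    }
    let fresh = UUID().uuidString
    UserDefaults.standard.set(fresh, forKey: key)
    return fresh
}

let openAIService = AIProxy.openAIService(
    partialKey: "hardcode_partial_key_here",
    serviceURL: "hardcode_service_url_here",
    clientID: stableClientID()
)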

Follow these examples if you're using the SwiftOpenAI package.

Step 1

Add the SwiftOpenAI package to your Xcode project

  1. Open your Xcode project
  2. Select File > Add Package Dependencies
  3. Paste https://github.com/jamesrochabrun/SwiftOpenAI into the package URL bar
  4. Click Add Package

SwiftOpenAI 3.4 contains security improvements for AIProxy customers. Please upgrade by selecting your project in the Xcode navigator, opening the Package Dependencies tab, and updating SwiftOpenAI to 3.4 or later (File > Packages > Update to Latest Package Versions).

Step 2

Initialize

Import SwiftOpenAI and initialize using the following code:

import SwiftOpenAI
                
let service = OpenAIServiceFactory.service(
    aiproxyPartialKey: "your_partial_key_goes_here",
    aiproxyServiceURL: "your_service_url_goes_here"
)
Step 3

Setup DeviceCheck

Add the AIPROXY_DEVICE_CHECK_BYPASS env variable to your Xcode project:

  • Type cmd-shift-comma to open up the "Edit Schemes" menu
  • Select Run in the sidebar
  • In the "Environment Variables" section (not the "Arguments Passed on Launch" section), add an env variable named AIPROXY_DEVICE_CHECK_BYPASS with the value we provided you in the AIProxy dashboard.

Chat completion example

import SwiftUI
import SwiftOpenAI

// Initialize service
// The "aiproxyPartialKey" is provided to you on the AIProxy dashboard.
let service = OpenAIServiceFactory.service(
    aiproxyPartialKey: "your_partial_key_goes_here",
    aiproxyServiceURL: "your_service_url_goes_here"
)

struct ChatCompletionView: View {

    @State var jokeText: String = ""

    var body: some View {
        VStack {
            Text(jokeText)
            Button("Tell a joke") { tellJoke() }
        }
    }

    func tellJoke() {
        Task {
            jokeText = ""
            let prompt = "Tell me a joke"
            let parameters = ChatCompletionParameters(messages: [.init(role: .user, content: .text(prompt))], model: .gpt4o)
            let stream = try await service.startStreamedChat(parameters: parameters)
            for try await result in stream {
                guard let choice = result.choices.first,
                      let content = choice.delta.content else {
                    return
                }
                jokeText += content
            }
        }
    }
}

#Preview {
    ChatCompletionView()
}

Translation example

import SwiftUI
import SwiftOpenAI

// Initialize service
// The "aiproxyPartialKey" is provided to you on the AIProxy dashboard.
let service = OpenAIServiceFactory.service(
    aiproxyPartialKey: "your_partial_key_goes_here",
    aiproxyServiceURL: "your_service_url_goes_here"
)

struct TranslationView: View {

    private let prompt = "The response is an exact translation from english to spanish. You don't respond with any english."
    @State var translatedText = ""

    var body: some View {
        VStack {
            Text(translatedText)
            Button("Translate") {
                Task {
                    await translate()
                }
            }
        }
    }

    func translate() async {
        let parameters = ChatCompletionParameters(
            messages: [
                .init(role: .system, content: .text(prompt)),
                .init(role: .user, content: .text("what time is dinner?")),
            ],
            model: .gpt4o
        )
        do {
            let choices = try await service.startChat(parameters: parameters).choices
            let message = choices.compactMap(\.message.content)
            translatedText = message.first ?? ""
        } catch {
            print("Could not translate")
        }
    }
}

#Preview {
    TranslationView()
}

Image generation example

import SwiftUI
import SwiftOpenAI

// Initialize service
// The "aiproxyPartialKey" is provided to you on the AIProxy dashboard.
let service = OpenAIServiceFactory.service(
    aiproxyPartialKey: "your_partial_key_goes_here",
    aiproxyServiceURL: "your_service_url_goes_here"
)

struct ImageGenerationView: View {

    @State private var photoURL: URL = URL(string: "https://picsum.photos/256")!
    @State private var loading: Bool = false

    var body: some View {
        VStack(spacing: 24) {
            ZStack {
                AsyncImage(url: photoURL) { phase in
                    if let image = phase.image {
                        // Display the loaded image
                        image
                            .resizable()
                            .aspectRatio(contentMode: .fit)
                            .cornerRadius(14)
                    } else if phase.error != nil {
                        // Display a placeholder when loading failed
                        Image(systemName: "questionmark.diamond")
                            .imageScale(.large)
                    } else {
                        // Display a placeholder while loading
                        ProgressView()
                    }
                }
                if loading {
                    ProgressView()
                        .padding(24)
                        .background(.ultraThinMaterial)
                        .controlSize(.large)
                        .cornerRadius(14)
                }
            }
            HStack {
                Button("Generate Image") {
                    processChat(prompt: "a cactus wearing a sombrero")
                }
                .buttonStyle(.borderedProminent)
            }
        }
        .padding()
    }

    func processChat(prompt: String) {
        // Show loading indicator
        loading = true
        // Create image
        let createParameters = ImageCreateParameters(prompt: prompt, model: .dalle3(.largeSquare))
        Task {
            let result = try await service.createImages(parameters: createParameters).data.map(\.url)
            photoURL = result[0]!
            loading = false
        }
    }
}

#Preview {
    ImageGenerationView()
}

For more examples, visit the resources page.

For more information about the SwiftOpenAI package, visit the GitHub documentation.