#embed fails when using default embedding model #141

Closed · oxaroky02 opened this issue Apr 28, 2025 · 2 comments

Comments

@oxaroky02

I'm working with Ollama via its OpenAI-compatible API. Essentially, #embed doesn't apply the default embedding model, which leads to a runtime error.

See end of post for suggested fix.

Using main

My Gemfile has the following:

gem 'ruby_llm', git: 'https://github.com/crmne/ruby_llm.git', branch: 'main'

Test program

require 'ruby_llm'

RubyLLM.configure do |config|
  config.openai_api_base = 'http://localhost:11434/v1'
  config.openai_api_key = 'ollama'
  config.default_embedding_model = 'nomic-embed-text'
end

# Note: no model: argument here, so the configured default_embedding_model should apply
embedding = RubyLLM.embed("Ruby is a programmer's best friend",
  provider: 'openai', assume_model_exists: true)

vector = embedding.vectors
puts "Vector dimension: #{vector.length}" 

Runtime error

xxxxxxxxxxx/vendor/bundle/ruby/3.4.0/bundler/gems/ruby_llm-347e630e9c9b/lib/ruby_llm/embedding.rb:26:in 'RubyLLM::Embedding.embed': undefined method 'connection' for an instance of String (NoMethodError)

      connection = context ? context.connection_for(provider) : provider.connection(config)
                                                                        ^^^^^^^^^^^
	from xxxxxxxxxxx/vendor/bundle/ruby/3.4.0/bundler/gems/ruby_llm-347e630e9c9b/lib/ruby_llm.rb:45:in 'RubyLLM.embed'
	from app/rubyllm/embed.rb:10:in '<main>'
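
The String in the error is the provider: 'openai' argument: since no model: is passed (and the configured default is never applied), Models.resolve is skipped and the raw string reaches provider.connection(config). In the meantime, passing the model explicitly avoids the failing path; a sketch (untested, but it mirrors how the existing specs call #embed):

# Workaround sketch: with an explicit model, Models.resolve runs and turns the
# 'openai' string into a provider object before .connection is called.
embedding = RubyLLM.embed(
  "Ruby is a programmer's best friend",
  model: 'nomic-embed-text',
  provider: 'openai',
  assume_model_exists: true
)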

Possible Fix

      config = context&.config || RubyLLM.config
      model ||= config.default_embedding_model   # <--- this is a quick fix
      model, provider = Models.resolve(model, provider: provider, assume_exists: assume_model_exists) if model
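
For context, here's roughly where that lands in RubyLLM::Embedding.embed. Only the three lines above plus the connection line from the backtrace come from the actual code; the method signature and the rest are my assumptions about the surrounding implementation:

# Sketch only -- signature and trailing lines are assumed, not copied from the gem.
def self.embed(text, model: nil, provider: nil, context: nil, assume_model_exists: false)
  config = context&.config || RubyLLM.config
  model ||= config.default_embedding_model # fall back to the configured default
  model, provider = Models.resolve(model, provider: provider, assume_exists: assume_model_exists) if model

  # With the fallback in place, Models.resolve always runs, so `provider` is a
  # provider object here instead of the raw 'openai' string from the caller.
  connection = context ? context.connection_for(provider) : provider.connection(config)
  # ... rest of the method unchanged
end
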
@oxaroky02 (Author)

Hola @crmne, if you like I can submit a PR for the quick fix.

I took a quick look at the test specs, and all the current #embed tests specify the model explicitly. I can add tests if you like, but one caveat: I have zero SaaS LLM API keys, all my model use is local, and I've never used VCR. 😄
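
If it helps, something along these lines is what I had in mind; the file name, cassette name, and model are guesses based on a typical RSpec + VCR setup rather than the project's actual test conventions:

# Hypothetical spec, not taken from the repo; cassette name and model are placeholders.
RSpec.describe RubyLLM::Embedding do
  it 'falls back to config.default_embedding_model when no model is given' do
    RubyLLM.configure { |config| config.default_embedding_model = 'text-embedding-3-small' }

    VCR.use_cassette('embeddings/default_model') do
      embedding = RubyLLM.embed("Ruby is a programmer's best friend")
      expect(embedding.vectors).to be_an(Array)
    end
  end
end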

crmne closed this as completed in b4cd7d0 on May 6, 2025
@oxaroky02 (Author)

Thank you, @crmne!!

crmne added a commit that referenced this issue May 6, 2025