Class: HTM::Configuration

Inherits: Object

HTM Configuration

HTM uses RubyLLM for multi-provider LLM support. Supported providers:

* :openai (OpenAI API)
* :anthropic (Anthropic Claude)
* :gemini (Google Gemini)
* :azure (Azure OpenAI)
* :ollama (local Ollama, the default)
* :huggingface (HuggingFace Inference API)
* :openrouter (OpenRouter)
* :bedrock (AWS Bedrock)
* :deepseek (DeepSeek)

@example Use OpenAI for both embeddings and tag extraction

HTM.configure do |config|
  config.embedding_provider = :openai
  config.embedding_model = 'text-embedding-3-small'
  config.tag_provider = :openai
  config.tag_model = 'gpt-4o-mini'
  config.openai_api_key = ENV['OPENAI_API_KEY']
end
@example Use local Ollama (the default)
HTM.configure do |config|
  config.embedding_provider = :ollama
  config.embedding_model = 'nomic-embed-text'
  config.tag_provider = :ollama
  config.tag_model = 'llama3'
  config.ollama_url = 'http://localhost:11434'
end
@example Mix providers: OpenAI embeddings, Anthropic tag extraction
HTM.configure do |config|
  config.embedding_provider = :openai
  config.embedding_model = 'text-embedding-3-small'
  config.openai_api_key = ENV['OPENAI_API_KEY']
  config.tag_provider = :anthropic
  config.tag_model = 'claude-3-haiku-20240307'
  config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
end
@example Bypass RubyLLM entirely with custom callables
HTM.configure do |config|
  config.embedding_generator = ->(text) {
    MyApp::LLMService.embed(text)  # Returns Array<Float>
  }
  config.tag_extractor = ->(text, ontology) {
    MyApp::LLMService.extract_tags(text, ontology)  # Returns Array<String>
  }
  config.logger = Rails.logger
end

Attributes

anthropic_api_key[RW]

Returns the value of attribute anthropic_api_key.

azure_api_key[RW]

Returns the value of attribute azure_api_key.

azure_api_version[RW]

Returns the value of attribute azure_api_version.

azure_endpoint[RW]

Returns the value of attribute azure_endpoint.

bedrock_access_key[RW]

Returns the value of attribute bedrock_access_key.

bedrock_region[RW]

Returns the value of attribute bedrock_region.

bedrock_secret_key[RW]

Returns the value of attribute bedrock_secret_key.

chunk_overlap[RW]

Character overlap between chunks (default: 64)

chunk_size[RW]

Maximum chunk size in characters when loading files
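The chunk_size / chunk_overlap pair controls how loaded files are split. As an illustration only (this standalone `chunk_text` helper is not HTM's implementation), overlapping character chunks can be produced like this:

```ruby
# Split text into chunks of up to `chunk_size` characters, where each
# chunk repeats the last `overlap` characters of the previous one.
def chunk_text(text, chunk_size:, overlap:)
  step = chunk_size - overlap
  chunks = []
  (0...text.length).step(step) do |start|
    chunks << text[start, chunk_size]
    break if start + chunk_size >= text.length
  end
  chunks
end

chunk_text("abcdefghij", chunk_size: 4, overlap: 2)
# => ["abcd", "cdef", "efgh", "ghij"]
```

With the documented default overlap of 64 characters, adjacent chunks share a 64-character window, which helps sentences that straddle a chunk boundary stay retrievable.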

circuit_breaker_failure_threshold[RW]

Failures before the circuit opens

circuit_breaker_half_open_max_calls[RW]

Successes to close (default: 3)

circuit_breaker_reset_timeout[RW]

Seconds before half-open (default: 60)
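These three settings describe a standard circuit-breaker state machine. The sketch below (class and attribute names are assumed for illustration; this is not HTM's internal breaker) shows how the settings interact: failures open the circuit, the reset timeout moves it to half-open, and successes close it again.

```ruby
# Minimal circuit breaker: closed -> open after `failure_threshold`
# failures; open -> half_open after `reset_timeout` seconds;
# half_open -> closed after `half_open_max_calls` successes.
class TinyBreaker
  attr_reader :state

  def initialize(failure_threshold: 5, reset_timeout: 60, half_open_max_calls: 3)
    @failure_threshold = failure_threshold
    @reset_timeout = reset_timeout
    @half_open_max_calls = half_open_max_calls
    @state = :closed
    @failures = 0
    @successes = 0
    @opened_at = nil
  end

  def call
    if @state == :open
      if Time.now - @opened_at >= @reset_timeout
        @state = :half_open   # allow trial calls through
        @successes = 0
      else
        raise "circuit open"  # fail fast without calling the provider
      end
    end
    begin
      result = yield
    rescue
      record_failure
      raise
    end
    record_success
    result
  end

  private

  def record_success
    if @state == :half_open
      @successes += 1
      if @successes >= @half_open_max_calls
        @state = :closed
        @failures = 0
      end
    else
      @failures = 0
    end
  end

  def record_failure
    @failures += 1
    if @failures >= @failure_threshold || @state == :half_open
      @state = :open
      @opened_at = Time.now
    end
  end
end
```

In HTM's case the wrapped call would be an LLM provider request, so a flapping provider stops consuming timeouts once the breaker opens.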

connection_timeout[RW]

Returns the value of attribute connection_timeout.

deepseek_api_key[RW]

Returns the value of attribute deepseek_api_key.

embedding_dimensions[RW]

Returns the value of attribute embedding_dimensions.

embedding_generator[RW]

Returns the value of attribute embedding_generator.

embedding_model[RW]

Returns the value of attribute embedding_model.

embedding_provider[RW]

Returns the value of attribute embedding_provider.

embedding_timeout[RW]

Returns the value of attribute embedding_timeout.

extract_propositions[RW]

Returns the value of attribute extract_propositions.

gemini_api_key[RW]

Returns the value of attribute gemini_api_key.

huggingface_api_key[RW]

Returns the value of attribute huggingface_api_key.

job_backend[RW]

Returns the value of attribute job_backend.

logger[RW]

Returns the value of attribute logger.

max_embedding_dimension[RW]

Maximum allowed embedding dimension

max_tag_depth[RW]

Max tag hierarchy depth (default: 4)

ollama_url[RW]

Returns the value of attribute ollama_url.

openai_api_key[RW]

OpenAI API key

openai_organization[RW]

OpenAI organization ID

openai_project[RW]

OpenAI project ID

openrouter_api_key[RW]

Returns the value of attribute openrouter_api_key.

proposition_extractor[RW]

Returns the value of attribute proposition_extractor.

proposition_model[RW]

Returns the value of attribute proposition_model.

proposition_provider[RW]

Returns the value of attribute proposition_provider.

proposition_timeout[RW]

Returns the value of attribute proposition_timeout.

relevance_access_weight[RW]

Access frequency weight (default: 0.1)

relevance_recency_half_life_hours[RW]

Decay half-life in hours (default: 168 = 1 week)

relevance_recency_weight[RW]

Temporal freshness weight (default: 0.1)

relevance_semantic_weight[RW]

Semantic similarity weight (default: 0.5; the four relevance weights must sum to 1.0)

relevance_tag_weight[RW]

Tag overlap weight (default: 0.3)
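Putting the four weights together: with the documented defaults (0.3 tag, 0.1 recency, 0.1 access, and therefore 0.5 semantic, since the weights must sum to 1.0), the score is a weighted blend of the component signals. The exact formula below is an assumption from the attribute names, not HTM's verbatim code; recency is modeled as exponential decay with the configured half-life.

```ruby
HALF_LIFE_HOURS = 168.0  # relevance_recency_half_life_hours default (1 week)

# Freshness decays by half every `half_life` hours.
def recency_score(age_hours, half_life: HALF_LIFE_HOURS)
  0.5**(age_hours / half_life)
end

# Weighted blend using the documented default weights.
def relevance(semantic:, tag:, age_hours:, access:)
  0.5 * semantic +                  # relevance_semantic_weight
  0.3 * tag +                       # relevance_tag_weight
  0.1 * recency_score(age_hours) +  # relevance_recency_weight
  0.1 * access                      # relevance_access_weight
end

recency_score(168)  # => 0.5 (a week-old memory scores half on freshness)
```

Because the weights sum to 1.0, the blended score stays in [0, 1] whenever each component does.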

tag_extractor[RW]

Returns the value of attribute tag_extractor.

tag_model[RW]

Returns the value of attribute tag_model.

tag_provider[RW]

Returns the value of attribute tag_provider.

tag_timeout[RW]

Returns the value of attribute tag_timeout.

telemetry_enabled[RW]

Enable OpenTelemetry metrics (default: false)

token_counter[RW]

Returns the value of attribute token_counter.

week_start[RW]

Returns the value of attribute week_start.

Instance Methods

configure_ruby_llm(provider = nil)

Configure RubyLLM with the appropriate provider credentials

@param provider [Symbol] The provider to configure (:openai, :anthropic, etc.)

initialize()

@return [Configuration] a new instance of Configuration

normalize_ollama_model(model_name)

Normalize Ollama model name to include tag if missing

Ollama models require a tag (e.g., :latest, :7b, :13b). If the user specifies a model without a tag, we append :latest by default.

@param model_name [String] Original model name

@return [String] Normalized model name with tag
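The normalization rule described above can be sketched as a standalone helper (the real method lives on Configuration; this version is for illustration):

```ruby
# Append :latest when the model name carries no tag; leave tagged
# names untouched.
def normalize_ollama_model(model_name)
  model_name.include?(":") ? model_name : "#{model_name}:latest"
end

normalize_ollama_model("llama3")            # => "llama3:latest"
normalize_ollama_model("llama3:8b")         # => "llama3:8b"
normalize_ollama_model("nomic-embed-text")  # => "nomic-embed-text:latest"
```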

reset_to_defaults()

Reset to default RubyLLM-based implementations

validate!()

Validate configuration