
[Question]: Joint QA Summary Query Engine no openAI #12749

Closed
1 task done
Aekansh-Ak opened this issue Apr 11, 2024 · 2 comments
Labels: question (Further information is requested)

Comments

@Aekansh-Ak

Question Validation

  • I have searched both the documentation and discord for an answer.

Question

Code:

llm = HuggingFaceLLM(
    model_name=model_id,
    tokenizer_name=model_id,
    query_wrapper_prompt=PromptTemplate(PROMPT),
    context_window=3900,
    max_new_tokens=256,
    model_kwargs={"torch_dtype": torch.float16},
    # tokenizer_kwargs={},
    generate_kwargs={"temperature": 0.3, "top_p": 0.90},
    device_map="auto",
)

Settings.llm = llm
Settings.embed_model = embeddings
Settings.context_window = 4096
Settings.num_output = 1500
Settings.chunk_overlap_ratio = 0.3

vsi = VectorStoreIndex.from_vector_store(
    vector_store=vector_store
)

query_engine_builder = QASummaryQueryEngineBuilder(
    llm=llm,
)

Error:

Traceback (most recent call last):
  File "/home/seceon/opensearch_lm_index/model2.py", line 123, in <module>
    query_engine_builder = QASummaryQueryEngineBuilder(
  File "/usr/local/lib/python3.10/site-packages/llama_index/core/composability/joint_qa_summary.py", line 77, in __init__
    self.service_context = service_context or ServiceContext.from_defaults()
  File "/usr/local/lib/python3.10/site-packages/deprecated/classic.py", line 285, in wrapper_function
    return wrapped_(*args_, **kwargs_)
  File "/usr/local/lib/python3.10/site-packages/llama_index/core/service_context.py", line 187, in from_defaults
    llm_predictor = llm_predictor or LLMPredictor(
  File "/usr/local/lib/python3.10/site-packages/llama_index/core/service_context_elements/llm_predictor.py", line 108, in __init__
    self._llm = resolve_llm(llm, callback_manager=callback_manager)
  File "/usr/local/lib/python3.10/site-packages/llama_index/core/llms/utils.py", line 48, in resolve_llm
    raise ValueError(
ValueError:

Could not load OpenAI model. If you intended to use OpenAI, please check your OPENAI_API_KEY.
Original error:
No API key found for OpenAI.
Please set either the OPENAI_API_KEY environment variable or openai.api_key prior to initialization.
API keys can be found or created at https://platform.openai.com/account/api-keys

To disable the LLM entirely, set llm=None.


This time I have defined QASummaryQueryEngineBuilder after the LLM, but I am still getting this error.

@Aekansh-Ak added the question label Apr 11, 2024

dosubot bot commented Apr 11, 2024

The error you're encountering suggests that the LlamaIndex framework is attempting to use an OpenAI model by default, which requires an API key. Since you're not using OpenAI's models, you need to explicitly specify the model you're using or disable the LLM to avoid this error.

To resolve this, ensure you're specifying your model correctly when initializing or calling functions related to the LLMPredictor. If you're using a specific model, such as a local LlamaCPP model, you should specify the model path directly, like so:

llm = resolve_llm("local:/path/to/your/model")

Make sure the path is correct and that you have the necessary dependencies installed for your model.
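Note that resolve_llm is not imported automatically; the traceback above shows it defined in llama_index.core.llms.utils, so a minimal sketch (the model path is only a placeholder) would look like:

from llama_index.core.llms.utils import resolve_llm

# Placeholder path; replace with the local model you actually want resolve_llm to load.
llm = resolve_llm("local:/path/to/your/model")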

Alternatively, if you want to disable the LLM entirely (for example, for testing purposes), you can set llm=None to use a mock model, bypassing the need for an OpenAI API key:

llm = resolve_llm(None)

This approach uses a MockLLM, allowing you to proceed without specifying an API key.

For more detailed guidance, refer to the documentation or source code of the LlamaIndex framework, particularly around the resolve_llm function, to ensure your setup aligns with the expected configuration. If the issue persists, providing more details about your setup and the steps leading to the error could help in identifying a more specific solution.
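As a further sketch, and only an assumption based on the traceback (which shows QASummaryQueryEngineBuilder falling back to ServiceContext.from_defaults() when no service_context is supplied), you could try constructing that context explicitly from the HuggingFaceLLM and embeddings already configured in the question, so the OpenAI default is never resolved:

from llama_index.core import ServiceContext
from llama_index.core.composability import QASummaryQueryEngineBuilder

# Sketch only: build the (deprecated) ServiceContext from the local LLM and embedding
# model defined earlier, so the builder does not call ServiceContext.from_defaults()
# and therefore never tries to resolve an OpenAI model.
service_context = ServiceContext.from_defaults(llm=llm, embed_model=embeddings)

query_engine_builder = QASummaryQueryEngineBuilder(
    llm=llm,
    service_context=service_context,
)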


@Aekansh-Ak (Author)

NameError: name 'resolve_llm' is not defined

@dosubot bot added the stale label Jul 16, 2024
@dosubot bot mentioned this issue Jul 17, 2024
@dosubot bot closed this as not planned (won't fix, can't repro, duplicate, stale) Jul 23, 2024
@dosubot bot removed the stale label Jul 23, 2024