OiO.lk Community platform
GPT4All failed to load model - invalid model file

Thread starter: led8 (Guest)
I installed GPT4All using the GUI-based installer for Mac.

Then I downloaded the required LLM models and took note of the path they're installed to.

Now I'm trying to load the models from a Python application using Streamlit.

Here is my app.py file:

Code:
# Import app framework
import streamlit as st

# Import dependencies
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.llms import GPT4All

# Path to weights
PATH = "/Users/toto/Library/Application Support/nomic.ai/GPT4All/GPT4All-13B-snoozy.ggmlv3.q4_0.bin"

# Instance of llm
llm = GPT4All(model=PATH, verbose=True)

# Prompt template
prompt = PromptTemplate(input_variables=['question'],
                        template="""
                        Question: {question}
                        
                        Answer: Let's think step by step  
                        """)

# LLM chain
chain = LLMChain(prompt=prompt, llm=llm)

# Title
st.title('🦜🔗 GPT For Y\'all')

# Prompt text box (renamed so it doesn't shadow the PromptTemplate above)
user_prompt = st.text_input('Enter your prompt here!')

# If the user submitted a prompt, run it through the chain
if user_prompt:
    # Pass the prompt to the LLM chain
    response = chain.run(user_prompt)

    st.write(response)

When I run streamlit run app.py, I get the following message:

Code:
  You can now view your Streamlit app in your browser.

  Local URL: http://localhost:8501
  Network URL: http://10.81.128.79:8501

llama_model_load: loading model from '/Users/toto/Library/Application Support/nomic.ai/GPT4All/GPT4All-13B-snoozy.ggmlv3.q4_0.bin' - please wait ...
llama_model_load: invalid model file '/Users/toto/Library/Application Support/nomic.ai/GPT4All/GPT4All-13B-snoozy.ggmlv3.q4_0.bin' (unsupported format version 3, expected 1)
llama_init_from_file: failed to load model
[1]    3908 segmentation fault  streamlit run app.py

What's weird is that it works correctly from the GPT4All desktop app, but not from the Python code.
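The error complains about "unsupported format version 3, expected 1", which suggests the loader LangChain pulls in is older than the one bundled with the desktop app, rather than the file being corrupt. As a quick diagnostic, a short script can read the model file's header and confirm which GGML container and version it actually uses. This is a sketch: the magic values come from llama.cpp's GGML/GGMF/GGJT constants, and the temp-file demo stands in for the real .bin path from the error message.

```python
import os
import struct
import tempfile

# GGML container magics (little-endian uint32 on disk), per llama.cpp
MAGICS = {
    0x67676D6C: "ggml",  # original, unversioned format
    0x67676D66: "ggmf",  # versioned format
    0x67676A74: "ggjt",  # versioned, mmap-friendly; "ggmlv3" files are ggjt v3
}

def ggml_header(path):
    """Return (format_name, version) from a GGML model file header;
    unversioned 'ggml' files report version None."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
        name = MAGICS.get(magic)
        if name is None:
            raise ValueError(f"not a GGML file (magic 0x{magic:08x})")
        if name == "ggml":
            return name, None
        (version,) = struct.unpack("<I", f.read(4))
        return name, version

# Demo on a synthetic 8-byte header; point it at the real .bin file instead
with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as tmp:
    tmp.write(struct.pack("<II", 0x67676A74, 3))  # 'ggjt' magic, version 3
result = ggml_header(tmp.name)
os.unlink(tmp.name)
print(result)  # ('ggjt', 3)
```

If the header really reports version 3, the model file itself is fine and the Python-side backend is the outdated part: it only understands format version 1. Upgrading the gpt4all/langchain Python packages, or downloading a model in a format the installed backend supports, is the usual fix, though which exact package versions are compatible is something to verify for your setup.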