
I have two files: file1.py, which loads a 1 GB ML model, and file2.py, which calls the get_vec() method from file1 and receives vectors in return. The ML model is loaded every time file1's get_vec() method is called, and loading the model from disk is where most of the time goes (around 10 s).

I want to tell file1 somehow not to reload the model every time, but to reuse the model loaded by earlier calls.

Sample code is as follows:

# File1.py

import spacy
nlp = spacy.load('model')

def get_vec(post):
    doc = nlp(post)
    return doc.vector

# File2.py

from File1 import get_vec

df['vec'] = df['text'].apply(lambda x: get_vec(x))

So here, it is taking 10 to 12 seconds on each run. This may look like a small piece of code, but it is part of a large project and I cannot put both parts in the same file.

Update 1:

I have done some research and learned that I can use Redis to cache the model the first time it runs, and thereafter read the model from the cache directly. For testing, I tried it with Redis as follows:

import spacy
import redis

nlp = spacy.load('en_core_web_lg')
r = redis.Redis(host = 'localhost', port = 6379, db = 0)
r.set('nlp', nlp)

It throws an error:

DataError: Invalid input of type: 'English'. Convert to a bytes, string, int or float first.

It seems type(nlp) is English and it needs to be converted to a suitable format. So I tried using pickle to convert it, but pickle takes a lot of time encoding and decoding. Is there any way to store this in Redis?
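Redis can only store bytes, strings, or numbers, so the pipeline object has to be serialized before set(). A minimal sketch of the pattern, using an in-memory dict as a stand-in for the Redis connection and a hypothetical StubModel in place of the real English object (spaCy pipelines also expose to_bytes()/from_bytes() for serialization); note that deserializing a ~1 GB model is itself slow, so caching the bytes in Redis does not remove the load cost:

```python
import pickle

class StubModel:
    """Hypothetical stand-in for the spaCy English pipeline object."""
    def __init__(self, name):
        self.name = name

cache = {}  # stand-in for the Redis connection; Redis values must be bytes/str/num
nlp = StubModel("en_core_web_lg")

cache["nlp"] = pickle.dumps(nlp)       # serialize to bytes before set()
restored = pickle.loads(cache["nlp"])  # deserialize after get()
print(restored.name)                   # en_core_web_lg
```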

Can anybody suggest how I can make this faster? Thanks.


5 Answers


Here's how to do it.

Step 1) Create a function in Python and load your model in that function:

from keras.applications import ResNet50  # or: from tensorflow.keras.applications import ResNet50

model = None

def load_model():
    global model
    model = ResNet50(weights="imagenet")

If you observe carefully, first I assigned the variable model to None. Then, inside the load_model function, I loaded the model.

I also made sure the variable model is global so that it can be accessed from outside this function. The intuition here is that we load the model object into a global variable, so we can access that variable anywhere within the code.

Now that we have our tools ready (i.e., we can access the model from anywhere within this code), let's freeze this model in your computer's RAM. This is done by:

if __name__ == "__main__":
    print("* Loading Keras model and starting Flask server... "
          "please wait until the server has fully started")
    load_model()
    app.run()

Now, what's the use of freezing the model in RAM without using it? So, to use it, I handle a POST request in Flask:

@app.route("/predict", methods=["POST"])
def predict():
    if flask.request.method == "POST":
        output = model.predict(data)  # what you want to do with the frozen model goes here

So, using this trick, you can freeze the model in RAM, access it using a global variable, and then use it in your code.


1 Comment

This seems the right way to do it. It means I have to host my model in Flask to make it work fast. Thanks for the suggestion.

Use Flask. See how this user implemented it here: Simple Flask app using spaCy NLP hangs intermittently

Send your data frame's data to Flask through an HTTP request. Or you may save it as a file and send the file to the server.

Just load the model to a global variable and use the variable in the app code.
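Sending the whole text column in one request amortizes the per-request overhead. A sketch of the payload shape, assuming a hypothetical /vectorize endpoint (json stands in for the HTTP layer here, and len() for the real nlp(t).vector call):

```python
import json

# Client side: pack the whole text column into one request body.
texts = ["first post", "second post", "third post"]
payload = json.dumps({"texts": texts})  # body for a hypothetical POST /vectorize

# Server side (inside the Flask view): decode once, vectorize in batch.
received = json.loads(payload)["texts"]
vectors = [len(t) for t in received]    # stand-in for nlp(t).vector
print(vectors)                          # [10, 11, 10]
```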



If all your syntax is correct, then this should not load the model more than once (only in the constructor of the ml class).

# File1.py

import spacy
class ml:
    def __init__(self, model_path):
        self.nlp = spacy.load(model_path)  # e.g. 'model'

    def get_vec(self, post):
        return self.nlp(post).vector


# File2.py

from File1 import ml

my_ml = ml('model') # pass model path

df['vec'] = df['text'].apply(lambda x: my_ml.get_vec(x))

2 Comments

Thanks for the comment. I tried this code, but the speed is not improving. Since I am calling file2.py from the terminal, I think the variables are destroyed after each execution. I have updated the question; please see.
This has to do with how spaCy internally loads the model and manages caches, and your question should be completely rephrased.
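The comment above points at the real cost: each fresh run of python file2.py starts a new process, so the import-time load is paid again. The fix is a long-lived process that loads once and serves many requests (the Flask answer is one way to get that). A minimal sketch of the idea, with hypothetical stand-ins for the model and get_vec:

```python
def load_model():
    return {"loaded": True}  # stand-in for spacy.load('model')

def worker(requests):
    """Serve many texts from one long-lived process: load once, answer many."""
    model = load_model()     # paid once per process, not once per script run
    for post in requests:
        yield len(post)      # stand-in for get_vec(post)

results = list(worker(["hello", "hi there"]))
print(results)  # [5, 8]
```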

Your problem is not clear to me. The line nlp = spacy.load('model') is executed only once in the given code, at import time. Since each call to get_vec does not reload the model, if it still takes 10-12 seconds per call to get_vec, then nothing can be done in your case.

1 Comment

Thanks for the suggestion. get_vec() takes only 12 ms on average.

Save the model once it is trained.
And start using Python as an object-oriented programming language, not a scripting language.

