
TFMA and TF KERAS 2.0 model on pretrained model #45

Open
OrielResearchCure opened this issue May 23, 2019 · 7 comments


@OrielResearchCure

Hello all,

I am referring here to a Stack Overflow question that I published a couple of days ago: https://stackoverflow.com/questions/56248024/tensorflow-model-analysis-tfma-for-keras-model
I didn't receive any response there. There may not be many people using TFX with the TF Keras 2.0 API at the moment, so I am trying my luck here.
In general, I want to analyze a pre-trained (VGG16) model. The model was

  1. imported with TF KERAS 2.0 API
  2. saved
  3. loaded and converted to estimator using the keras to estimator API

However, the export requires all of the VGG features in TF feature format. What is the right way to extract these features?
Can anyone refer me to an example where TFX is used with a pre-trained model?

The code is available in the Stack Overflow question; if it is easier, I can copy it here - let me know.
Many thanks,
eilalan

@gowthamkpr gowthamkpr self-assigned this May 23, 2019
@mdreves
Member

mdreves commented May 23, 2019

TF2.0 is not yet supported. We are actively working on it, so check back in a few weeks.

@OrielResearchCure
Author

OrielResearchCure commented May 23, 2019

I understand. Therefore, I used

```python
estimator_model = tf.keras.estimator.model_to_estimator(new_model, model_dir=TF_MODEL_DIR)
```

The error is raised from the VGG model that is being converted:

```python
# Load the saved Keras model
new_model = keras.models.load_model(model_name)
new_model.summary()

# Convert the Keras model to an estimator
estimator_model = tf.keras.estimator.model_to_estimator(new_model, model_dir=TF_MODEL_DIR)

# The eval input receiver function for the estimator
def eval_input_receiver_1_fn():
  serialized_tf_example = tf.compat.v1.placeholder(
      dtype=tf.string, shape=[None], name='input_example_placeholder')

  receiver_tensors = {'examples': serialized_tf_example}
  validation_features_columns = [
      tf.feature_column.numeric_column("image", shape=(192, 192)),
      tf.feature_column.categorical_column_with_vocabulary_list(
          "label", ["normal_healthy", "sick"])]

  feature_spec = tf.feature_column.make_parse_example_spec(validation_features_columns)
  features = tf.io.parse_example(serialized_tf_example, feature_spec)

  return tfma.export.EvalInputReceiver(
      features=features,
      receiver_tensors=receiver_tensors,
      labels=features['label'])

import os
import shutil
from pathlib import Path

def up_one_dir(path):
  """Copy all files in path up one directory level."""
  parent_dir = str(Path(path).parents[0])
  for f in os.listdir(path):
    shutil.copy(os.path.join(path, f), parent_dir)
  #shutil.rmtree(path)

up_one_dir(KERAS_FOLDER)

tfma.export.export_eval_savedmodel(
    estimator=estimator_model,
    export_dir_base=EXPORT_DIR,
    eval_input_receiver_fn=eval_input_receiver_1_fn)
```

The following error is raised regarding the pre-trained model's features:
KeyErrorTraceback (most recent call last)
<ipython-input-137-b275096a314a> in <module>()
      1 tfma.export.export_eval_savedmodel(estimator=estimator_model,
      2                                    export_dir_base=EXPORT_DIR,
----> 3                                    eval_input_receiver_fn=eval_input_receiver_1_fn)

/usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow_model_analysis/util.pyc in wrapped_fn(*args, **kwargs)
    171                       (fn.__name__, kwargs.keys()))
    172 
--> 173     return fn(**kwargs_to_pass)
    174 
    175   return wrapped_fn

/usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow_model_analysis/eval_saved_model/export.pyc in export_eval_savedmodel(estimator, export_dir_base, eval_input_receiver_fn, serving_input_receiver_fn, assets_extra, checkpoint_path)
    472       },
    473       assets_extra=assets_extra,
--> 474       checkpoint_path=checkpoint_path)
    475 
    476 

/usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow/python/util/deprecation.pyc in new_func(*args, **kwargs)
    322               'in a future version' if date is None else ('after %s' % date),
    323               instructions)
--> 324       return func(*args, **kwargs)
    325     return tf_decorator.make_decorator(
    326         func, new_func, 'deprecated',

/usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow_estimator/contrib/estimator/python/estimator/export.pyc in export_all_saved_models(estimator, export_dir_base, input_receiver_fn_map, assets_extra, as_text, checkpoint_path)
    206       assets_extra=assets_extra,
    207       as_text=as_text,
--> 208       checkpoint_path=checkpoint_path)

/usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow_estimator/python/estimator/estimator.pyc in experimental_export_all_saved_models(self, export_dir_base, input_receiver_fn_map, assets_extra, as_text, checkpoint_path)
    820         self._add_meta_graph_for_mode(
    821             builder, input_receiver_fn_map, checkpoint_path,
--> 822             save_variables, mode=model_fn_lib.ModeKeys.EVAL)
    823         save_variables = False
    824       if input_receiver_fn_map.get(model_fn_lib.ModeKeys.PREDICT):

/usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow_estimator/python/estimator/estimator.pyc in _add_meta_graph_for_mode(self, builder, input_receiver_fn_map, checkpoint_path, save_variables, mode, export_tags, check_variables)
    895           labels=getattr(input_receiver, 'labels', None),
    896           mode=mode,
--> 897           config=self.config)
    898 
    899       export_outputs = model_fn_lib.export_outputs_for_mode(

/usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow_estimator/python/estimator/estimator.pyc in _call_model_fn(self, features, labels, mode, config)
   1110 
   1111     logging.info('Calling model_fn.')
-> 1112     model_fn_results = self._model_fn(features=features, **kwargs)
   1113     logging.info('Done calling model_fn.')
   1114 

/usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow_estimator/python/estimator/keras.pyc in model_fn(features, labels, mode)
    276 
    277     model = _clone_and_build_model(mode, keras_model, custom_objects, features,
--> 278                                    labels)
    279     model_output_names = []
    280     # We need to make sure that the output names of the last layer in the model

/usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow_estimator/python/estimator/keras.pyc in _clone_and_build_model(mode, keras_model, custom_objects, features, labels)
    184   K.set_learning_phase(mode == model_fn_lib.ModeKeys.TRAIN)
    185   input_tensors, target_tensors = _convert_estimator_io_to_keras(
--> 186       keras_model, features, labels)
    187 
    188   compile_clone = (mode != model_fn_lib.ModeKeys.PREDICT)

/usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow_estimator/python/estimator/keras.pyc in _convert_estimator_io_to_keras(keras_model, features, labels)
    157 
    158   input_tensors = _to_ordered_tensor_list(
--> 159       features, input_names, 'features', 'inputs')
    160   target_tensors = _to_ordered_tensor_list(
    161       labels, output_names, 'labels', 'outputs')

/usr/local/envs/py2env/lib/python2.7/site-packages/tensorflow_estimator/python/estimator/keras.pyc in _to_ordered_tensor_list(obj, key_order, obj_name, order_name)
    139                 order_name=order_name, order_keys=set(key_order),
    140                 obj_name=obj_name, obj_keys=set(obj.keys()),
--> 141                 different_keys=different_keys))
    142 
    143       return [_convert_tensor(obj[key]) for key in key_order]

KeyError: "The dictionary passed into features does not have the expected inputs keys defined in the keras model.\n\tExpected keys: set([u'vgg16_input'])\n\tfeatures keys: set(['image', 'label'])\n\tDifference: set(['image', 'label', u'vgg16_input'])"

Can I make that work?

@mdreves
Member

mdreves commented May 23, 2019

I should have been clearer. Support for TF 2.0 estimators (created via keras model_to_estimator or otherwise) is not yet available.

That said, the error you are getting is because Keras does not allow you to pass more features than the model defines. You need to either add vgg16_input to the model or remove the extra keys from the features.
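To illustrate the mismatch, here is a minimal, TF-free sketch of the kind of key check that the Keras estimator performs at export time, using the names from the traceback above (the `check_feature_keys` helper is hypothetical, not TFMA or Keras API). Renaming the `image` feature to the model's input name `vgg16_input` is one way to make the keys line up:

```python
def check_feature_keys(features, expected_keys):
    """Mimic the Keras estimator check: every input key the model
    declares must be present in the features dict passed at export."""
    missing = set(expected_keys) - set(features)
    if missing:
        raise KeyError(
            "The dictionary passed into features does not have the "
            "expected inputs keys defined in the keras model: %s"
            % sorted(missing))

# Features as produced by the eval input receiver above.
features = {'image': 'image_tensor', 'label': 'label_tensor'}

# Renaming 'image' to the model's input name avoids the KeyError.
features['vgg16_input'] = features.pop('image')
check_feature_keys(features, expected_keys=['vgg16_input'])  # no error
```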

@OrielResearchCure
Author

Thank you so much for your response. I will try to add it to my features list. I want to make sure that I can have visualization support for transfer models, even if it is not perfect at the moment. I will keep you posted if I have any additional issues. Thanks again, eilalan

@OrielResearchCure
Author

It doesn't look like I will be able to load the TF 2.0 model with TF 1.13.1, which supports TFMA - an improper-config error is raised. I will wait for your updated version. Meanwhile, please let me know if you are familiar with other tools for analyzing models saved as h5. Thanks, eilalan

@PCiunkiewicz

I'm having a similar issue when attempting to use TFMA to slice and display statistics on raw features before they have gone through the TFTransform component. Are there any updates on this issue?

@mdreves
Member

mdreves commented Dec 10, 2019

As of now, keras model_to_estimator requires that only the features used by the model be passed, so you would not be able to slice on additional features without additional work to run another feature extractor in the pipeline.

+cc @tanzhenyu
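The constraint above can be sketched in plain Python: a separate extraction step would keep the full parsed features for slicing while handing the model only the keys it declares as inputs. All names here are hypothetical for illustration, not TFMA API:

```python
def split_features(features, model_input_keys):
    """Split a parsed features dict into the subset the Keras model
    accepts and the leftover features kept only for slicing."""
    model_features = {k: v for k, v in features.items()
                      if k in model_input_keys}
    slice_features = {k: v for k, v in features.items()
                      if k not in model_input_keys}
    return model_features, slice_features

features = {'vgg16_input': 'image_tensor',
            'label': 'labels',
            'patient_age': 'age_tensor'}
model_features, slice_features = split_features(features, {'vgg16_input'})
# model_features -> {'vgg16_input': 'image_tensor'}
# slice_features -> {'label': 'labels', 'patient_age': 'age_tensor'}
```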
