
This blog is the continuation of Transfer Learning Part 1

In the previous blogs, we observed that transfer learning gets far better results on the Food Vision experiment than our own models, even with less data.

In this blog, we'll discuss fine-tuning transfer learning, where the layers of a pre-trained model are unfrozen and tweaked to better suit our own data. In feature extraction transfer learning, you may only train the top 1-3 layers of a pre-trained model with your own data; in fine-tuning transfer learning, you might train 1-3+ layers of a pre-trained model (where the '+' indicates that many or all of the layers could be trained).

First, let's create a few functions to help with the repetitive tasks below, since rewriting the same code quickly becomes tedious.

import tensorflow as tf
import datetime
import matplotlib.pyplot as plt
import zipfile
import os

def create_tensorboard_callback(dir_name, experiment_name):
  """
  Creates a TensorBoard callback instance to store log files.

  Stores log files with the filepath:
    "dir_name/experiment_name/current_datetime/"

  Args:
    dir_name: target directory to store TensorBoard log files
    experiment_name: name of experiment directory (e.g. efficientnet_model_1)
  """
  log_dir = dir_name + "/" + experiment_name + "/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
  tensorboard_callback = tf.keras.callbacks.TensorBoard(
      log_dir=log_dir
  )
  print(f"Saving TensorBoard log files to: {log_dir}")
  return tensorboard_callback
  
  
# Plot the validation and training data separately
def plot_loss_curves(history):
  """
  Returns separate loss curves for training and validation metrics.

  Args:
    history: TensorFlow model History object (see: https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/History)
  """ 
  loss = history.history['loss']
  val_loss = history.history['val_loss']

  accuracy = history.history['accuracy']
  val_accuracy = history.history['val_accuracy']

  epochs = range(len(history.history['loss']))

  # Plot loss
  plt.plot(epochs, loss, label='training_loss')
  plt.plot(epochs, val_loss, label='val_loss')
  plt.title('Loss')
  plt.xlabel('Epochs')
  plt.legend()

  # Plot accuracy
  plt.figure()
  plt.plot(epochs, accuracy, label='training_accuracy')
  plt.plot(epochs, val_accuracy, label='val_accuracy')
  plt.title('Accuracy')
  plt.xlabel('Epochs')
  plt.legend();
  
# Create function to unzip a zipfile into current working directory 
# (since we're going to be downloading and unzipping a few files)

def unzip_data(filename):
  """
  Unzips filename into the current working directory.

  Args:
    filename (str): a filepath to a target zip folder to be unzipped.
  """
  zip_ref = zipfile.ZipFile(filename, "r")
  zip_ref.extractall()
  zip_ref.close()
 
# Walk through an image classification directory and find out how many files (images)
# are in each subdirectory.

def walk_through_dir(dir_path):
  """
  Walks through dir_path returning its contents.

  Args:
    dir_path (str): target directory
  
  Returns:
    A print out of:
      number of subdirectories in dir_path
      number of images (files) in each subdirectory
      name of each subdirectory
  """
  for dirpath, dirnames, filenames in os.walk(dir_path):
    print(f"There are {len(dirnames)} directories and {len(filenames)} images in '{dirpath}'.") 

Working with less data (Food Classes)


In my previous blog, we saw that we could get great results with only 10% of the training data using transfer learning. In this blog, we're going to continue working with smaller subsets of the data, except this time we'll look at how we can use the in-built pre-trained models within the tf.keras.applications module, as well as how to fine-tune them to our own custom dataset. Finally, we'll also practice using the Keras Functional API for building deep learning models. The Functional API is a more flexible way to create models than the tf.keras.Sequential API.
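As a tiny illustration of the difference between the two APIs (a toy sketch, not the model we'll build later):

# Sequential API: a linear stack of layers, defined all at once
seq_model = tf.keras.Sequential([
  tf.keras.layers.Flatten(input_shape=(224, 224, 3)),
  tf.keras.layers.Dense(10, activation="softmax")
])

# Functional API: layers are called on tensors like functions,
# which allows multiple inputs/outputs and branching topologies
inputs = tf.keras.layers.Input(shape=(224, 224, 3))
x = tf.keras.layers.Flatten()(inputs)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
func_model = tf.keras.Model(inputs, outputs)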


Let's start by downloading some data.

# Get 10% of the data of the 10 classes
!wget https://storage.googleapis.com/ztm_tf_course/food_vision/10_food_classes_10_percent.zip 

unzip_data("10_food_classes_10_percent.zip")

The dataset we're downloading is the 10 food classes dataset (from Food 101) with 10% of the training images we used in the previous blog.

# Walk through 10 percent data directory and list number of files
walk_through_dir("10_food_classes_10_percent")

We can see that each of the training directories contains 75 images and each of the testing directories contains 250 images.


Let's define our training and test file paths.

# Create training and test directories
train_dir = "10_food_classes_10_percent/train/"
test_dir = "10_food_classes_10_percent/test/"

Now we've got some image data, we need a way of loading it into a TensorFlow-compatible format. Previously, we've used the ImageDataGenerator class. And while this works well and is still very commonly used, this time we're going to use the image_dataset_from_directory function.

One of the main benefits of using tf.keras.preprocessing.image_dataset_from_directory() rather than ImageDataGenerator is that it creates a tf.data.Dataset object rather than a generator. The main advantage of this is that the tf.data.Dataset API is much more efficient (faster) than the ImageDataGenerator API, which is paramount for larger datasets.


Let's see how it looks

# Create data inputs
import tensorflow as tf

IMG_SIZE = (224, 224) # define image size
train_data_10_percent = tf.keras.preprocessing.image_dataset_from_directory(directory=train_dir,
                                                                            image_size=IMG_SIZE,
                                                                            label_mode="categorical", # what type are the labels?
                                                                            batch_size=32) # batch_size is 32 by default, this is generally a good number
test_data_10_percent = tf.keras.preprocessing.image_dataset_from_directory(directory=test_dir,
                                                                           image_size=IMG_SIZE,
                                                                           label_mode="categorical")


The main parameters we're concerned about in the image_dataset_from_directory() function are:

  • directory - the filepath of the target directory we're loading images in from.

  • image_size - the target size of the images we're going to load in (height, width).

  • batch_size - the batch size of the images we're going to load in. For example, if the batch_size is 32 (the default), batches of 32 images and labels at a time will be passed to the model.

There are more parameters we could play around with if we needed to, all listed in the tf.keras.preprocessing documentation.

Let's check the training dataset type, shape etc.

# Check the training data datatype
train_data_10_percent

In the above output:

  • (None, 224, 224, 3) refers to the tensor shape of our images where None is the batch size, 224 is the height (and width) and 3 is the color channels (red, green, blue).

  • (None, 10) refers to the tensor shape of the labels where None is the batch size and 10 is the number of possible labels (the 10 different food classes).

  • Both image tensors and labels are of the datatype tf.float32.

The batch size is None because it's only filled in during model training. You can think of it as a placeholder, waiting to be filled with the batch_size parameter from image_dataset_from_directory().
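If you'd like to inspect these shapes yourself, a tf.data.Dataset exposes them through its element_spec attribute:

# Inspect the (image, label) TensorSpecs of our dataset
train_data_10_percent.element_spec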

Another benefit of using the tf.data.Dataset API is the associated methods that come with it.


For example, if we want to find the name of the classes we were working with, we could use the class_names attribute.

# Check out the class names of our dataset
train_data_10_percent.class_names








If we wanted to see an example batch of data, we could use the take() method.

# See an example batch of data
for images, labels in train_data_10_percent.take(1):
  print(images, labels)

















Notice how the image arrays come out as tensors of pixel values, whereas the labels come out as one-hot encodings (e.g. [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.] for hamburger).
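To turn a one-hot label back into a human-readable class name, we can take the index of its maximum value. A small sketch, reusing the images and labels variables from the take(1) loop above:

# Decode the first one-hot label in the batch into its class name
class_names = train_data_10_percent.class_names
first_label_index = tf.argmax(labels[0]).numpy() # position of the 1 in the one-hot vector
print(f"One-hot label: {labels[0].numpy()} -> class name: {class_names[first_label_index]}")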


Building a transfer learning model using the Keras Functional API


To do so, we're going to use the tf.keras.applications module, since it contains a series of computer vision models already trained on ImageNet, as well as the Keras Functional API to construct our model.

We're going to go through the following steps:

  1. Instantiate a pre-trained base model object by choosing a target model such as EfficientNetB0 from tf.keras.applications, setting the include_top parameter to False (we do this because we're going to create our own top, which are the output layers for the model).

  2. Set the base model's trainable attribute to False to freeze all of the weights in the pre-trained model.

  3. Define an input layer for our model, for example, what shape of data should our model expect?

  4. [Optional] Normalize the inputs to our model if required. Some computer vision models such as ResNetV250 require their inputs to be between 0 & 1.

  5. Pass the inputs to the base model.

  6. Pool the outputs of the base model into a shape compatible with the output activation layer (turn base model output tensors into same shape as label tensors). This can be done using tf.keras.layers.GlobalAveragePooling2D() or tf.keras.layers.GlobalMaxPooling2D() though the former is more common in practice.

  7. Create an output activation layer using tf.keras.layers.Dense() with the appropriate activation function and number of neurons.

  8. Combine the inputs and outputs layer into a model using tf.keras.Model().

  9. Compile the model using the appropriate loss function and choice of optimizer.

  10. Fit the model for the desired number of epochs and with the necessary callbacks (in our case, we'll start off with the TensorBoard callback).

Let's create the base model

# 1. Create base model with tf.keras.applications
base_model = tf.keras.applications.EfficientNetB0(include_top=False)

# 2. Freeze the base model (so the pre-learned patterns remain)
base_model.trainable = False

# 3. Create inputs into the base model
inputs = tf.keras.layers.Input(shape=(224, 224, 3), name="input_layer")

# 4. If using ResNet50V2, add this to speed up convergence, remove for EfficientNet
# x = tf.keras.layers.experimental.preprocessing.Rescaling(1./255)(inputs)

# 5. Pass the inputs to the base_model (note: using tf.keras.applications, EfficientNet inputs don't have to be normalized)
x = base_model(inputs)
# Check data shape after passing it to base_model
print(f"Shape after base_model: {x.shape}")

# 6. Average pool the outputs of the base model (aggregate all the most important information, reduce number of computations)
x = tf.keras.layers.GlobalAveragePooling2D(name="global_average_pooling_layer")(x)
print(f"After GlobalAveragePooling2D(): {x.shape}")

# 7. Create the output activation layer
outputs = tf.keras.layers.Dense(10, activation="softmax", name="output_layer")(x)

# 8. Combine the inputs with the outputs into a model
model_0 = tf.keras.Model(inputs, outputs)

# 9. Compile the model
model_0.compile(loss='categorical_crossentropy',
              optimizer=tf.keras.optimizers.Adam(),
              metrics=["accuracy"])

# 10. Fit the model (we use less steps for validation so it's faster)
history_10_percent = model_0.fit(train_data_10_percent,
                                 epochs=5,
                                 steps_per_epoch=len(train_data_10_percent),
                                 validation_data=test_data_10_percent,
                                 # Go through less of the validation data so epochs are faster (we want faster experiments!)
                                 validation_steps=int(0.25 * len(test_data_10_percent)), 
                                 # Track our model's training logs for visualization later
                                 callbacks=[create_tensorboard_callback("transfer_learning", "10_percent_feature_extract")])

It seems our model performs incredibly well on both the training set (87%+ accuracy) and the test set (~83% accuracy). Now let's inspect the model, starting with the base model.

# Check layers in our base model
for layer_number, layer in enumerate(base_model.layers):
  print(layer_number, layer.name)

































That's a lot of layers... to hand code all of those would've taken a fairly long time to do, yet we can still take advantage of them thanks to the power of transfer learning.


Let's get a summary of the base model.

base_model.summary()


You can see how each of the different layers has a certain number of parameters. Since we are using a pre-trained model, you can think of all of these parameters as patterns the base model has learned on another dataset. And because we set base_model.trainable = False, these patterns remain as they are during training (they're frozen and don't get updated).
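A quick way to verify the freeze worked is to count the trainable variables each part of the model reports:

# The frozen base model should report zero trainable variables,
# while model_0 should only have the output layer's kernel and bias
print(len(base_model.trainable_variables)) # -> 0
print(len(model_0.trainable_variables)) # -> 2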

Let's see the summary of our overall model.

# Check summary of model constructed with Functional API
model_0.summary()














Our overall model has five layers but really, one of those layers (efficientnetb0) has 236 layers. You can see how the output shape started out as (None, 224, 224, 3) for the input layer (the shape of our images) but was transformed to be (None, 10) by the output layer (the shape of our labels), where None is the placeholder for the batch size.


Getting a feature vector from a trained model


The tf.keras.layers.GlobalAveragePooling2D() layer transforms a 4D tensor into a 2D tensor by averaging the values across the inner axes. Let's see an example.

# Define input tensor shape (same number of dimensions as the output of efficientnetb0)
input_shape = (1, 4, 4, 3)

# Create a random tensor
tf.random.set_seed(42)
input_tensor = tf.random.normal(input_shape)
print(f"Random input tensor:\n {input_tensor}\n")

# Pass the random tensor through a global average pooling 2D layer
global_average_pooled_tensor = tf.keras.layers.GlobalAveragePooling2D()(input_tensor)
print(f"2D global average pooled random tensor:\n {global_average_pooled_tensor}\n")

# Check the shapes of the different tensors
print(f"Shape of input tensor: {input_tensor.shape}")
print(f"Shape of 2D global averaged pooled input tensor: {global_average_pooled_tensor.shape}")


















You can see the tf.keras.layers.GlobalAveragePooling2D() layer condensed the input tensor from shape (1, 4, 4, 3) to (1, 3). It did so by averaging the input_tensor across the middle two axes.
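Since global average pooling is just a mean over the spatial axes, we can replicate the layer's output with tf.reduce_mean (using the same input_tensor from above):

# This should produce the same result as the GlobalAveragePooling2D() layer
tf.reduce_mean(input_tensor, axis=[1, 2]) # average across the middle (spatial) axes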


Running a series of transfer learning experiments


We've seen the incredible results of transfer learning on 10% of the training data, what about 1% of the training data?

Why don't we answer that question while running the following modeling experiments:

  1. Use feature extraction transfer learning on 1% of the training data with data augmentation.

  2. Use feature extraction transfer learning on 10% of the training data with data augmentation.

  3. Use fine-tuning transfer learning on 10% of the training data with data augmentation.

  4. Use fine-tuning transfer learning on 100% of the training data with data augmentation.

While all of the experiments will be run on different versions of the training data, they will all be evaluated on the same test dataset; this ensures the results of each experiment are as comparable as possible.

All experiments will be done using the EfficientNetB0 model within the tf.keras.applications module.


Let's begin by downloading the data for experiment 1, using feature extraction transfer learning on 1% of the training data with data augmentation.

# Download and unzip data
!wget https://storage.googleapis.com/ztm_tf_course/food_vision/10_food_classes_1_percent.zip
unzip_data("10_food_classes_1_percent.zip")

# Create training and test dirs
train_dir_1_percent = "10_food_classes_1_percent/train/"
test_dir = "10_food_classes_1_percent/test/"

How many images are we working with?

# Walk through 1 percent data directory and list number of files
walk_through_dir("10_food_classes_1_percent")














Time to load our images in as tf.data.Dataset objects. To do so, we'll use the image_dataset_from_directory() method.

import tensorflow as tf

IMG_SIZE = (224, 224)
train_data_1_percent = tf.keras.preprocessing.image_dataset_from_directory(train_dir_1_percent,
                                                                           label_mode="categorical",
                                                                           batch_size=32, # default
                                                                           image_size=IMG_SIZE)
test_data = tf.keras.preprocessing.image_dataset_from_directory(test_dir,
                                                                label_mode="categorical",
                                                                image_size=IMG_SIZE)


Adding data augmentation right into the model

Data loaded, time to augment it. We'll use the tf.keras.layers.experimental.preprocessing module to create a dedicated data augmentation layer.


The data augmentation transformations we're going to use are:

  • RandomFlip - flips image on the horizontal or vertical axis.

  • RandomRotation - randomly rotates the image by a specified amount.

  • RandomZoom - randomly zooms into an image by a specified amount.

  • RandomHeight - randomly shifts image height by a specified amount.

  • RandomWidth - randomly shifts image width by a specified amount.

  • Rescaling - normalizes the image pixel values to be between 0 and 1, this is worth mentioning because it is required for some image models but since we're using the tf.keras.applications implementation of EfficientNetB0, it's not required.

Feature extraction transfer learning on 1% of the data with data augmentation
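First, we need to build the data_augmentation layer used in the model code below. It's a Sequential model made up of the preprocessing layers listed above (the same stack we'll reuse later in this blog); the Rescaling line stays commented out because EfficientNet handles raw pixel values:

# Build a data augmentation stage as a Sequential model of preprocessing layers
from tensorflow.keras.layers.experimental import preprocessing

data_augmentation = tf.keras.Sequential([
  preprocessing.RandomFlip("horizontal"), # randomly flip images horizontally
  preprocessing.RandomRotation(0.2), # randomly rotate images
  preprocessing.RandomZoom(0.2), # randomly zoom into images
  preprocessing.RandomHeight(0.2), # randomly shift image height
  preprocessing.RandomWidth(0.2), # randomly shift image width
  # preprocessing.Rescaling(1./255) # keep for ResNet50V2, remove for EfficientNet
], name="data_augmentation")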

from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing

# Setup input shape and base model, freezing the base model layers
input_shape = (224, 224, 3)
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
base_model.trainable = False

# Create input layer
inputs = layers.Input(shape=input_shape, name="input_layer")

# Add in data augmentation Sequential model as a layer
x = data_augmentation(inputs)

# Give base_model inputs (after augmentation) and don't train it
x = base_model(x, training=False)

# Pool output features of base model
x = layers.GlobalAveragePooling2D(name="global_average_pooling_layer")(x)

# Put a dense layer on as the output
outputs = layers.Dense(10, activation="softmax", name="output_layer")(x)

# Make a model with inputs and outputs
model_1 = tf.keras.Model(inputs, outputs)

# Compile the model
model_1.compile(loss="categorical_crossentropy",
              optimizer=tf.keras.optimizers.Adam(),
              metrics=["accuracy"])

# Fit the model
history_1_percent = model_1.fit(train_data_1_percent,
                    epochs=5,
                    steps_per_epoch=len(train_data_1_percent),
                    validation_data=test_data,
                    validation_steps=int(0.25* len(test_data)), # validate for less steps
                    # Track model training logs
                    callbacks=[create_tensorboard_callback("transfer_learning", "1_percent_data_aug")])





Using only 7 training images per class, transfer learning got our model to ~40% accuracy on the validation set. This result is pretty amazing since the original Food-101 paper achieved 50.76% accuracy with all the data, namely, 750 training images per class.

The important thing to remember is data augmentation only runs during training. So if we were to evaluate or use our model for inference (predicting the class of an image) the data augmentation layers will be automatically turned off.
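To see the augmentation in action, we can pass a single image through the layer manually. Note the training=True argument: since augmentation layers are inactive in inference mode, we have to switch them on explicitly (a quick sketch using the 1% training data):

# Plot an original image next to its augmented version
for images, labels in train_data_1_percent.take(1):
  augmented = data_augmentation(tf.expand_dims(images[0], axis=0), training=True)
  plt.figure(figsize=(8, 4))
  plt.subplot(1, 2, 1)
  plt.imshow(images[0] / 255.) # image_dataset_from_directory loads pixels in [0, 255]
  plt.title("Original")
  plt.axis(False)
  plt.subplot(1, 2, 2)
  plt.imshow(augmented[0] / 255.)
  plt.title("Augmented")
  plt.axis(False)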

# Evaluate on the test data
results_1_percent_data_aug = model_1.evaluate(test_data)
results_1_percent_data_aug

The results here may be slightly better/worse than the log outputs of our model during training because during training we only evaluate our model on 25% of the test data using the line validation_steps=int(0.25 * len(test_data)). Doing this speeds up our epochs but still gives us enough of an idea of how our model is going.

With experiment 1 done, let's move on to experiment 2: feature extraction transfer learning on 10% of the training data with data augmentation. This time we'll build the model (model_2) with the augmentation layer inside it, and we'll also checkpoint its weights during training so we can come back to them later when fine-tuning.


Feature extraction transfer learning on 10% of the data with data augmentation



Let's create the model, with data augmentation built in.

# Create a functional model with data augmentation
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing
from tensorflow.keras.models import Sequential

# Build data augmentation layer
data_augmentation = Sequential([
  preprocessing.RandomFlip('horizontal'),
  preprocessing.RandomHeight(0.2),
  preprocessing.RandomWidth(0.2),
  preprocessing.RandomZoom(0.2),
  preprocessing.RandomRotation(0.2),
  # preprocessing.Rescaling(1./255) # keep for ResNet50V2, remove for EfficientNet                 
], name="data_augmentation")

# Setup the input shape to our model
input_shape = (224, 224, 3)

# Create a frozen base model
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
base_model.trainable = False

# Create input and output layers
inputs = layers.Input(shape=input_shape, name="input_layer") # create input layer
x = data_augmentation(inputs) # augment our training images
x = base_model(x, training=False) # pass augmented images to base model but keep it in inference mode, so batchnorm layers don't get updated: https://keras.io/guides/transfer_learning/#build-a-model 
x = layers.GlobalAveragePooling2D(name="global_average_pooling_layer")(x)
outputs = layers.Dense(10, activation="softmax", name="output_layer")(x)
model_2 = tf.keras.Model(inputs, outputs)

# Compile
model_2.compile(loss="categorical_crossentropy",
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), # use Adam optimizer with base learning rate
              metrics=["accuracy"])

Creating a ModelCheckpoint callback


Our model is compiled and ready to be fit, so why haven't we fit it yet?

Well, for this experiment we're going to introduce a new callback, the ModelCheckpoint callback. The ModelCheckpoint callback gives you the ability to save your model, either as a whole in the SavedModel format or as weights (patterns) only, to a specified directory as it trains. This is helpful if you think your model is going to be training for a long time and you want to make backups of it as it trains. It also means if you think your model could benefit from being trained for longer, you can reload it from a specific checkpoint and continue training from there.

For example, say you fit a feature extraction transfer learning model for 5 epochs and you check the training curves and see it was still improving and you want to see if fine-tuning for another 5 epochs could help, you can load the checkpoint, unfreeze some (or all) of the base model layers and then continue training.


But first, let's create a ModelCheckpoint callback. To do so, we have to specify a directory we'd like to save to.

# Setup checkpoint path
checkpoint_path = "ten_percent_model_checkpoints_weights/checkpoint.ckpt" # note: remember saving directly to Colab is temporary

# Create a ModelCheckpoint callback that saves the model's weights only
checkpoint_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                                         save_weights_only=True, # set to False to save the entire model
                                                         save_best_only=False, # set to True to save only the best model instead of a model every epoch 
                                                         save_freq="epoch", # save every epoch
                                                         verbose=1)

The SavedModel format saves a model's architecture, weights, and training configuration all in one folder. It makes it very easy to reload your model exactly how it is elsewhere. However, if you do not want to share all of these details with others, you may want to save and share the weights only (these will just be large tensors of non-human interpretable numbers). If disk space is an issue, saving the weights only is faster and takes up less space than saving the whole model.
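For comparison, saving and reloading a whole model in the SavedModel format is only a couple of lines (the directory name below is just an example):

# Save the full model (architecture + weights + optimizer state) to disk
model_2.save("model_2_saved_model")

# Later, reload it exactly as it was
loaded_model_2 = tf.keras.models.load_model("model_2_saved_model")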

Time to fit the model. Because we're going to be fine-tuning it later, we'll create a variable initial_epochs and set it to 5 to use later.

initial_epochs = 5

history_10_percent_data_aug = model_2.fit(train_data_10_percent,
                                          epochs=initial_epochs,
                                          validation_data=test_data,
                                          validation_steps=int(0.25 * len(test_data)), # do less steps per validation (quicker)
                                          callbacks=[create_tensorboard_callback("transfer_learning", "10_percent_data_aug"), 
                                                     checkpoint_callback])

Let's evaluate the model

# Evaluate model (this is the fine-tuned 10 percent of data version)
model_2.evaluate(test_data)

Alright, the previous steps might seem quite confusing, but all we've done (and are about to do) is:

  1. Trained a feature extraction transfer learning model for 5 epochs on 10% of the data (with all base model layers frozen) and saved the model's weights using ModelCheckpoint.

  2. Fine-tuned the same model on the same 10% of the data for a further 5 epochs with the top 10 layers of the base model unfrozen.

  3. Saved the results and training logs each time.

  4. Reloaded the model from 1 to do the same steps as 2 but with all of the data.

Next we're going to fine-tune the last 10 layers of the base model on the same 10% of the data for another 5 epochs, but first let's remind ourselves which layers are trainable.

# Check which layers are tuneable in the whole model
for layer_number, layer in enumerate(model_2.layers):
  print(layer_number, layer.name, layer.trainable)





To begin fine-tuning, we'll unfreeze the entire base model by setting its trainable attribute to True. Then we'll refreeze every layer in the base model except for the last 10 by looping through them and setting their trainable attribute to False. Finally, we'll recompile the model.

base_model.trainable = True

# Freeze all layers except for the last 10
for layer in base_model.layers[:-10]:
  layer.trainable = False

# Recompile the model (always recompile after any adjustments to a model)
model_2.compile(loss="categorical_crossentropy",
              optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001), # learning rate is 10x lower than before for fine-tuning
              metrics=["accuracy"])

Can we get a little more specific?
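We can. Looping through the base model's layers and printing each one's trainable attribute shows exactly which layers were unfrozen:

# Check which layers of the base model are now trainable
for layer_number, layer in enumerate(base_model.layers):
  print(layer_number, layer.name, layer.trainable)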




























It seems all layers except for the last 10 are frozen and untrainable. This means only the last 10 layers of the base model along with the output layer will have their weights updated during training. In our case, we're using the exact same loss, optimizer, and metrics as before, except this time the learning rate for our optimizer will be 10x smaller than before (0.0001 instead of Adam's default of 0.001). We do this so the model doesn't try to overwrite the existing weights in the pre-trained model too fast. In other words, we want learning to be more gradual.


Time to fine-tune!

We're going to continue training from where our previous model finished. Since it trained for 5 epochs, our fine-tuning will begin on epoch 5 and continue for another 5 epochs.

To do this, we can use the initial_epoch parameter of the fit() method. We'll pass it the last epoch of the previous model's training history (history_10_percent_data_aug.epoch[-1]).

# Fine tune for another 5 epochs

fine_tune_epochs = initial_epochs + 5

# Refit the model (same as model_2 except with more trainable layers)
history_fine_10_percent_data_aug = model_2.fit(train_data_10_percent,
                                               epochs=fine_tune_epochs,
                                               validation_data=test_data,
                                               initial_epoch=history_10_percent_data_aug.epoch[-1], # start from previous last epoch
                                               validation_steps=int(0.25 * len(test_data)),
                                               callbacks=[create_tensorboard_callback("transfer_learning", "10_percent_fine_tune_last_10")]) # name experiment appropriately

Let's evaluate it.

# Evaluate the model on the test data
results_fine_tune_10_percent = model_2.evaluate(test_data)

Remember, the results from evaluating the model might be slightly different to the outputs from training since during training we only evaluate on 25% of the test data.

Alright, we need a way to evaluate our model's performance before and after fine-tuning. How about we write a function to compare the before and after?

def compare_historys(original_history, new_history, initial_epochs=5):
    """
    Compares two TensorFlow model History objects by plotting their
    metrics as one continuous timeline.

    Args:
      original_history: History object from the initial model training
      new_history: History object from the continued (fine-tuning) training
      initial_epochs: number of epochs in original_history (used to mark
        where fine-tuning began)
    """
    # Get original history measurements
    acc = original_history.history["accuracy"]
    loss = original_history.history["loss"]

    val_acc = original_history.history["val_accuracy"]
    val_loss = original_history.history["val_loss"]

    # Combine original history with new history
    total_acc = acc + new_history.history["accuracy"]
    total_loss = loss + new_history.history["loss"]

    total_val_acc = val_acc + new_history.history["val_accuracy"]
    total_val_loss = val_loss + new_history.history["val_loss"]

    # Make plots
    plt.figure(figsize=(8, 8))
    plt.subplot(2, 1, 1)
    plt.plot(total_acc, label='Training Accuracy')
    plt.plot(total_val_acc, label='Validation Accuracy')
    plt.plot([initial_epochs-1, initial_epochs-1],
              plt.ylim(), label='Start Fine Tuning') # reshift plot around epochs
    plt.legend(loc='lower right')
    plt.title('Training and Validation Accuracy')

    plt.subplot(2, 1, 2)
    plt.plot(total_loss, label='Training Loss')
    plt.plot(total_val_loss, label='Validation Loss')
    plt.plot([initial_epochs-1, initial_epochs-1],
              plt.ylim(), label='Start Fine Tuning') # reshift plot around epochs
    plt.legend(loc='upper right')
    plt.title('Training and Validation Loss')
    plt.xlabel('epoch')
    plt.show()

This is where saving the history variables of our model training comes in handy. Let's see what happened after fine-tuning the last 10 layers of our model.

compare_historys(original_history=history_10_percent_data_aug, 
                 new_history=history_fine_10_percent_data_aug, 
                 initial_epochs=5)

Seems like the curves are heading in the right direction after fine-tuning. But remember, fine-tuning usually works best with larger amounts of data.


Fine-tuning an existing model on all of the data


We'll start by downloading the full version of our 10 food classes dataset.

# Download and unzip 10 classes of data with all images
!wget https://storage.googleapis.com/ztm_tf_course/food_vision/10_food_classes_all_data.zip 
unzip_data("10_food_classes_all_data.zip")

# Setup data directories
train_dir = "10_food_classes_all_data/train/"
test_dir = "10_food_classes_all_data/test/"

And now we'll turn the images into tensor datasets.

# Setup data inputs
import tensorflow as tf
IMG_SIZE = (224, 224)
train_data_10_classes_full = tf.keras.preprocessing.image_dataset_from_directory(train_dir,
                                                                                 label_mode="categorical",
                                                                                 image_size=IMG_SIZE)

# Note: this is the same test dataset we've been using for the previous modelling experiments
test_data = tf.keras.preprocessing.image_dataset_from_directory(test_dir,
                                                                label_mode="categorical",
                                                                image_size=IMG_SIZE)



This is looking good. We've got 10x more images in each of the training classes to work with.

The test dataset is the same we've been using for our previous experiments.

Let's evaluate

# Evaluate model (this is the fine-tuned 10 percent of data version)
model_2.evaluate(test_data)

Now we'll revert the model back to the saved weights.

# Load model from checkpoint, that way we can fine-tune from the same stage the 10 percent data model was fine-tuned from
model_2.load_weights(checkpoint_path) # revert model back to saved weights

Let's compile the model


# Compile
model_2.compile(loss="categorical_crossentropy",
                optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001), # divide learning rate by 10 for fine-tuning
                metrics=["accuracy"])

Time to fine-tune on all of the data!


# Continue to train and fine-tune the model to our data
fine_tune_epochs = initial_epochs + 5

history_fine_10_classes_full = model_2.fit(train_data_10_classes_full,
                                           epochs=fine_tune_epochs,
                                           initial_epoch=history_10_percent_data_aug.epoch[-1],
                                           validation_data=test_data,
                                           validation_steps=int(0.25 * len(test_data)),
                                           callbacks=[create_tensorboard_callback("transfer_learning", "full_10_classes_fine_tune_last_10")])

It looks like fine-tuning with all of the data has given our model a boost, how do the training curves look?

# How did fine-tuning go with more data?
compare_historys(original_history=history_10_percent_data_aug,
                 new_history=history_fine_10_classes_full,
                 initial_epochs=5)






















Looks like that extra data helped! Those curves are looking great. And if we trained for longer, they might even keep improving.



Writer: Sumit Dey • Mar 11, 2022 • 10 min read
Updated: Mar 22, 2022

In our previous blogs, we built a bunch of convolutional neural networks from scratch and they all seemed to be learning; however, there was still plenty of room for improvement. To improve our model(s), we could spend a while trying different configurations, adding more layers, changing the learning rate, adjusting the number of neurons per layer, and more. Doing all of this is very time-consuming.

Luckily, there's a technique that saves time. It's called transfer learning: in other words, taking the patterns (also called weights) another model has learned from another problem and using them for our own problem.

Here are the benefits of using transfer learning:

  1. Can leverage an existing neural network architecture proven to work on problems similar to our own.

  2. Can leverage a working neural network architecture that has already learned patterns on data similar to our own. This often results in achieving great results with less custom data.

What this means is, instead of hand-crafting our own neural network architectures or building them from scratch, we can utilize models which have worked for others.


And instead of training our own models from scratch on our own datasets, we can take the patterns a model has learned from datasets such as ImageNet (millions of images of different objects) and use them as the foundation of our own. Doing this often leads to getting great results with less data.


Transfer learning with TensorFlow Hub


For many of the problems you'll want to use deep learning for, chances are, a working model already exists. And the good news is, you can access many of them on TensorFlow Hub.

TensorFlow Hub is a repository for existing model components. It makes it so you can import and use a fully trained model with as little as a URL. We could get much the same results (or better) than our best model so far with only 10% of the original data, in other words, with 10x less data.

Transfer learning often allows you to get great results with less data. Let's download a subset of the data we've been using, namely 10% of the training data from the 10_food_classes dataset, and use it to train a food image classifier.


Let's download the data and try it out with an existing model.


Download data


Let's get the data from the Food-101 dataset. We'll work with 10 percent of the data.

# Get data (10% of labels)
import zipfile

# Download data
!wget https://storage.googleapis.com/ztm_tf_course/food_vision/10_food_classes_10_percent.zip

# Unzip the downloaded file
zip_ref = zipfile.ZipFile("10_food_classes_10_percent.zip", "r")
zip_ref.extractall()
zip_ref.close()

Let's find out how many images are in the folder

# How many images in each folder?
import os

# Walk through 10 percent data directory and list number of files
for dirpath, dirnames, filenames in os.walk("10_food_classes_10_percent"):
  print(f"There are {len(dirnames)} directories and {len(filenames)} images in '{dirpath}'.")

Notice how each of the training directories now has 75 images rather than 750 images. This is key to demonstrating how well transfer learning can perform with fewer labeled images.

The test directories still have the same number of images. This means we'll be training on less data but evaluating our models on the same amount of test data.


Preparing the data


Now we've downloaded the data, let's use the ImageDataGenerator class along with the flow_from_directory method to load in our images.

# Setup data inputs
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMAGE_SHAPE = (224, 224)
BATCH_SIZE = 32

train_dir = "10_food_classes_10_percent/train/"
test_dir = "10_food_classes_10_percent/test/"

train_datagen = ImageDataGenerator(rescale=1/255.)
test_datagen = ImageDataGenerator(rescale=1/255.)

print("Training images:")
train_data_10_percent = train_datagen.flow_from_directory(train_dir,
                                               target_size=IMAGE_SHAPE,
                                               batch_size=BATCH_SIZE,
                                               class_mode="categorical")

print("Testing images:")
test_data = test_datagen.flow_from_directory(test_dir,
                                              target_size=IMAGE_SHAPE,
                                              batch_size=BATCH_SIZE,
                                              class_mode="categorical")

Loading in the data we can see we've got 750 images in the training dataset belonging to 10 classes (75 per class) and 2500 images in the test set belonging to 10 classes (250 per class).


Setting up callbacks


What are callbacks? Callbacks are extra functionality you can add to your models to be performed during or after training. Some of the most popular callbacks include:


  • Experiment tracking with TensorBoard - log the performance of multiple models and then view and compare these models in a visual way on TensorBoard (a dashboard for inspecting neural network parameters). Helpful to compare the results of different models on your data.

  • Model checkpointing - save your model as it trains so you can stop training if needed and come back to continue off where you left. Helpful if training takes a long time and can't be done in one sitting.

  • Early stopping - leave your model training for an arbitrary amount of time and have it stop training automatically when it ceases to improve. Helpful when you've got a large dataset and don't know how long training will take.
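For instance, the early stopping callback mentioned above (not used in this blog, but good to know) can be set up like this sketch:

# Stop training if val_loss doesn't improve for 3 epochs in a row
early_stopping = tf.keras.callbacks.EarlyStopping(monitor="val_loss",
                                                  patience=3,
                                                  restore_best_weights=True)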

The TensorBoard callback can be accessed using tf.keras.callbacks.TensorBoard().

Its main functionality is saving a model's training performance metrics to a specified log_dir.

By default, logs are recorded every epoch using the update_freq='epoch' parameter. This is a good default since tracking model performance too often can slow down model training.

To track our modeling experiments using TensorBoard, let's create a function that creates a TensorBoard callback for us.

# Create tensorboard callback (functionized because need to create a new one for each model)
import datetime
def create_tensorboard_callback(dir_name, experiment_name):
  log_dir = dir_name + "/" + experiment_name + "/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
  tensorboard_callback = tf.keras.callbacks.TensorBoard(
      log_dir=log_dir
  )
  print(f"Saving TensorBoard log files to: {log_dir}")
  return tensorboard_callback

Because you're likely to run multiple experiments, it's a good idea to be able to track them in some way. In our case, our function saves a model's performance logs to a directory named [dir_name]/[experiment_name]/[current_timestamp], where:

  • dir_name is the overall logs directory

  • experiment_name is the particular experiment

  • current_timestamp is the time the experiment started, based on Python's datetime.datetime.now()

Creating models using TensorFlow Hub


In the past, we've used TensorFlow to create our own model layer by layer from scratch.

Now we're going to do a similar process, except the majority of our model's layers are going to come from TensorFlow Hub.

In fact, we're going to use two models from TensorFlow Hub:

  1. ResNetV2 - a state-of-the-art computer vision model architecture from 2016.

  2. EfficientNet - a state-of-the-art computer vision architecture from 2019.

State of the art means that at some point, both of these models have achieved the lowest error rate on ImageNet (ILSVRC-2012-CLS), the gold standard of computer vision benchmarks.

How do you find models on TensorFlow Hub? Here are the steps:


  1. Go to tfhub.dev.

  2. Choose your problem domain, e.g. "Image" (we're using food images).

  3. Select your TF version, which in our case is TF2.

  4. Remove all "Problem domain" filters except for the problem you're working on.

  5. The models listed are all models which could potentially be used for your problem.

To find our models, let's narrow down our search using the Architecture tab.

  1. Select the Architecture tab on TensorFlow Hub and you'll see a dropdown menu of architecture names appear.

    • The rule of thumb here is generally, names with larger numbers mean better performing models. For example, EfficientNetB4 performs better than EfficientNetB0.

      • However, the tradeoff with larger numbers can mean they take longer to compute.

This is where the different types of transfer learning come into play: "as is", feature extraction, and fine-tuning.


1. "As is" transfer learning is when you take a pre-trained model as it is and apply it to your task without any changes. For example, many computer vision models are pre-trained on the ImageNet dataset which contains 1000 different classes of images. This means passing a single image to this model will produce 1000 different prediction probability values (1 for each class).

  • This is helpful if you have 1000 classes of images you'd like to classify and they're all the same as the ImageNet classes. However, it's not helpful if you want to classify only a small subset of classes (such as 10 different kinds of food). Models with "/classification" in their name on TensorFlow Hub provide this kind of functionality.

2. Feature extraction transfer learning is when you take the underlying patterns (also called weights) a pre-trained model has learned and adjust its outputs to be more suited to your problem.

  • For example, say the pre-trained model you were using had 236 different layers (EfficientNetB0 has 236 layers), but the top layer outputs 1000 classes because it was pre-trained on ImageNet. To adjust this to your own problem, you might remove the original activation layer and replace it with your own but with the right number of output classes. The important part here is that only the top few layers become trainable, the rest remain frozen.

    • This way all the underlying patterns remain in the rest of the layers and you can utilize them for your own problem. This kind of transfer learning is very helpful when your data is similar to the data a model has been pre-trained on.


3. Fine-tuning transfer learning is when you take the underlying patterns (also called weights) of a pre-trained model and adjust (fine-tune) them to your own problem.

  • This usually means training some, many, or all of the layers in the pre-trained model. This is useful when you've got a large dataset (e.g. 100+ images per class) where your data is slightly different from the data the original model was trained on.


A common workflow is to "freeze" all of the learned patterns in the bottom layers of a pre-trained model so they're untrainable, and then train the top 2-3 layers so the pre-trained model can adjust its outputs to your custom data (feature extraction).

After you've trained the top 2-3 layers, you can then gradually "unfreeze" more and more layers and run the training process on your own data to further fine-tune the pre-trained model.
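In Keras code, that workflow looks roughly like the following sketch (the layer counts are illustrative, and base_model stands in for any pre-trained tf.keras model):

# Feature extraction: freeze the entire base model first
base_model.trainable = False

# ...train your own top layers for a few epochs...

# Fine-tuning: unfreeze the base model, then re-freeze all but the last few layers
base_model.trainable = True
for layer in base_model.layers[:-3]: # keep all but the last 3 layers frozen
  layer.trainable = False

# Always recompile (ideally with a lower learning rate) after changing trainable attributes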

Now we'll get the feature vector URLs of two common computer vision architectures, EfficientNetB0 (2019) and ResNetV250 (2016) from TensorFlow Hub using the steps above.

We're getting both of these because we're going to compare them to see which performs better on our data.

Comparing different model architecture performances on the same data is a very common practice. The simple reason is that you want to know which model performs best for your problem.

# Resnet 50 V2 feature vector
resnet_url = "https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/4"

# Original: EfficientNetB0 feature vector (version 1)
efficientnet_url = "https://tfhub.dev/tensorflow/efficientnet/b0/feature-vector/1"

# # New: EfficientNetB0 feature vector (version 2)
# efficientnet_url = "https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_b0/feature_vector/2"

These URLs link to a saved pre-trained model on TensorFlow Hub. When we use them in our model, the pre-trained model will automatically be downloaded for us. To do this, we can use the hub.KerasLayer() class inside the tensorflow_hub library.


Since we're going to be comparing two models, to save ourselves code, we'll create a function create_model(). This function will take a model's TensorFlow Hub URL, instantiate a Keras Sequential model with an appropriately sized output layer, and return the model.

import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras import layers  
def create_model(model_url, num_classes=10):
  """Takes a TensorFlow Hub URL and creates a Keras Sequential model with it.
  
  Args:
    model_url (str): A TensorFlow Hub feature extraction URL.
    num_classes (int): Number of output neurons in output layer,
      should be equal to number of target classes, default 10.

  Returns:
    An uncompiled Keras Sequential model with model_url as feature
    extractor layer and Dense output layer with num_classes outputs.
  """
  # Download the pretrained model and save it as a Keras layer
  feature_extractor_layer = hub.KerasLayer(model_url,
                                           trainable=False, # freeze the underlying patterns
                                           name='feature_extraction_layer',
                                           input_shape=IMAGE_SHAPE+(3,)) # define the input image shape
  
  # Create our own model
  model = tf.keras.Sequential([
    feature_extractor_layer, # use the feature extraction layer as the base
    layers.Dense(num_classes, activation='softmax', name='output_layer') # create our own output layer      
  ])

  return model

Now we've got a function for creating a model, we'll use it to first create a model using the ResNetV250 architecture as our feature extraction layer.


Once the model is instantiated, we'll compile it using categorical_crossentropy as our loss function, the Adam optimizer, and accuracy as our metric.

# Create model
resnet_model = create_model(resnet_url, num_classes=train_data_10_percent.num_classes)

# Compile
resnet_model.compile(loss='categorical_crossentropy',
                     optimizer=tf.keras.optimizers.Adam(),
                     metrics=['accuracy'])

Now we need to fit the model. We've got the training data ready in train_data_10_percent as well as the test data saved as test_data. But before we call the fit function, there's one more thing we're going to add, a callback. More specifically, a TensorBoard callback so we can track the performance of our model on TensorBoard. We can add a callback to our model by using the callbacks parameter in the fit function. In our case, we'll pass the callbacks parameter the create_tensorboard_callback() we created earlier with some specific inputs so we know what experiments we're running.

Let's keep the experiment short and train for 5 epochs

# Fit the model
resnet_history = resnet_model.fit(train_data_10_percent,
                                  epochs=5,
                                  steps_per_epoch=len(train_data_10_percent),
                                  validation_data=test_data,
                                  validation_steps=len(test_data),
                                  # Add TensorBoard callback to model (callbacks parameter takes a list)
                                  callbacks=[create_tensorboard_callback(dir_name="tensorflow_hub", # save experiment logs here
                                                                         experiment_name="resnet50V2")]) # name of log files                                                                                                        

It seems that after only 5 epochs, the ResNetV250 feature extraction model was able to blow any of the architectures we made out of the water, achieving around 90% accuracy on the training set and nearly 80% accuracy on the test set...with only 10 percent of the training images!


That goes to show the power of transfer learning. And it's one of the main reasons whenever you're trying to model your own datasets, you should look into what pre-trained models already exist.


Let's check out our model's training curves using our plot_loss_curves function.

# If you wanted to, you could really turn this into a helper function to load in with a helper.py script...
import matplotlib.pyplot as plt

# Plot the validation and training data separately
def plot_loss_curves(history):
  """
  Returns separate loss curves for training and validation metrics.
  """ 
  loss = history.history['loss']
  val_loss = history.history['val_loss']

  accuracy = history.history['accuracy']
  val_accuracy = history.history['val_accuracy']

  epochs = range(len(history.history['loss']))

  # Plot loss
  plt.plot(epochs, loss, label='training_loss')
  plt.plot(epochs, val_loss, label='val_loss')
  plt.title('Loss')
  plt.xlabel('Epochs')
  plt.legend()

  # Plot accuracy
  plt.figure()
  plt.plot(epochs, accuracy, label='training_accuracy')
  plt.plot(epochs, val_accuracy, label='val_accuracy')
  plt.title('Accuracy')
  plt.xlabel('Epochs')
  plt.legend();
plot_loss_curves(resnet_history)
























Okay, we've trained a ResNetV250 model, time to do the same with the EfficientNetB0 model. The setup will be the exact same as before, except for the model_url parameter in the create_model() function and the experiment_name parameter in the create_tensorboard_callback() function.

# Create model
efficientnet_model = create_model(model_url=efficientnet_url, # use EfficientNetB0 TensorFlow Hub URL
                                  num_classes=train_data_10_percent.num_classes)

# Compile EfficientNet model
efficientnet_model.compile(loss='categorical_crossentropy',
                           optimizer=tf.keras.optimizers.Adam(),
                           metrics=['accuracy'])

# Fit EfficientNet model 
efficientnet_history = efficientnet_model.fit(train_data_10_percent, # only use 10% of training data
                                              epochs=5, # train for 5 epochs
                                              steps_per_epoch=len(train_data_10_percent),
                                              validation_data=test_data,
                                              validation_steps=len(test_data),
                                              callbacks=[create_tensorboard_callback(dir_name="tensorflow_hub", 
                                                                                     # Track logs under different experiment name
                                                                                     experiment_name="efficientnetB0")])

The EfficientNetB0 model does even better than the ResNetV250 model, achieving over 85% accuracy on the test set... again with only 10% of the training data.

How cool is that? With a couple of lines of code, we're able to leverage state-of-the-art models and adjust them to our own use case.

plot_loss_curves(efficientnet_history)






















From the look of the EfficientNetB0 model's loss curves, it looks like if we kept training our model for longer, it might improve even further. Perhaps that's something you might want to try?


Make a Prediction


Let's try the same image from the previous section and make a prediction.

# Create a function to import an image and resize it to be able to be used with our model
def load_and_prep_image(filename, img_shape=224):
  """
  Reads an image from filename, turns it into a tensor
  and reshapes it to (img_shape, img_shape, colour_channel).
  """
  # Read in target file (an image)
  img = tf.io.read_file(filename)

  # Decode the read file into a tensor & ensure 3 colour channels 
  # (our model is trained on images with 3 colour channels and sometimes images have 4 colour channels)
  img = tf.image.decode_image(img, channels=3)

  # Resize the image (to the same size our model was trained on)
  img = tf.image.resize(img, size = [img_shape, img_shape])

  # Rescale the image (get all values between 0 and 1)
  img = img/255.
  return img
# Adjust function to work with multi-class
def pred_and_plot(model, filename, class_names):
  """
  Imports an image located at filename, makes a prediction on it with
  a trained model and plots the image with the predicted class as the title.
  """
  # Import the target image and preprocess it
  img = load_and_prep_image(filename)

  # Make a prediction
  pred = model.predict(tf.expand_dims(img, axis=0))

  # Get the predicted class
  if len(pred[0]) > 1: # check for multi-class
    pred_class = class_names[pred.argmax()] # if more than one output, take the max
  else:
    pred_class = class_names[int(tf.round(pred)[0][0])] # if only one output, round

  # Plot the image and predicted class
  plt.imshow(img)
  plt.title(f"Prediction: {pred_class}")
  plt.axis(False);

class_names = ['chicken_curry', 'chicken_wings', 'fried_rice', 'grilled_salmon', 'hamburger', 'ice_cream', 'pizza', 'ramen', 'steak', 'sushi']
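
Rather than hard-coding the class names, you could also read them off the training data generator. A sketch, assuming train_data_10_percent was created with ImageDataGenerator's flow_from_directory (which exposes a class_indices mapping):

# class_indices maps class name -> label index, in alphabetical order
class_names = list(train_data_10_percent.class_indices.keys())
print(class_names)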

Let's try out the prediction again

pred_and_plot(efficientnet_model, "03-hamburgerandfries.jpeg", class_names)

Wow! This time the prediction is correct: "hamburger". Our custom model predicted "steak". Let's try more images.

pred_and_plot(efficientnet_model, "03-sushi.jpeg", class_names)

Again the prediction is correct: "sushi". Our custom model predicted "ramen".


This is the power of transfer learning. We'll discuss more transfer learning in the next blogs. Stay tuned.


Building an SPFx React Web Part with the PnP TreeView Control


In this article, we'll discuss building enterprise applications with the SharePoint Framework (SPFx): a React web part using the PnP TreeView control. The PnP TreeView gives a graphical presentation of hierarchical data, expanding to show sub-items and their details. We'll build an application with a four-level hierarchy (TreeView); after selecting one or more districts/houses, the user can confirm the selection, and we'll then show the document library for each selected district/house using an accordion.

Create Document Libraries

Create document libraries called House 1 and House 2 on the SharePoint site and upload a few test documents.


Let's start the process. First, we need to create a SharePoint client-side web part; please follow the Microsoft documentation for more details.


Create a new web part using the following command

yo @microsoft/sharepoint

Install the dependency libraries required for the TreeView control.

npm install @pnp/sp --save 
npm install @pnp/spfx-controls-react --save
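
The render code further below references several named exports from these libraries. At the top of your web part component you would import them along these lines (import paths follow the @pnp/spfx-controls-react documentation; the Iframe used later is assumed to be a small custom wrapper component of your own):

import { TreeView, ITreeItem, TreeViewSelectionMode, SelectChildrenMode, TreeItemActionsDisplayMode } from '@pnp/spfx-controls-react/lib/TreeView';
import { Accordion } from '@pnp/spfx-controls-react/lib/Accordion';
import { Spinner } from 'office-ui-fabric-react/lib/Spinner';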

Prepare the Data


First, we need to prepare the data for the hierarchy (TreeView). The tree control expects the data in the following JSON format:

  private testdata: any = [
    {
      key: "RegionID-1",
      label: "RegionID-1",
      subLabel: "This is a sub label for node",
      children: [
        {
          key: "TestZone-2",
          label: "ZoneID-2",
          children: [
            {
              key: "District-3",
              label: "DistrictID-3",
              children: [
                {
                  key: "1",
                  label: "House 1"
                },
                {
                  key: "2",
                  label: "House 2"
                },
                {
                  key: "3",
                  label: "House 3"
                },
                {
                  key: "4",
                  label: "House 4"
                }
              ]
            }
          ]
        }
      ]
    }
  ];

The key must be unique across the whole structure.
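
If the tree data is built dynamically, a small recursive check can catch duplicate keys early. An illustrative helper (not part of the PnP library):

  // Walk the tree and throw if any key appears more than once
  private assertUniqueKeys(nodes: any[], seen: Set<string> = new Set()): void {
    for (const node of nodes) {
      if (seen.has(node.key)) {
        throw new Error(`Duplicate TreeView key: ${node.key}`);
      }
      seen.add(node.key);
      if (node.children) {
        this.assertUniqueKeys(node.children, seen);
      }
    }
  }

You could call this.assertUniqueKeys(this.testdata) from the constructor.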


Let's start with the render() function


      <div className={styles.treeview}>
        <div className={styles.container}>
          <div className={styles.row}>
            <div>
              {/* Input text box for filtering on a specific house/district */}
              <input className={styles.filtertext}
                     placeholder="Filter a specific house/district"
                     onKeyUp={(e: any) => this.filterValue(e)} />
              {/* Show a spinner while loading, otherwise render the TreeView */}
              {this.state.loading ?
                <div><Spinner>Loading items ...</Spinner></div> :
                (
                  <TreeView
                    items={this.testdata}
                    defaultExpanded={false}
                    selectionMode={TreeViewSelectionMode.Multiple}
                    selectChildrenIfParentSelected={true}
                    selectChildrenMode={SelectChildrenMode.Select | SelectChildrenMode.Unselect}
                    showCheckboxes={true}
                    treeItemActionsDisplayMode={TreeItemActionsDisplayMode.ContextualMenu}
                    defaultSelectedKeys={this.state.selectItems}
                    expandToSelected={true}
                    defaultExpandedChildren={false}
                    onExpandCollapse={this.onExpandCollapseTree}
                    onSelect={this.onItemSelected}
                    onRenderItem={this.renderCustomTreeItem}
                  />
                )}
              {/* Confirm selection button */}
              <button className={styles.goButton}
                      disabled={this.state.selectItems.length === 0}
                      onClick={() => this.goButton()}>Confirm Selection</button>
            </div>
            <div>
              {/* Show an Accordion of document libraries for each selection; */}
              {/* the house number is taken from the hierarchy key */}
              {this.state.loadAccordion && this.state.selectItems.map((value: string) => {
                const housenumber = value; // house keys are "1".."4" in testdata
                return (
                  <Accordion title={value} defaultCollapsed={this.state.defaultCollapsed} className={"itemCell itembackground"}>
                    <Iframe data={'https://<domain>.sharepoint.com/sites/<sitename>/Shared Documents/' + housenumber} />
                  </Accordion>
                );
              })}
            </div>
          </div>
        </div>
      </div>

Now let's define the state variables:


public state = {    
    selectItems:[],
    searchItems:[],
    loading: true,
    cancelSelection: true,
    filter:null,
    timeout:0,
    sortType:'a',
    loadAccordion: true,
    defaultCollapsed: true,
 };

After the selection is made, here is how the accordion is loaded when the user presses the "Confirm Selection" button.

  goButton = () => {
    // Build a comma-separated string of the selected keys
    let sb = '';
    for (let i = 0; i < this.state.selectItems.length; i++) {
      sb = sb + this.state.selectItems[i] + ',';
    }
    window.location.reload(); // Reload the page so the accordions are rebuilt
  }
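
Note that reloading the page throws the in-memory selection away. If you need it afterwards, one option is to persist it first, e.g. in sessionStorage (a hypothetical variation, not part of the original solution):

  goButton = () => {
    // Persist the selection so it survives the reload (illustrative only)
    sessionStorage.setItem('selectedHouses', this.state.selectItems.join(','));
    window.location.reload();
  }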

Here is the function for filtering on a District/House:


  filterValue = (e: any) => {
    let keyval = e.target.value;
    this.setState({ filter: keyval });
    if (keyval === '' || keyval === null || keyval === undefined) {
      this.setState({ searchItems: [] });
      clearTimeout(this.state.timeout);
    } else {
      // Cancel any pending search before starting a new one
      if (this.state.timeout) {
        clearTimeout(this.state.timeout);
      }
      if (e.keyCode === 8) { // backspace clears the current search
        this.setState({ searchItems: [] });
        e.target.value = '';
        keyval = '';
      } else {
        // searchData is a helper class (defined elsewhere) that filters the tree data
        new searchData().searchFilterData(this.testdata, keyval, this.state);
      }
    }
    return null;
  }

When the user selects specific items from the tree (District/House):

  onItemSelected = (items: ITreeItem[]) => {
    const itemslocal = [];
    for (let i = 0; i < items.length; i++) {
      itemslocal[i] = items[i].key;
    }
    this.setState({ selectItems: itemslocal });
  }

Here is a helper for sorting the data shown in the accordion; we can call it from the constructor.


  private sort(data, type?) {
    if (!data.length) {
      return data;
    }
    // Compare by the number after the '-' in each key;
    // pass type 'd' for descending order, anything else sorts ascending
    if (type != undefined && type.toLowerCase() === 'd') {
      return data.sort((b, a) => {
        let x = Number(a.substring(a.indexOf('-') + 1));
        let y = Number(b.substring(b.indexOf('-') + 1));
        return (x < y) ? -1 : ((x > y) ? 1 : 0);
      });
    } else {
      return data.sort((a, b) => {
        let x = Number(a.substring(a.indexOf('-') + 1));
        let y = Number(b.substring(b.indexOf('-') + 1));
        return (x < y) ? -1 : ((x > y) ? 1 : 0);
      });
    }
  }
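
For example, with hypothetical keys:

  // Ascending (default):
  this.sort(['District-3', 'District-1', 'District-2']);      // ['District-1', 'District-2', 'District-3']
  // Descending:
  this.sort(['District-3', 'District-1', 'District-2'], 'd'); // ['District-3', 'District-2', 'District-1']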

Deploy the Application


Use the following commands to build and package the solution:



gulp clean && gulp bundle --ship && gulp package-solution --ship

Browse to the SharePoint Online app catalog of your tenant and deploy the application: upload the .sppkg file found in the sharepoint/solution folder. After deploying the package, you can add this web part to SharePoint pages across the tenant.

