  • Writer: Sumit Dey
  • 4 days ago
  • 4 min read

An MCP server is a service that exposes structured data or functionality (like a database, API, or knowledge base) to be consumed by an MCP client and then used by an LLM.


Here’s the basic flow:

  1. MCP Server provides access to data or tools (for example, a CRM database, internal documentation, an API, SharePoint, etc.). It exposes these through MCP-compatible APIs or endpoints.

  2. MCP Client connects to one or more MCP servers. It sends structured requests (like “get weather info” or “perform a math calculation”) to the MCP server.

  3. LLM (Language Model): the client takes the data returned from the server and passes it to the LLM, which uses that information to generate natural-language output or take further actions.


Use Case

Step 1: The user sends a prompt → The chatbot receives it.

Step 2: Based on the prompt, the MCP client decides which MCP server to call (the Weather server or the custom Math server).

Step 3: The MCP client interacts with one or more MCP servers to retrieve data or perform actions, and then uses a Large Language Model (LLM) to generate a reasoning-based output from the server’s response.


Step 4: The response is sent back to the user.




Custom Math MCP server


We have built a custom Math MCP server that supports addition, subtraction, multiplication, and division operations. The MCP server communicates using the stdio transport.



from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@mcp.tool()
def subtract(a: int, b: int) -> int:
    """Subtract two numbers"""
    return a - b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers"""
    return a * b

@mcp.tool()
def divide(a: int, b: int) -> float:
    """Divide two numbers"""
    return a / b


# Use standard input/output (stdin and stdout) to receive and respond to tool calls.
if __name__ == "__main__":
    mcp.run(transport="stdio")
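
Because the transport is stdio, this server is not a standalone network service: the MCP client launches it as a subprocess (running python mathserver.py, the file name assumed in the client configuration later in this post) and exchanges messages over standard input and output, so no port needs to be opened.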

Weather MCP Server


A Weather MCP server is a custom MCP server designed to handle weather data requests. It provides tools for retrieving weather information, such as active alerts, current conditions, and forecasts for a given state code (e.g., CA or NJ), using the streamable-http transport protocol to exchange messages with the MCP client.


from mcp.server.fastmcp import FastMCP
from typing import Any
import httpx
# Initialize FastMCP server
mcp = FastMCP("weather")

# Constants
WEATHER_API_BASE = "https://api.weather.gov"
USER_AGENT = "weather-app/1.0"


async def make_nws_request(url: str) -> dict[str, Any] | None:
    """Make a request to the NWS API with proper error handling."""
    headers = {
        "User-Agent": USER_AGENT,
        "Accept": "application/geo+json"
    }
    async with httpx.AsyncClient() as client:
        try:
            response = await client.get(url, headers=headers, timeout=30.0)
            response.raise_for_status()
            return response.json()
        except Exception:
            return None
        
def format_alert(feature: dict) -> str:
    """Format an alert feature into a readable string."""
    props = feature["properties"]
    return f"""
        Event: {props.get('event', 'Unknown')}
        Area: {props.get('areaDesc', 'Unknown')}
        Severity: {props.get('severity', 'Unknown')}
        Description: {props.get('description', 'No description available')}
        Instructions: {props.get('instruction', 'No specific instructions provided')}
        """

@mcp.tool()
async def get_alerts(state: str) -> str:
    """Get weather alerts for a US state.

    Args:
        state: Two-letter US state code (e.g. CA, NY)
    """
    url = f"{WEATHER_API_BASE}/alerts/active/area/{state}"
    data = await make_nws_request(url)

    if not data or "features" not in data:
        return "Unable to fetch alerts or no alerts found."

    if not data["features"]:
        return "No active alerts for this state."

    alerts = [format_alert(feature) for feature in data["features"]]
    return "\n---\n".join(alerts)

if __name__ == "__main__":
    mcp.run(transport="streamable-http")
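
The listing above only includes the alerts tool, even though the description also mentions current conditions and forecasts. A forecast tool is not part of the original server; purely as a hypothetical sketch (note that the NWS API serves forecasts per latitude/longitude point rather than per state code), it could look like this:

@mcp.tool()
async def get_forecast(latitude: float, longitude: float) -> str:
    """Get the short-term forecast for a location (hypothetical tool, not in the original post).

    Args:
        latitude: Latitude of the location
        longitude: Longitude of the location
    """
    # The NWS API first resolves a point to its grid-specific forecast URL
    points_url = f"{WEATHER_API_BASE}/points/{latitude},{longitude}"
    points_data = await make_nws_request(points_url)
    if not points_data:
        return "Unable to fetch forecast data for this location."

    forecast_url = points_data["properties"]["forecast"]
    forecast_data = await make_nws_request(forecast_url)
    if not forecast_data:
        return "Unable to fetch detailed forecast."

    # Format the next few forecast periods into readable text
    periods = forecast_data["properties"]["periods"][:5]
    forecasts = [
        f"{p['name']}: {p['temperature']}°{p['temperatureUnit']}, {p['detailedForecast']}"
        for p in periods
    ]
    return "\n---\n".join(forecasts)

Such a tool would be registered like the others and picked up automatically when the client calls get_tools().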

MCP Client


An MCP client (Model Context Protocol client) is the component that connects to one or more MCP servers (like the weather and custom math MCP servers) to request specialized data, tools, or computations, and then uses the results (often through an LLM, such as Llama or ChatGPT) to generate contextually aware outputs.


from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_groq import ChatGroq
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv
from fastapi import FastAPI, Query
import uvicorn
from pydantic import BaseModel

load_dotenv()

app = FastAPI(title='MCP AI Agent')

class ChatRequest(BaseModel):
    user_input: str

@app.post("/chat")
async def main(request: ChatRequest):
    client = MultiServerMCPClient(
        {
            "math":{
                "command":"python",
                "args":["mathserver.py"], ## Ensure correct absolute path
                "transport":"stdio",
            
            },
            "weather": {
                "url": "http://localhost:8000/mcp",  # Ensure server is running here
                "transport": "streamable_http",
            }

        }
    )

    # GROQ_API_KEY is read from the environment (populated from .env by load_dotenv above)

    tools = await client.get_tools()
    model = ChatGroq(model="llama-3.3-70b-versatile")  # Groq-hosted Llama model

    agent = create_react_agent(model, tools)

    result = await agent.ainvoke({"messages": [{"role": "user", "content": request.user_input}]})
    return result

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=8002)
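
With the weather MCP server already running on port 8000 and this FastAPI app started on port 8002, the endpoint can be exercised directly. A minimal smoke test with requests (the prompt is just an example) might look like this:

import requests

# Assumes the FastAPI client app above is running on 127.0.0.1:8002
resp = requests.post(
    "http://127.0.0.1:8002/chat",
    json={"user_input": "What is (3 + 5) * 12?"},
)
print(resp.json())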

Now it’s time to build the UI with Streamlit.


import streamlit as st
import requests

# Streamlit App Configuration
st.set_page_config(page_title="MCP MultiAgent UI", layout="centered")

# Define API endpoint
API_URL = "http://127.0.0.1:8002/chat"

# Streamlit UI Elements
st.title("MCP Chatbot Agent")
st.write("Interact with the MCP using this interface.")


# Input box for user messages
user_input = st.text_area("Enter your prompt:", height=150, placeholder="Please type your prompt here...")


# Button to send the query
if st.button("Submit"):
    if user_input.strip():
        try:

            with st.spinner("wait...", show_time=True):
                # Send the input to the FastAPI backend
                payload = {"user_input": user_input}
                response = requests.post(API_URL, json=payload)

            # Display the response
            if response.status_code == 200:
                response_data = response.json()
                if "error" in response_data:
                    st.error(response_data["error"])
                else:
                    ai_responses = [
                        message.get("content", "")
                        for message in response_data.get("messages", [])
                        if message.get("type") == "ai"
                    ]

                    if ai_responses:
                        st.subheader("Agent Response:")
                        for response_text in ai_responses:
                            st.markdown(response_text)
                    else:
                        st.warning("No AI response found in the agent output.")
            else:
                st.error(f"Request failed with status code {response.status_code}.")
        except Exception as e:
            st.error(f"An error occurred: {e}")
    else:
        st.warning("Please enter a message before clicking 'Send Query'.")

Output of the agent when using the Math MCP server



Output of the agent when using the weather MCP server


Conclusion

By separating logic and computation from the client, the MCP server enables modularity, scalability, and reusability across different applications. Whether it’s a math server performing calculations or a weather server providing forecasts, MCP servers enhance the capability of AI-driven systems by integrating external tools and data sources seamlessly.

 
 
 

This app is a comprehensive research and analysis platform designed for professionals and academics. It enables users to collect, organize, and analyze data efficiently. With built-in tools for literature review management, data visualization, and AI-assisted insights, this agent helps researchers make evidence-based decisions faster.


Key Features:


  • AI-powered data summarization and trend detection

  • Integration with Tavily search and OpenAI models

  • Real-time market data aggregation

  • Predictive modeling for sales and growth trends

  • Executive summary generation after research and analysis


Technical Overview

  • LangGraph — for orchestrating multi-agent workflows and reasoning chains

  • Python / FastAPI — backend service for agent coordination

  • OpenAI — for LLM-based analysis and report generation

  • Streamlit — for UI and interactive visualization

  • LangChain Tools — for data ingestion and retrieval-augmented generation (RAG)


Architecture Summary:

The app uses a LangGraph-based agent graph, where:

  • A Supervisor Agent assigns tasks to the other agents.

  • A Research Agent gathers the research content:

    1. Key facts and background

    2. Current trends or developments

    3. Important statistics or data points

    4. Notable examples or case studies

  • An Analysis Agent performs comparative and statistical reasoning:

    1. Key insights and patterns

    2. Strategic implications

    3. Risks and opportunities

    4. Recommendations

  • A Writer Agent generates the final insights, executive summary, and recommendations:

    1. Executive Summary

    2. Key Findings

    3. Analysis & Insights

    4. Recommendations

    5. Conclusion



Python code details

# State Definition

class SupervisorState(MessagesState):
    """State for the multi-agent system"""
    next_agent: str = ""
    research_data: str = ""
    analysis: str = ""
    final_report: str = ""
    task_complete: bool = False
    current_task: str = ""
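
One detail the post leaves out is the imports these snippets rely on. A plausible set to put at the top of the file, assuming recent versions of the langgraph, langchain, and Tavily packages (exact module paths can vary by version), is sketched below; the FastAPI, pydantic, and uvicorn imports are the same as in the MCP client example earlier.

from typing import Dict, Literal
from datetime import datetime

from langchain_core.messages import AIMessage, HumanMessage
from langchain_core.tools import tool
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, MessagesState, END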

# Define tools

@tool
def search_web(query: str) -> str:
    """Search the web for information."""
    # Using Tavily for web search
    search = TavilySearchResults(max_results=3)
    results = search.invoke(query)
    return str(results)

@tool
def write_summary(content: str) -> str:
    """Write a summary of the provided content."""
    # Simple summary generation
    summary = f"Summary of findings:\n\n{content[:600]}..."
    return summary


# Call the OpenAI LLM

llm = ChatOpenAI(model="gpt-4o-mini")

# Supervisor agent (decides which agent runs next)

def supervisor_agent(state: SupervisorState) -> Dict:
    """Supervisor decides next agent using OpenAI LLM"""
	.....
     .....
         # Determine next agent
    if "done" in decision_text or has_report:
        next_agent = "end"
        supervisor_msg = f"**Supervisor:** All tasks complete! Great work team."
    elif "researcher" in decision_text or not has_research:
        next_agent = "researcher"
        supervisor_msg = f"**Supervisor:** Let's start with research. Assigning to Researcher..."
    elif "analyst" in decision_text or (has_research and not has_analysis):
        next_agent = "analyst"
        supervisor_msg = f"**Supervisor:** Research done. Time for analysis. Assigning to Analyst..."
    elif "writer" in decision_text or (has_analysis and not has_report):
        next_agent = "writer"
        supervisor_msg = f"**Supervisor:** Analysis complete. Let's create the report. Assigning to Writer..."
    else:
        next_agent = "end"
        supervisor_msg = f"**Supervisor:** Task seems complete."
    return {
        "messages": [AIMessage(content=supervisor_msg)],
        "next_agent": next_agent,
        "current_task": task
    }
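
The two "....." lines in supervisor_agent above stand for code the post does not show. Purely as a hypothetical reconstruction (suggested by the variables decision_text, has_research, has_analysis, has_report, and task used below them), that elided step probably inspects the state and asks the LLM which agent should act next, along these lines:

    # Hypothetical sketch of the elided portion of supervisor_agent
    # (not shown in the original post)
    task = state["messages"][0].content if state["messages"] else "No task"
    has_research = bool(state.get("research_data", ""))
    has_analysis = bool(state.get("analysis", ""))
    has_report = bool(state.get("final_report", ""))

    decision_prompt = f"""You are a supervisor coordinating researcher, analyst, and writer agents.
    Task: {task}
    Research done: {has_research}. Analysis done: {has_analysis}. Report done: {has_report}.
    Reply with a single word: researcher, analyst, writer, or done."""
    decision_text = llm.invoke([HumanMessage(content=decision_prompt)]).content.lower()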

# Agent 1: Researcher (using OpenAI)

def researcher_agent(state: SupervisorState) -> Dict:
    """Researcher uses OpenAI to gather information"""
    task = state.get("current_task", "research topic")

    # Create research prompt
    research_prompt = f"""As a research specialist, provide comprehensive information about: {task}

    Include key facts and background, current trends or developments,
    important statistics or data points, and notable examples or case studies."""

    # Get research findings from the LLM
    research_response = llm.invoke([HumanMessage(content=research_prompt)])
    research_data = research_response.content

    # Create agent message
    agent_message = f"**Researcher:** I've completed the research on '{task}'.\n\nKey findings:\n{research_data[:600]}..."

    return {
        "messages": [AIMessage(content=agent_message)],
        "research_data": research_data,
        "next_agent": "supervisor"
    }

# Agent 2: Analyst (using OpenAI)

def analyst_agent(state: SupervisorState) -> Dict:
    """Analyst uses OpenAI to analyze the research"""
    research_data = state.get("research_data", "")
    task = state.get("current_task", "")

    # Create analysis prompt
    analysis_prompt = f"""As a data analyst, analyze this research data for the task '{task}' and provide insights:

    {research_data[:800]}

    Cover: key insights and patterns, strategic implications, risks and opportunities, and recommendations."""

    # Get analysis from LLM
    analysis_response = llm.invoke([HumanMessage(content=analysis_prompt)])
    analysis = analysis_response.content
    
    # Create agent message
    agent_message = f"**Analyst:** I've completed the analysis.\n\nTop insights:\n{analysis[:600]}..."
    
    return {
        "messages": [AIMessage(content=agent_message)],
        "analysis": analysis,
        "next_agent": "supervisor"
    }

# Agent 3: Writer (using OpenAI)

def writer_agent(state: SupervisorState) -> Dict:
    """Writer uses OpenAI to create the final report"""
    
    research_data = state.get("research_data", "")
    analysis = state.get("analysis", "")
    task = state.get("current_task", "")
    # Create writing prompt
    writing_prompt = f"""As a professional writer, create an executive report based on:

    Task: {task}

    Research Findings:
    {research_data[:800]}

    Analysis:
    {analysis[:800]}

    Create a well-structured report with:
    1. Executive Summary
    2. Key Findings  
    3. Analysis & Insights
    4. Recommendations
    5. Conclusion

    Keep it professional and concise."""
        
    # Get report from LLM
    report_response = llm.invoke([HumanMessage(content=writing_prompt)])
    report = report_response.content

    # Create final formatted report
    final_report = f"""
    **FINAL REPORT**

    {'='*50}

    Generated: {datetime.now().strftime('%Y-%m-%d %H:%M')}

    {'='*50}

    {report}

    {'='*50}
    Report compiled by Multi-Agent AI System powered by Open AI
    """
    
    return {
        #"messages": [AIMessage(content=f"Writer: Report complete! See below for the full document.")],
        "messages": [AIMessage(content=final_report)],
        "final_report": final_report,
        "next_agent": "supervisor",
        "task_complete": True
    }

# Router function (decides the next node based on the state)

def router(state: SupervisorState) -> Literal["supervisor", "researcher", "analyst", "writer", "__end__"]:
    """Routes to next agent based on state"""
    
    next_agent = state.get("next_agent", "supervisor")
    
    if next_agent == "end" or state.get("task_complete", False):
        return END
        
    if next_agent in ["supervisor", "researcher", "analyst", "writer"]:
        return next_agent
        
    return "supervisor"

# Create LangGraph workflow and compile

workflow = StateGraph(SupervisorState)

# Add nodes
workflow.add_node("supervisor", supervisor_agent)
workflow.add_node("researcher", researcher_agent)
workflow.add_node("analyst", analyst_agent)
workflow.add_node("writer", writer_agent)

# Set entry point
workflow.set_entry_point("supervisor")

# Add routing
for node in ["supervisor", "researcher", "analyst", "writer"]:
    workflow.add_conditional_edges(
        node,
        router,
        {
            "supervisor": "supervisor",
            "researcher": "researcher",
            "analyst": "analyst",
            "writer": "writer",
            END: END
        }
    )

graph=workflow.compile()
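
To double-check the wiring before serving it, the compiled graph can be rendered; this is optional, not part of the original post, and assumes a langgraph version that exposes Mermaid drawing on get_graph():

# Optional sanity check: print a Mermaid diagram of the supervisor/worker graph
print(graph.get_graph().draw_mermaid())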

# Chat API endpoint that receives user input, processes it through an AI workflow, and returns the generated result.

@app.post("/chat")
def chat(request: ChatRequest):
    result = graph.invoke({"messages": [{"role": "user", "content": request.user_input}]})
    return result

# Starts the FastAPI server when the Python file is run directly

if __name__ == '__main__':
    uvicorn.run(app, host='127.0.0.1', port=8000)

# Now it's time to build the UI with Streamlit

import streamlit as st
import requests

# Streamlit App Configuration
st.set_page_config(page_title="LangGraph MultiAgent UI", layout="centered")

# Define API endpoint
API_URL = "http://127.0.0.1:8000/chat"

# Streamlit UI Elements
st.title("Research and Analysis Chatbot Agent")
st.write("Interact with the LangGraph-based agent using this interface.")

# Finally, build the final UI (text box, submit button, result, etc.)

# Input box for user messages
user_input = st.text_area("Enter your prompt:", height=150, placeholder="Please type your prompt here...")


# Button to send the query
if st.button("Submit"):
    if user_input.strip():
        try:

            with st.spinner("wait...", show_time=True):
                # Send the input to the FastAPI backend
                payload = {"user_input": user_input}
                response = requests.post(API_URL, json=payload)

            # Display the response
            if response.status_code == 200:
                response_data = response.json()
                if "error" in response_data:
                    st.error(response_data["error"])
                else:
                    ai_responses = [
                        message.get("content", "")
                        for message in response_data.get("messages", [])
                        if message.get("type") == "ai"
                    ]

                    if ai_responses:
                        st.subheader("Agent Response:")
                        #st.markdown(f"**Final Response:** {ai_responses[-1]}")
                        for i, response_text in enumerate(ai_responses, 1):
                            st.markdown(f"{response_text}")
                            #st.markdown(f"**Response {i}:** {response_text}")
                    else:
                        st.warning("No AI response found in the agent output.")
            else:
                st.error(f"Request failed with status code {response.status_code}.")
        except Exception as e:
            st.error(f"An error occurred: {e}")
    else:
        st.warning("Please enter a message before clicking 'Send Query'.")

Finally, it’s time to enter the prompt and get the desired output


Start with the prompt "What will be the future of quantum computing in the year 2026?"



Here is the response:

Agent Response:

Supervisor: Let's start with research. Assigning to Researcher...

Researcher: I've completed the research on 'What will be the future of quantum computing in the year 2026?'.

Key findings:

Future of Quantum Computing in 2026

1. Key Facts and Background

Quantum computing leverages quantum mechanics principles to process information in fundamentally different ways than classical computing. Unlike classical bits that represent either a 0 or 1, quantum bits (qubits) can represent both simultaneously due to a phenomenon known as superposition. Quantum entanglement further allows qubits to be interconnected, exponentially increasing computational power for specific types of pr...

Supervisor: Research done. Time for analysis. Assigning to Analyst...

Analyst: I've completed the analysis.

Top insights:

Insights and Analysis of the Research Data on Quantum Computing in 2026

1. Key Insights and Patterns

  • Technological Shift: Quantum computing is set to fundamentally disrupt traditional computing paradigms, with significant advancements anticipated in hardware sophistication, algorithm development, and real-world application capabilities. The integration of quantum computing in vari...

Supervisor: Analysis complete. Let's create the report. Assigning to Writer...

FINAL REPORT

==================================================
Generated: 2025-10-26 13:06
==================================================
# Executive Report: Insights and Analysis of Quantum Computing in 2026

Executive Summary

As quantum computing approaches a transformative threshold by 2026, this report compiles critical insights from the latest research data. The foundational technology is poised to disrupt conventional computing paradigms through advancements in hardware and algorithms, leading to real-world applications across various sectors. Investment trends indicate growing confidence in quantum technologies, with financial projections reaching $24 billion and a market size expected to grow to $8 billion at a compound annual growth rate (CAGR) of over 30%. This report provides an analysis of the current state of quantum computing, future trends, and strategic recommendations for stakeholders in the industry.

Key Findings

  1. Technological Shift:

    • Quantum computing is anticipated to revolutionize traditional computing, promising substantial enhancements in computational capabilities and efficiency.

  2. Investment Trends:

    • Global financial investment in quantum technologies is projected to reach $24 billion by 2026, demonstrating increasing confidence from governmental and private sectors.

  3. Market Dynamics:

    • The quantum computing market is expected to grow to approximately $8 billion by 2026, with a CAGR exceeding 30%, indicating broad interest and adoption across industries.

  4. Hardware Advancements:

    • Significant development in qubit technologies—including superconducting qubits, trapped ions, and topological qubits—is being pursued by leading companies to improve system stability and reduce error rates.

Analysis & Insights

The analysis of the current quantum computing landscape reveals several key patterns and trends:

  • Disruption of Existing Paradigms: Quantum computing's ability to perform complex calculations far surpasses classical systems, paving the way for new applications in fields such as cryptography, drug discovery, and optimization problems.

  • Ecosystem Development: The expanding ecosystem encompassing hardware manufacturers, software developers, and research institutions signifies a collective effort to enhance the practical implementation of quantum technologies.

  • Skill Gap and Workforce Development: As the industry grows, the need for skilled professionals in quantum computing is becoming critical. Investments in education and training programs are necessary to cultivate a workforce capable of leveraging these technologies.

Recommendations

  1. Increased Collaboration: Encourage collaboration between academia, industry, and government to accelerate research, share knowledge, and develop best practices in quantum computing.

  2. Focus on Real-World Applications: Identify and prioritize specific applications with immediate potential benefits, such as optimization in logistics, materials science, and pharmaceuticals, to stimulate early adoption.

  3. Investment in Talent Development: Launch initiatives aimed at educating and training the next generation of quantum computing professionals to address the skill gap and support industry growth.

  4. Monitor Regulatory Developments: Stay abreast of regulatory changes that may impact the development and implementation of quantum technologies to ensure compliance and strategic alignment.

Conclusion

The landscape of quantum computing is rapidly evolving, with significant advancements anticipated by 2026. The convergence of technological developments, investment growth, and market dynamics presents considerable opportunities for businesses and researchers alike. By understanding these trends and aligning strategies accordingly, stakeholders can position themselves at the forefront of this groundbreaking field, harnessing its potential to drive innovation and efficiency across various sectors.

===========================================================
Report compiled by Multi-Agent AI System powered by Open AI


 
 
 
  • Writer: Sumit Dey
  • Apr 11, 2022
  • 3 min read

There are many ways to build machine learning models, and we have to run many experiments with them, so it is very important to save models at different stages of experimentation. Today we will discuss how to start an experiment and save the model for future reference. Let's create a feature extraction model and save the whole model to a file.


Use TensorFlow Datasets to Download Data


What are TensorFlow Datasets?

  • Load data already in Tensors

  • Practice on well-established datasets

  • Experiment with different data loading techniques.

  • Experiment with new TensorFlow features quickly (such as mixed precision training)

Why not use TensorFlow Datasets?

  • The datasets are static (they don't change as your real-world datasets would)

  • Might not be suited for your particular problem (but great for experimenting)

To find all of the available datasets in TensorFlow Datasets, you can use the list_builders() method. It looks like the dataset we're after is available (note there are plenty more available but we're on Food101). To get access to the Food101 dataset from the TFDS, we can use the tfds.load() method. In particular, we'll have to pass it a few parameters to let it know what we're after:

  • name (str) : the target dataset (e.g. "food101")

  • split (list, optional) : what splits of the dataset we're after (e.g. ["train", "validation"])

    • the split parameter is quite tricky. See the documentation for more.

  • shuffle_files (bool) : whether or not to shuffle the files on download, defaults to False

  • as_supervised (bool) : True to download data samples in tuple format ((data, label)) or False for dictionary format

  • with_info (bool) : True to download dataset metadata (labels, number of samples, etc.)


# Get TensorFlow Datasets
import tensorflow_datasets as tfds
# Load in the data (takes about 5-6 minutes in Google Colab)
(train_data, test_data), ds_info = tfds.load(name="food101", # target dataset to get from TFDS
                                             split=["train", "validation"], # what splits of data should we get? note: not all datasets have train, valid, test
                                             shuffle_files=True, # shuffle files on download?
                                             as_supervised=True, # download data in tuple format (sample, label), e.g. (image, label)
                                             with_info=True) # include dataset metadata? if so, tfds.load() returns tuple (data, ds_info)

After a few minutes of downloading, we've now got access to the entire Food101 dataset (in tensor format) ready for modeling. Now let's get a little information from our dataset, starting with the class names. Getting class names from a TensorFlow Datasets dataset requires the dataset metadata (the ds_info variable, returned when we pass with_info=True to the tfds.load() method). We can access the class names of a particular dataset using the ds_info.features attribute and accessing the names attribute of the "label" key.

# Get class names
class_names = ds_info.features["label"].names
class_names[:10]

Now let's create the model.

import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.layers.experimental import preprocessing

# Create base model
input_shape = (224, 224, 3)
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
base_model.trainable = False # freeze base model layers

# Create Functional model 
inputs = layers.Input(shape=input_shape, name="input_layer", dtype=tf.float16)
# Note: EfficientNetBX models have rescaling built-in but if your model didn't you could have a layer like below
# x = preprocessing.Rescaling(1./255)(x)
x = base_model(inputs, training=False) # set base_model to inference mode only
x = layers.GlobalAveragePooling2D(name="pooling_layer")(x)
x = layers.Dense(len(class_names))(x) # want one output neuron per class 
# Separate activation of output layer so we can output float32 activations
outputs = layers.Activation("softmax", dtype=tf.float32, name="softmax_float32")(x) 
model = tf.keras.Model(inputs, outputs)

# Compile the model
model.compile(loss="sparse_categorical_crossentropy", # Use sparse_categorical_crossentropy when labels are *not* one-hot
              optimizer=tf.keras.optimizers.Adam(),
              metrics=["accuracy"])

Get the model summary


# Get the model summary
model.summary()


Save the whole model to file

We can also save the whole model using the save() method. Since our model is quite large, you might want to save it to Google Drive (if you're using Google Colab) so you can load it in for use later.


## Saving model to Google Drive
# Create save path to drive 
save_dir = "drive/MyDrive/tensorflow_blog/food_vision/07_efficientnetb0_feature_extract_model_mixed_precision/"
# os.makedirs(save_dir) # Make directory if it doesn't exist

# Save model
model.save(save_dir)

We can also save it directly to our Google Colab instance.


# Save model locally (if you're using Google Colab, your saved model will be deleted when the Colab instance terminates)
save_dir = "07_efficientnetb0_feature_extract_model_mixed_precision"
model.save(save_dir)

And again, we can check whether or not our model is saved correctly by loading it.

# Load model previously saved above
loaded_saved_model = tf.keras.models.load_model(save_dir)

Get the model summary

# Get the model summary
loaded_saved_model.summary()

Both models look the same.
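
As an extra sanity check (not in the original post), the two models can also be compared programmatically, for example by confirming that their parameter counts match:

# The reloaded model should report exactly the same number of parameters
assert loaded_saved_model.count_params() == model.count_params()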

 
 
 
