omarfarouk228/smart_recipe_assistant
Smart Recipe Assistant - On-Device AI with Flutter & Gemma

This repository contains the source code for the Smart Recipe Assistant, a Flutter application demonstrating the power of on-device AI with Google's Gemma 3 1B model via the flutter_gemma package.

The app allows users to generate personalized recipes from ingredient photos or text input. It serves as the foundation for a hands-on workshop designed to teach developers how to integrate Large Language Models (LLMs) directly on mobile devices, without servers or internet connectivity.

✨ Features

  • ✍️ Text Input: Enter your ingredient list to generate recipes
  • 🌍 Multilingual Translation: Translate your recipes into multiple languages (French, Spanish, etc.)
  • 🔒 Privacy-First: All data stays on your device; nothing is sent to a server
  • ⚡ Zero Latency: Works entirely offline, with no network round-trips
  • 💰 Zero Cost: No API fees or server costs

🎯 Demonstrated Use Cases

This project showcases the following on-device AI capabilities:

  • Text generation
  • Structured and creative content generation
  • Real-time natural language processing
  • Response streaming for better UX
  • LLM model downloading and management
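The response-streaming item above can be sketched in Dart. This is an illustrative outline, not the app's exact code: it assumes the API shape documented by flutter_gemma (`createModel`, `createSession`, `addQueryChunk`, `getResponseAsync`), which may differ between package versions:

```dart
import 'package:flutter_gemma/flutter_gemma.dart';

/// Streams recipe text token-by-token so the UI can render
/// partial output instead of waiting for the full response.
Future<void> streamRecipe(String ingredients) async {
  final model = await FlutterGemmaPlugin.instance.createModel(
    modelType: ModelType.gemmaIt, // instruction-tuned Gemma
  );
  final session = await model.createSession(temperature: 0.7);

  await session.addQueryChunk(
    Message.text(text: 'Create a recipe using: $ingredients', isUser: true),
  );

  // Each chunk arrives as soon as the model emits it.
  await for (final token in session.getResponseAsync()) {
    print(token); // in the app: append to a state buffer and call setState()
  }

  await session.close();
  await model.close();
}
```

Streaming matters on-device: at roughly 15-25 tokens/sec a full recipe takes tens of seconds, so showing partial output keeps the app feeling responsive.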

πŸ› οΈ Technology Stack

  • Framework: Flutter 3.32.1
  • AI Integration: flutter_gemma (^0.11.4)
  • AI Model: Gemma 3 1B (1 billion parameters, multimodal)
  • Model Size: 500 Mb
  • Backend: 100% on-device (GPU/CPU via LiteRT)

πŸ“ Repository Structure

This repository is structured to support a workshop format:

  • main branch: Complete and functional code with all features implemented
  • starter branch (coming soon): Starting point for the workshop with complete UI and a mock service

lib/
├── main.dart                      # App entry point
├── services/
│   ├── gemma_service.dart         # Gemma integration service
│   └── recipe_service.dart        # Recipe generation service
├── screens/
│   ├── home_screen.dart           # Main screen
│   ├── recipe_result_screen.dart  # Recipe display (with images)
│   └── text_input_screen.dart     # Manual ingredient input
└── models/
    └── (data models)

🚀 Quick Start

Prerequisites

  • Flutter SDK 3.0 or higher
  • Dart 3.0+
  • Android Studio / Xcode
  • Minimum 4 GB RAM on target device
  • 500 MB free disk space for the model

Installation

  1. Clone the repository:

    git clone https://github.com/omarfarouk228/smart_recipe_assistant.git
    cd smart_recipe_assistant
  2. Install dependencies:

    flutter pub get
  3. Android Configuration (android/app/build.gradle):

    android {
        ...
        defaultConfig {
            minSdkVersion 24  // Minimum required for flutter_gemma
        }
    }
  4. iOS Configuration (ios/Podfile):

    platform :ios, '15.0'  # Minimum required
  5. Create .env:

    cp .env.example .env
  6. Update .env:

    Replace the HUGGINGFACE_TOKEN with your actual HuggingFace access token.

  7. Run the application:

    flutter run

Note: On first launch, the app will automatically download the Gemma 3 1B model (~500 MB) from HuggingFace. This download may take 5-10 minutes depending on your connection.
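For reference, the download step can be expressed with flutter_gemma's model manager. This is a hedged sketch: the URL below is a placeholder (the app reads the real one, plus your HUGGINGFACE_TOKEN, from .env), and the `modelManager` method names follow the package documentation but may vary between versions:

```dart
import 'package:flutter_gemma/flutter_gemma.dart';

/// Downloads the ~500 MB .task file once and caches it on-device;
/// subsequent launches skip straight to inference.
Future<void> ensureModelInstalled() async {
  final manager = FlutterGemmaPlugin.instance.modelManager;

  if (await manager.isModelInstalled) return; // already cached

  // Placeholder URL: point this at the hosted Gemma 3 1B .task file.
  await manager.downloadModelFromNetwork(
    'https://example.com/models/gemma3-1b-it.task',
  );
}
```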

📱 Usage

1. Manual Input

  • Enter your ingredients (e.g., "tomatoes, mozzarella, basil")
  • Click "Generate Recipe"

2. Advanced Features

  • Copy: Copy the recipe to clipboard
  • Translate: Instantly translate the recipe into another language (e.g., French or Spanish)
  • Share: Share the generated recipe

🎓 Workshop: Build This App Step-by-Step

Ready to dive in and build the AI features yourself?

We've prepared a detailed, step-by-step guide that will walk you through the entire process of integrating the Gemma model with the flutter_gemma SDK.

➡️ Start the Workshop (60 minutes)

What You'll Learn:

  1. ✅ Download and initialize an on-device LLM model
  2. ✅ Perform text inference
  3. ✅ Handle response streaming for better UX
  4. ✅ Optimize performance and memory consumption
  5. ✅ Implement effective prompts for generation
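As a taste of the prompting step, a recipe prompt can be assembled from structured inputs. The helper below is illustrative, not the app's actual prompt in recipe_service.dart; constraining the output format makes the small 1B model noticeably more predictable:

```dart
/// Builds a constrained prompt so the model returns a predictable
/// recipe structure (title, ingredients, numbered steps).
String buildRecipePrompt(List<String> ingredients, {String language = 'English'}) {
  return '''
You are a helpful cooking assistant.
Using ONLY these ingredients: ${ingredients.join(', ')}.
Write one recipe in $language with exactly three sections:
1. Title
2. Ingredients (with quantities)
3. Steps (short, numbered sentences)
''';
}
```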

πŸ—οΈ Architecture

Data Flow

┌─────────────────────────────────────────────────┐
│              User Interface                     │
│  (home_screen, recipe_result_screen, etc.)      │
└────────────────────┬────────────────────────────┘
                     │
                     ▼
┌─────────────────────────────────────────────────┐
│           GemmaService                          │
│  (download, initialization, inference)          │
└────────────────────┬────────────────────────────┘
                     │
                     ▼
┌─────────────────────────────────────────────────┐
│         flutter_gemma Plugin                    │
│     (Native Android/iOS via Pigeon)             │
└────────────────────┬────────────────────────────┘
                     │
                     ▼
┌─────────────────────────────────────────────────┐
│      MediaPipe LiteRT (Native Layer)            │
│    (TensorFlow Lite + GPU Acceleration)         │
└────────────────────┬────────────────────────────┘
                     │
                     ▼
┌─────────────────────────────────────────────────┐
│        Gemma 3 1B Model (.task)                 │
│                    (~500 MB)                    │
└─────────────────────────────────────────────────┘

🔧 Advanced Configuration

Customize Model Parameters

In gemma_service.dart, you can adjust:

session = await _inferenceModel!.createSession(
  // Customizable parameters:
  // temperature: 0.7,  // Creativity (0.0-2.0)
  // topK: 40,          // Diversity
  // topP: 0.95,        // Nucleus sampling
);

Use a Different Model

flutter_gemma supports multiple models:

// Available models (see model.dart):
Model.gemma3_270M      // 0.3 GB - Ultra-compact
Model.gemma3_1B        // 0.5 GB - Lightweight
Model.gemma3n_2B       // 3.1 GB - Multimodal ⭐
Model.gemma3n_4B       // 6.5 GB - More powerful
Model.qwen25_1_5B      // 1.6 GB - Alternative
Model.deepseek         // 1.7 GB - Reasoning

📊 Performance & Limitations

Benchmarks (Test Device: Pixel 7 Pro)

Metric                  Value
----------------------  -----------------
Initial load time       ~8 seconds
Download time (WiFi)    5-10 minutes
Generation speed        ~15-25 tokens/sec
RAM consumption         ~2.5 GB
CPU/GPU usage           Moderate

Known Limitations

  • ⚠️ Requires recent device (2020+) for optimal performance
  • ⚠️ Small models (1B-2B) may hallucinate on complex queries
  • ⚠️ Supported languages: mainly EN/FR/ES (other languages limited)

🤝 Contributing

Contributions are welcome! Here's how to participate:

  1. Fork the project
  2. Create a branch for your feature (git checkout -b feature/AmazingFeature)
  3. Commit your changes (git commit -m 'Add: amazing feature')
  4. Push to the branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

  • Google for the Gemma model and MediaPipe LiteRT
  • flutter_gemma: Community package for Flutter integration
  • HuggingFace: Model hosting
  • All workshop contributors

📧 Contact & Support


⚡ Built with Flutter • 🧠 Powered by Gemma 3 1B • 🔒 100% On-Device
