This repository contains the source code for the Smart Recipe Assistant, a Flutter application demonstrating the power of on-device AI with Google's Gemma 3 1B model via the flutter_gemma package.
The app allows users to generate personalized recipes from ingredient photos or text input. It serves as the foundation for a hands-on workshop designed to teach developers how to integrate Large Language Models (LLMs) directly on mobile devices, without servers or internet connectivity.
- ✏️ Text Input: Enter your ingredient list to generate recipes
- 🌍 Multilingual Translation: Translate your recipes into multiple languages (French, Spanish, etc.)
- 🔒 Privacy-First: All your data stays on your device, nothing is sent to servers
- ⚡ Zero Latency: Works entirely offline with instant responses
- 💰 Zero Cost: No API fees or server costs
This project showcases the following on-device AI capabilities:
- Text generation
- Structured and creative content generation
- Real-time natural language processing
- Response streaming for better UX
- LLM model downloading and management
- Framework: Flutter 3.32.1
- AI Integration: flutter_gemma (^0.11.4)
- AI Model: Gemma 3 1B (1 billion parameters)
- Model Size: ~500 MB
- Backend: 100% on-device (GPU/CPU via LiteRT)
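The stack above boils down to a single package dependency. A minimal `pubspec.yaml` fragment might look like this (the `flutter_dotenv` entry is an assumption for reading the `.env` token; the repo's exact dependencies may differ):

```yaml
dependencies:
  flutter:
    sdk: flutter
  flutter_gemma: ^0.11.4     # on-device LLM inference via MediaPipe LiteRT
  flutter_dotenv: ^5.1.0     # assumed: loads HUGGINGFACE_TOKEN from .env
```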
This repository is structured to support a workshop format:
- `main` branch: complete, functional code with all features implemented
- `starter` branch (coming soon): the workshop starting point, with the full UI and a mock service
```
lib/
├── main.dart                     # App entry point
├── services/
│   ├── gemma_service.dart        # Gemma integration service
│   └── recipe_service.dart       # Recipe generation service
├── screens/
│   ├── home_screen.dart          # Main screen
│   ├── recipe_result_screen.dart # Recipe display (with images)
│   └── text_input_screen.dart    # Manual ingredient input
└── models/
    └── (data models)
```
- Flutter SDK 3.0 or higher
- Dart 3.0+
- Android Studio / Xcode
- Minimum 4 GB RAM on target device
- 500 MB free disk space for the model
1. Clone the repository:

   ```shell
   git clone https://github.com/omarfarouk228/smart_recipe_assistant.git
   cd smart_recipe_assistant
   ```

2. Install dependencies:

   ```shell
   flutter pub get
   ```

3. Android configuration (`android/app/build.gradle`):

   ```gradle
   android {
       ...
       defaultConfig {
           minSdkVersion 24 // Minimum required for flutter_gemma
       }
   }
   ```

4. iOS configuration (`ios/Podfile`):

   ```ruby
   platform :ios, '15.0' # Minimum required
   ```

5. Create `.env`:

   ```shell
   cp .env.example .env
   ```

6. Update `.env`: replace `HUGGINGFACE_TOKEN` with your actual HuggingFace access token.

7. Run the application:

   ```shell
   flutter run
   ```
Note: On first launch, the app will automatically download the Gemma 3 1B model (~500 MB) from HuggingFace. This download may take 5-10 minutes depending on your connection.
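To preview what `gemma_service.dart` wires together, here is a condensed sketch of the download-then-infer flow. The method names (`modelManager.downloadModelFromNetwork`, `createModel`, `createSession`, `addQueryChunk`, `getResponse`) reflect our reading of flutter_gemma 0.11.x and may differ from the repo's actual service; the model URL is a placeholder. Treat this as illustrative, not canonical:

```dart
import 'package:flutter_gemma/flutter_gemma.dart';

/// Illustrative sketch of the flutter_gemma flow; API names are
/// based on flutter_gemma 0.11.x documentation and may vary.
Future<String> generateRecipe(String ingredients) async {
  final gemma = FlutterGemmaPlugin.instance;

  // 1. Ensure the ~500 MB Gemma 3 1B .task file is present
  //    (fetched once from HuggingFace on first launch).
  await gemma.modelManager.downloadModelFromNetwork(
    'https://huggingface.co/<repo>/gemma3-1b-it.task', // placeholder URL
  );

  // 2. Create the on-device inference model (GPU via LiteRT when available).
  final model = await gemma.createModel(
    modelType: ModelType.gemmaIt,
    maxTokens: 1024,
  );

  // 3. Open a session with sampling parameters and run the prompt.
  final session = await model.createSession(temperature: 0.7, topK: 40);
  await session.addQueryChunk(
    Message.text(text: 'Create a recipe with: $ingredients', isUser: true),
  );
  final recipe = await session.getResponse();

  await session.close();
  return recipe;
}
```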
- Enter your ingredients (e.g., "tomatoes, mozzarella, basil")
- Tap "Generate Recipe"
- Copy: Copy the recipe to clipboard
- Translate: Instantly translate to English or Spanish
- Share: Share the generated recipe
Ready to dive in and build the AI features yourself?
We've prepared a detailed, step-by-step guide that will walk you through the entire process of integrating the Gemma model with the flutter_gemma SDK.
➡️ Start the Workshop (60 minutes)
- ✅ Download and initialize an on-device LLM model
- ✅ Perform text inference
- ✅ Handle response streaming for better UX
- ✅ Optimize performance and memory consumption
- ✅ Implement effective prompts for generation
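To make the last goal concrete, here is the kind of prompt template such an app might use. This is a hypothetical example for illustration, not the exact prompt shipped in `recipe_service.dart`:

```dart
// Hypothetical prompt template for recipe generation (illustrative only).
const recipePrompt = '''
You are a professional chef. Create ONE recipe using only these
ingredients (plus pantry staples like salt, oil, and water):
{ingredients}

Respond in exactly this structure:
Title:
Servings:
Ingredients (with quantities):
Steps (numbered):
''';
```

Constraining the output structure this way makes small on-device models noticeably more reliable than open-ended prompts.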
```
┌─────────────────────────────────────────────────┐
│                 User Interface                  │
│    (home_screen, recipe_result_screen, etc.)    │
└────────────────────────┬────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────┐
│                  GemmaService                   │
│      (download, initialization, inference)      │
└────────────────────────┬────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────┐
│              flutter_gemma Plugin               │
│         (Native Android/iOS via Pigeon)         │
└────────────────────────┬────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────┐
│         MediaPipe LiteRT (Native Layer)         │
│      (TensorFlow Lite + GPU Acceleration)       │
└────────────────────────┬────────────────────────┘
                         │
                         ▼
┌─────────────────────────────────────────────────┐
│            Gemma 3 1B Model (.task)             │
│                    (500 MB)                     │
└─────────────────────────────────────────────────┘
```
In `gemma_service.dart`, you can adjust:

```dart
session = await _inferenceModel!.createSession(
  // Customizable parameters:
  // temperature: 0.7, // Creativity (0.0-2.0)
  // topK: 40,         // Diversity
  // topP: 0.95,       // Nucleus sampling
);
```

flutter_gemma supports multiple models:

```dart
// Available models (see model.dart):
Model.gemma3_270M  // 0.3 GB - Ultra-compact
Model.gemma3_1B    // 0.5 GB - Lightweight
Model.gemma3n_2B   // 3.1 GB - Multimodal ✅
Model.gemma3n_4B   // 6.5 GB - More powerful
Model.qwen25_1_5B  // 1.6 GB - Alternative
Model.deepseek     // 1.7 GB - Reasoning
```

| Metric | Value |
|---|---|
| Initial load time | ~8 seconds |
| Download time (WiFi) | 5-10 minutes |
| Generation speed | ~15-25 tokens/sec |
| RAM consumption | ~2.5 GB |
| CPU/GPU usage | Moderate |
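The `temperature`, `topK`, and `topP` knobs shown earlier all act on the model's token distribution at each generation step. Here is a small, self-contained Dart sketch of the general technique, purely as an illustration of what those parameters do, not flutter_gemma's internals:

```dart
import 'dart:math';

/// Reshapes [logits] with temperature, then keeps only tokens that
/// survive a top-k cutoff and a top-p (nucleus) mass threshold,
/// renormalised. Illustration of the general sampling technique only.
List<double> shapeDistribution(
  List<double> logits, {
  double temperature = 0.7,
  int topK = 40,
  double topP = 0.95,
}) {
  // Temperature scaling + softmax (lower T => sharper distribution).
  final maxL = logits.reduce(max);
  final exps = logits.map((l) => exp((l - maxL) / temperature)).toList();
  final z = exps.reduce((a, b) => a + b);
  final probs = exps.map((e) => e / z).toList();

  // Indices sorted by descending probability.
  final order = List<int>.generate(probs.length, (i) => i)
    ..sort((a, b) => probs[b].compareTo(probs[a]));

  // Top-k filter; within the survivors, stop once topP mass is reached.
  final kept = <int>{};
  var mass = 0.0;
  for (final i in order.take(topK)) {
    kept.add(i);
    mass += probs[i];
    if (mass >= topP) break; // nucleus reached
  }

  // Zero out everything else and renormalise the kept tokens.
  final keptMass = kept.fold<double>(0, (s, i) => s + probs[i]);
  final out = List<double>.filled(probs.length, 0);
  for (final i in kept) {
    out[i] = probs[i] / keptMass;
  }
  return out;
}
```

Intuitively: a low temperature sharpens the distribution, `topK` caps how many candidates survive, and `topP` trims the long tail of unlikely tokens before one is drawn at random.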
- ⚠️ Requires a recent device (2020+) for optimal performance
- ⚠️ The 1B model may hallucinate on complex queries
- ⚠️ Supported languages: mainly EN/FR/ES (other languages limited)
Contributions are welcome! Here's how to participate:
- Fork the project
- Create a branch for your feature (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add: amazing feature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Google for the Gemma model and MediaPipe LiteRT
- flutter_gemma: Community package for Flutter integration
- HuggingFace: Model hosting
- All workshop contributors
- Workshop Questions: Open an issue
- flutter_gemma Documentation: pub.dev/packages/flutter_gemma
- Gemma Official: ai.google.dev/gemma
⚡ Built with Flutter • 🧠 Powered by Gemma 3 1B • 🔒 100% On-Device