I Set Up an Email Triage System Using Home Assistant and a Local LLM: Here’s How You Can Too
Email overload is a common ailment in today’s digital world. Sifting through countless messages to find the important ones can be time-consuming and frustrating. At Magisk Modules, we’re always looking for ways to streamline our workflow and improve efficiency. We found that much of our time was spent manually triaging emails, a task that felt ripe for automation. That’s why we embarked on a project to create an email triage system using Home Assistant and a local Large Language Model (LLM) via Ollama, ensuring both functionality and data privacy. Here’s a comprehensive guide on how you can replicate our setup and regain control of your inbox.
Why a Local LLM for Email Triage? Privacy and Control
Before diving into the technical details, let’s address the “why.” Using a cloud-based LLM service like OpenAI or Google AI offers convenience, but it also means entrusting your sensitive email data to a third party. For us, maintaining data privacy and security was paramount. Running a local LLM with Ollama eliminates the need to send email content to external servers, ensuring that your data stays within your network. This is especially important for individuals and organizations dealing with confidential information. We also wanted to use the Magisk Module Repository to make the results easier to replicate, and the entire workflow described below is now automated.
Benefits of a Local LLM Triage System
- Enhanced Privacy: Data remains within your local network, minimizing the risk of exposure.
- Cost-Effectiveness: Eliminates recurring subscription fees associated with cloud-based services.
- Customization: Allows fine-tuning the LLM to your specific needs and email patterns.
- Offline Functionality: The system can function even without an internet connection.
- Increased Security: Reduces the attack surface by avoiding external dependencies.
Prerequisites: Setting Up Your Environment
Before we can start building our email triage system, we need to ensure we have the necessary components installed and configured. This involves setting up Home Assistant, installing Ollama, and configuring your email integration.
Home Assistant Installation and Configuration
Home Assistant acts as the central orchestration platform for our email triage system. If you don’t already have it installed, you can follow the official Home Assistant installation guide, which provides instructions for various platforms, including Docker, Home Assistant OS, and Python virtual environments. We recommend using Home Assistant OS for simplicity and ease of management.
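For reference, a Docker-based install can be as small as the compose file below. This is a minimal sketch based on the official container image; the config path and timezone are placeholders you should adjust:

# docker-compose.yaml -- minimal Home Assistant container
services:
  homeassistant:
    container_name: homeassistant
    image: ghcr.io/home-assistant/home-assistant:stable
    volumes:
      - ./ha-config:/config            # persistent configuration directory
      - /etc/localtime:/etc/localtime:ro
    environment:
      - TZ=Etc/UTC                     # set to your timezone
    restart: unless-stopped
    network_mode: host                 # simplest networking for local integrations

Start it with docker compose up -d and finish onboarding in the browser on port 8123.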
Once Home Assistant is installed, you’ll need to configure it with your basic settings, such as location, timezone, and user accounts. Explore the Home Assistant interface and familiarize yourself with its core functionalities, including entities, automations, and integrations.
Installing and Configuring Ollama: Your Local LLM Powerhouse
Ollama is a powerful tool that simplifies the process of running and managing Large Language Models locally.
- Download Ollama: Visit the Ollama website and download the appropriate version for your operating system (macOS, Linux, or Windows).
- Install Ollama: Follow the installation instructions on the Ollama website. Installation is typically straightforward and involves running a single command or clicking through a graphical installer.
- Pull an LLM Model: Open a terminal and use the ollama pull command to download a model. We recommend starting with a smaller model such as llama2:7b or mistral (Ollama’s Mistral 7B Instruct build) for testing; larger models give better results but need more memory and compute.

ollama pull llama2:7b

or

ollama pull mistral

- Verify the Installation: Run ollama run <model_name> to check that the model loads and responds. You should be able to type prompts and receive replies.

ollama run llama2:7b
Email Integration: Connecting to Your Inbox
Home Assistant offers several email integrations that allow you to access and process emails. We recommend using the IMAP integration for its flexibility and compatibility with most email providers.
- Install the IMAP Integration: In Home Assistant, navigate to Settings -> Devices & Services (Configuration -> Integrations on older versions) and search for “IMAP.” Click on the IMAP integration and follow the configuration prompts.
- Provide Email Credentials: You’ll need to provide your email address, password, and IMAP server settings. Refer to your email provider’s documentation for the correct IMAP server address and port.
- Use an App Password (If Necessary): Some email providers, such as Gmail, block third-party apps that sign in with your regular account password. Rather than enabling the legacy “less secure app access” option, which weakens your account security, enable two-factor authentication and generate a dedicated app password for Home Assistant.
- Configure Filters (Optional): You can configure filters to specify which emails Home Assistant should process. For example, you can filter emails based on sender, subject, or keywords.
Building the Email Triage Automation: The Core Logic
With the prerequisites in place, we can now build the core automation that triages our emails. The automation consists of the following steps, sketched end to end right after the list:
- Trigger: Define a trigger that initiates the automation when a new email arrives.
- Data Extraction: Extract relevant information from the email, such as sender, subject, and body.
- LLM Interaction: Send the extracted email information to the local LLM for analysis and categorization.
- Action: Based on the LLM’s output, perform an action, such as assigning a priority level, moving the email to a specific folder, or sending a notification.
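Before walking through each step, here is a compact end-to-end sketch of the finished automation. It assumes the current IMAP integration (which fires an imap_content event per new message) and an ollama.generate-style service; every name here is a placeholder to adapt, and the later sections cover alternatives:

alias: Email Triage Automation
trigger:
  - platform: event
    event_type: imap_content          # one event per new email
action:
  - service: ollama.generate          # or a rest_command against Ollama's HTTP API
    data:
      model: llama2:7b
      prompt: >
        Categorize this email as Important, Actionable, Informational, or Spam.
        Subject: {{ trigger.event.data['subject'] }}
        Body: {{ trigger.event.data['text'] }}
  - service: notify.persistent_notification
    data:
      message: "Triage triggered for: {{ trigger.event.data['subject'] }}"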
Step 1: Defining the Automation Trigger
In Home Assistant, navigate to Settings -> Automations & Scenes (Configuration -> Automations on older versions) and click the “+” button to create a new automation.
- Name Your Automation: Give your automation a descriptive name, such as “Email Triage Automation.”
- Select a Trigger: Recent versions of the IMAP integration fire an imap_content event for every new message, so choose an “Event” trigger and set the event type to imap_content. (Older installs exposed an “IMAP email content” sensor instead; if that is what you have, trigger on its state changes.)
- Configure the Trigger: The mailbox to monitor (e.g., “INBOX”) is configured in the IMAP integration itself; if you watch several folders, you can add a condition on the event data to narrow which messages fire the automation. A YAML sketch of this trigger follows.
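Expressed in YAML, the trigger from this step looks like the sketch below; the condition is optional and only needed if the integration watches more than one folder:

trigger:
  - platform: event
    event_type: imap_content
condition:
  - condition: template
    value_template: "{{ trigger.event.data['folder'] == 'INBOX' }}"   # optional narrowing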
Step 2: Extracting Email Information
Next, we need to extract the relevant information from the email that triggered the automation. With the event-based trigger, the message fields live in trigger.event.data; for example, trigger.event.data['subject'] holds the subject and trigger.event.data['text'] holds the body. (If your integration exposes the message differently, adjust the templates to match.)
- Add an Action: Click on the “+” button to add an action to the automation.
- Render the Extracted Fields: There is no separate “data extraction” service; you simply reference the trigger data in templates wherever you need it. You might also want the sender’s address. As a quick sanity check, the following action writes the extracted fields to a persistent notification:

action:
  - service: notify.persistent_notification
    data:
      message: |
        Sender: {{ trigger.event.data['sender'] }}
        Subject: {{ trigger.event.data['subject'] }}
        Body: {{ trigger.event.data['text'] }}
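If you would rather extract the fields once and reuse them throughout the automation, you can bind them to variables right after the trigger. This is a sketch assuming the imap_content event keys (sender, subject, text):

variables:
  email_sender: "{{ trigger.event.data['sender'] }}"
  email_subject: "{{ trigger.event.data['subject'] }}"
  email_body: "{{ trigger.event.data['text'] }}"
action:
  - service: notify.persistent_notification
    data:
      message: "New mail from {{ email_sender }}: {{ email_subject }}"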
Step 3: Interacting with the Local LLM via Ollama
This is where the magic happens. We’ll send the extracted email information to our local LLM via Ollama and ask it to categorize the email based on its content.
Add an Action: Click on the “+” button to add another action to the automation.
Select “Call service”: Choose the “Call service” action type to execute a service call.
Select the ollama.generate service: This service sends a prompt to the local LLM. If you do not see it, you may need to install an Ollama integration that exposes it within Home Assistant.
Configure the service data:
- model: The name of the LLM model you pulled earlier (e.g., llama2:7b).
- prompt: A prompt that instructs the LLM to categorize the email and includes the email subject and body. For example:

service: ollama.generate
data:
  model: llama2:7b
  prompt: |
    You are an email triage expert. Based on the following email, categorize it
    as either "Important," "Actionable," "Informational," or "Spam."
    Only respond with one of these four words.

    Subject: {{ trigger.event.data['subject'] }}
    Body: {{ trigger.event.data['text'] }}
You can adjust the prompt to suit your specific needs and email patterns. For example, you can provide more context about your job role or the types of emails you typically receive.
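If your installation has no ollama.generate service, a well-supported alternative is to call Ollama’s built-in HTTP API (POST /api/generate on port 11434) through a rest_command. The following is a sketch that assumes Ollama runs on the same host as Home Assistant; the command name ollama_triage and the model are placeholders:

# configuration.yaml -- define the command once, then call it from automations
rest_command:
  ollama_triage:
    url: "http://127.0.0.1:11434/api/generate"
    method: POST
    content_type: "application/json"
    # subject and body are passed in from the automation's service call
    payload: >-
      {"model": "llama2:7b",
       "stream": false,
       "prompt": {{ ("You are an email triage expert. Categorize the following email as Important, Actionable, Informational, or Spam. Respond with one word only. Subject: " ~ subject ~ " Body: " ~ body) | tojson }}}

Inside the automation you would then call rest_command.ollama_triage and hand it the extracted fields:

  - service: rest_command.ollama_triage
    data:
      subject: "{{ trigger.event.data['subject'] }}"
      body: "{{ trigger.event.data['text'] }}"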
Step 4: Performing Actions Based on LLM Output
The final step is to perform an action based on the LLM’s output. This could involve assigning a priority level, moving the email to a specific folder, sending a notification, or even automatically deleting spam.
Add an Action: Click on the “+” button to add another action to the automation.
Add a “Choose” Action: Use a “Choose” block so the automation can branch into different actions based on the LLM’s response.
Add Conditions: Within the “Choose” block, add conditions that inspect the LLM’s output and decide which branch runs. For example:
action:
  - choose:
      - conditions:
          - condition: template
            value_template: "{{ 'Important' in state_attr('sensor.ollama_response', 'response') }}"
        sequence:
          - service: notify.mobile_app_your_phone
            data:
              message: "Important email received: {{ trigger.event.data['subject'] }}"
      - conditions:
          - condition: template
            value_template: "{{ 'Spam' in state_attr('sensor.ollama_response', 'response') }}"
        sequence:
          - service: imap.delete_email
            data:
              mailbox: INBOX
              message_id: "{{ trigger.event.data['uid'] }}"
    default:
      - service: notify.persistent_notification
        data:
          message: "Email received: {{ trigger.event.data['subject'] }} - Category: {{ state_attr('sensor.ollama_response', 'response') }}"
In this example, we’re using template conditions to check whether the LLM’s response contains the word “Important” or “Spam.” If the email is categorized as “Important,” we send a notification to our mobile phone. If it’s categorized as “Spam,” we delete it automatically (assuming your email integration exposes a delete service; recent IMAP integrations include the message uid in the event data, which is what we pass here). For any other category, we send a persistent notification with the category. Note that this example assumes a sensor (here sensor.ollama_response) that stores the latest response from Ollama; alternatively, you can capture the response directly in the automation, as shown below.
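On recent Home Assistant releases, service calls can return data, so instead of maintaining a separate sensor you can capture the reply inline with response_variable and branch on it immediately. The sketch below assumes the rest_command.ollama_triage command defined earlier, and that the command’s parsed JSON body is exposed under content (Ollama puts the generated text in its response field):

action:
  - service: rest_command.ollama_triage
    data:
      subject: "{{ trigger.event.data['subject'] }}"
      body: "{{ trigger.event.data['text'] }}"
    response_variable: triage            # holds the HTTP response from Ollama
  - variables:
      category: "{{ triage['content']['response'] | trim }}"
  - choose:
      - conditions:
          - condition: template
            value_template: "{{ 'Important' in category }}"
        sequence:
          - service: notify.mobile_app_your_phone
            data:
              message: "Important email: {{ trigger.event.data['subject'] }}"
    default:
      - service: notify.persistent_notification
        data:
          message: "Email triaged as {{ category }}: {{ trigger.event.data['subject'] }}"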
Fine-Tuning and Customization: Making It Your Own
The email triage system described above is a starting point. You can fine-tune and customize it to better suit your specific needs and email patterns.
Improving LLM Accuracy
The accuracy of the LLM’s categorizations depends on the quality of the prompt and the size of the model. Experiment with different prompts and LLM models to find the best combination for your use case. You can also fine-tune the LLM on your own email data to further improve its accuracy.
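One inexpensive way to improve accuracy is prompt shaping: give the model your context and one or two worked examples. The snippet below is only an illustration you can paste into whichever Ollama service call you use:

prompt: |
  You are the email triage assistant for a small software team.
  Categorize the email below as exactly one of: Important, Actionable, Informational, Spam.
  Respond with the single category word and nothing else.

  Example:
  Subject: Production API returning 500 errors
  Body: The public API has been failing for 20 minutes.
  Category: Important

  Example:
  Subject: You have won a prize!!!
  Body: Click here to claim your reward now.
  Category: Spam

  Now categorize this email:
  Subject: {{ trigger.event.data['subject'] }}
  Body: {{ trigger.event.data['text'] }}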
Adding More Categories
The example above uses four categories: “Important,” “Actionable,” “Informational,” and “Spam.” You can add more categories to better reflect the types of emails you receive. For example, you could add categories for “Project Updates,” “Customer Inquiries,” or “Financial Reports.”
Integrating with Other Services
You can integrate the email triage system with other services to further automate your workflow. For example, you could use the system to automatically create tasks in your to-do list app or add events to your calendar.
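As a concrete example, recent Home Assistant releases ship a to-do list integration with a todo.add_item service, so an “Actionable” email can become a task automatically. This is a sketch of an extra branch for the choose block from Step 4; todo.my_tasks is a placeholder entity:

      - conditions:
          - condition: template
            value_template: "{{ 'Actionable' in state_attr('sensor.ollama_response', 'response') }}"
        sequence:
          - service: todo.add_item
            target:
              entity_id: todo.my_tasks        # placeholder: your own to-do list entity
            data:
              item: "Follow up: {{ trigger.event.data['subject'] }}"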
Improving Security
Although running a local LLM improves privacy, it’s still important to take security precautions. Keep your Ollama installation up to date with the latest security patches. Consider using a strong password for your email account and enabling two-factor authentication.
Conclusion: Taking Control of Your Inbox
By combining the power of Home Assistant and a local LLM via Ollama, we’ve created an email triage system that is both functional and privacy-conscious. This system has significantly reduced the time we spend manually sorting through emails, allowing us to focus on more important tasks. We encourage you to try out this setup and customize it to your own needs. With a little effort, you can regain control of your inbox and improve your overall productivity. Remember to visit Magisk Modules and Magisk Module Repository for more exciting projects and helpful resources.