DEEP RESEARCH: Voice-Controlled AI Email Assistant MVP Specification

April 14, 2025

Here is what ChatGPT said about our work, written in my voice:
In a recent project, I explored using ChatGPT's "Deep Research" capability to efficiently translate a complex, multi-layered idea into a clear, actionable development plan. My goal was ambitious: build a voice-controlled AI email assistant that could seamlessly integrate with Gmail, providing users with hands-free email management, voice-driven summaries, and AI-crafted replies tailored to their personal style.

Using ChatGPT's Deep Research, I provided detailed instructions and key requirements upfront, and the AI returned a precisely segmented, copy-and-paste-ready specification document tailored for Lovable's no-code prompt engine. This included clear prompts for:

  • Defining the app's core functionality (Gmail OAuth, voice commands, email summaries, and AI drafting).
  • Detailed Supabase backend setup (authentication, database schema, Row-Level Security, and secure Edge Functions).
  • Specific API integrations (Google Speech-to-Text and Text-to-Speech, OpenAI for summarization and reply drafting).
  • Stripe integration to enable subscription-based user tiers (Basic and Premium), user profile management, personalized style guides, and reusable message templates.

This Deep Research approach dramatically simplified the translation from concept to execution. Rather than spending days or weeks manually preparing detailed instructions, the Deep Research output provided clearly structured segments I could effortlessly copy into Lovable, greatly reducing complexity and accelerating development.

Through this experience, I learned firsthand how leveraging ChatGPT’s Deep Research can turn intricate ideas into immediately actionable plans, streamlining the entire project workflow and enabling rapid progression from idea to MVP.

Voice-Controlled AI Email Assistant MVP Specification

Lovable Input (MVP Overview)

Describe the application to Lovable as a voice-controlled AI email assistant that enables hands-free reading and replying to Gmail messages. Emphasize that this is an MVP (Minimum Viable Product) focusing on core functionality. Key points to include in the Lovable prompt description:

  • Gmail Integration: The app connects to the user’s Gmail account via OAuth 2.0 to read emails, draft replies, and send emails on their behalf.
  • Voice Commands: Users can control the app with voice or large on-screen buttons. Supported commands (voice-triggered or tap) should include:
    • "Read Summary" – Summarize the current email and read the summary aloud.
    • "Read Email" – Read the full email body aloud (text-to-speech).
    • "Next" – Go to the next email in the inbox (or filtered list) and automatically prepare to summarize or read it.
    • "Back" – Go to the previous email.
    • "Record Reply" – Start listening for the user’s voice input to compose a reply.
    • "Exit Handsfree" – Exit the hands-free mode (stop listening and voice playback).
    • "Skip this Email" – Skip the current email without marking it read or taking action (move to next email).
    • "Snooze this Email" – Snooze the email for later (temporarily hide it from the queue and resurface it after a set time).
    • "Mark Unread" – Mark the email as unread in Gmail (e.g., if the user wants to leave it as new).
    • "Archive this Email" – Archive the email (remove it from Inbox).
    • Toggle "Important Only" vs "All Messages" – Filter which emails are being read (either only Gmail-marked important emails or the entire inbox). This can be a voice command or a switch in the UI.
  • AI Summaries & Drafts: Use OpenAI GPT-4 to generate a brief summary of each email and to help draft reply messages. The summary gives the user the gist of the email without reading every detail. For replies, the AI can take the user’s spoken input (a brief direction or message) and generate a well-formatted email draft.
  • Text-to-Speech (TTS): Read aloud email summaries, full email content, and AI-generated drafts in a natural voice. (For MVP, use a reliable TTS service like Google Cloud Text-to-Speech or ElevenLabs to produce clear voice audio.)
  • Speech-to-Text (STT): Convert the user’s spoken commands and dictated reply content into text. (For MVP, use a robust STT service such as OpenAI’s Whisper or Google Speech-to-Text to accurately transcribe the user’s voice.)
  • User Authentication: Implement a simple login system (e.g. using Supabase Auth). Users should log in (for example, using email/password or “Sign in with Google”) to access the app. Each user’s Gmail integration and settings will be tied to their account.
  • Hands-Free Mode Flow: Once logged in and connected, the user can enter a hands-free reading mode. The app will fetch the latest emails (according to the filter setting), and await user voice commands. On “Read Summary”, it plays the summary via TTS. The user can then say “Read Email” for full detail or give another command (reply, next, etc.). This loop continues until the user says “Exit Handsfree.” Ensure the app provides voice feedback for confirmation (for example, saying "Archiving email..." when the user says “Archive”).
  • Minimal UI: The interface should be mobile-friendly with just the essential controls visible. Large buttons for the main actions (Read/Next/Back/Reply/etc.) should be present for users who prefer tapping. The screen should display the current email’s sender, subject, and perhaps the summary or snippet, so the user has context at a glance. The design should be clean and not distract from voice interaction.
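As a rough sketch of how the voice commands above could be routed in the front-end, the transcript from STT can be matched against a phrase table. The type and function names here are illustrative, not part of any Lovable scaffold:

```typescript
// Hypothetical sketch: map a recognized transcript to one of the app's
// voice commands. Matching is case-insensitive; longer phrases are listed
// first so they win over short ones like "next".
type VoiceCommand =
  | "read_summary" | "read_email" | "next" | "back"
  | "record_reply" | "exit_handsfree" | "skip" | "snooze"
  | "mark_unread" | "archive" | "toggle_filter";

const COMMAND_PHRASES: [string, VoiceCommand][] = [
  ["read summary", "read_summary"],
  ["read email", "read_email"],
  ["record reply", "record_reply"],
  ["exit handsfree", "exit_handsfree"],
  ["exit hands-free", "exit_handsfree"],
  ["skip this email", "skip"],
  ["snooze this email", "snooze"],
  ["mark unread", "mark_unread"],
  ["archive this email", "archive"],
  ["important only", "toggle_filter"],
  ["all messages", "toggle_filter"],
  ["next", "next"],
  ["back", "back"],
];

function parseCommand(transcript: string): VoiceCommand | null {
  const text = transcript.toLowerCase().trim();
  for (const [phrase, command] of COMMAND_PHRASES) {
    if (text.includes(phrase)) return command;
  }
  // Unrecognized speech: the UI can reply "Sorry, I didn't catch that."
  return null;
}
```

The same `VoiceCommand` values can back the on-screen buttons, so tapping and speaking funnel into one dispatch path.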

When giving this description to Lovable’s prompt engine, be clear and concise. For example, you might start with: “Build a mobile-friendly web app that acts as a voice-controlled email assistant. It integrates with Gmail and uses voice commands. Core features: Gmail OAuth login, read aloud email summary or full text, voice-recorded reply with AI draft, and hands-free navigation through emails. The UI should have large buttons for Read, Next, Back, Reply, Exit, and a toggle for Important emails only.” This will prompt Lovable to scaffold the basic UI and flow. Expect follow-up questions from Lovable to clarify details (for instance, it might ask about connecting APIs or data models). You can then proceed to set up the backend and integrations as described below.

Supabase Setup (Backend & Database)

To support the above functionality, set up a Supabase backend for authentication, data storage, and serverless functions. This involves creating a new Supabase project and configuring it for our app:

1. Create a Supabase Project: Sign up or log in to Supabase and create a new project for this app. Note the Project URL and the anon API key (public API key) – these will be used by the Lovable app to interact with the database. Also obtain the service role key (private key) for use in server-side functions if needed (Lovable will handle this securely via Supabase Edge Functions).

2. Configure Authentication: In your Supabase project settings, enable authentication providers for your app. At minimum, enable email/password sign-ups so users can register an account. For a smoother login, you can also enable Google OAuth login (under Authentication → Providers → Google):

  • Set the Google Client ID and Client Secret (from your Google Cloud credentials; see Gmail OAuth setup below) in the Supabase Auth settings.
  • Ensure you add Supabase’s callback URL (e.g. https://<YOUR-PROJECT>.supabase.co/auth/v1/callback) to the authorized redirect URIs in your Google OAuth client configuration.
  • This allows users to sign in to the app with their Google account. (Even with Google sign-in for identity, we will still perform a separate OAuth flow for Gmail API access with the required email scopes.)
  • If not using Google sign-in, the user can create an account with email and password, then connect their Gmail within the app separately.

3. Define Database Schema: Using Supabase (either via the SQL editor or through Lovable’s integration), set up the tables needed to store user data, email metadata (if needed), AI draft content, and usage logs. Lovable can create these tables if you describe them in a prompt after connecting Supabase. The basic schema for MVP:

  • Users – stores user profile info (if you need more than the Supabase Auth default). For example: id (UUID, primary key, matches Supabase Auth user id), email (user’s email), display_name, preferences (JSON or text for settings like voice preferences or draft style). You may not need a separate Users table if using Supabase Auth’s built-in user management, but it can be helpful for additional info.
  • EmailTokens – stores Gmail OAuth tokens for each user. Fields: user_id (references Users or Auth UID), gmail_access_token, gmail_refresh_token, token_expiry (timestamp), and perhaps scope or token_type. This table is sensitive; enable Row Level Security so that each user (identified by user_id) can only access their own record. The app will use these stored tokens to make Gmail API calls server-side. (Alternatively, you could store these in Supabase’s secure storage or vault if available. For MVP, a table with proper security is fine.)
  • Drafts – (Optional) store AI-generated draft emails if you want to keep history. Fields: id, user_id, email_thread_id (or message ID the draft is for), draft_content (text of the draft), created_at. This could be used to save a draft for later editing, but since drafts will also exist in Gmail (when saved), this table is not strictly required. It’s mainly for logging or if you want to track AI suggestions.
  • UsageLogs – (Optional for MVP) log key actions for analytics or debugging. Fields might include id, user_id, action (e.g., "read_email", "send_email"), timestamp, and perhaps details like email_id or success/failure info. This can help in monitoring usage (like number of OpenAI API calls per user) and debugging issues. For MVP, you can keep this simple or skip detailed logging, but it’s good to have a structure for future use.
  • Snoozes – (Optional) if implementing email snooze, you might create a table to keep track of snoozed emails and when they should resurface. Fields: user_id, message_id (Gmail ID of the email), snooze_until (timestamp when the email should reappear). The app/backend would use this to trigger moving the email back to inbox at the right time. (This requires a scheduled job or function – see Edge Functions below. Snooze can also be approximated without a table by using Gmail labels only, but a table allows more control.)

How to create these tables: In Lovable’s chat (with Supabase connected), you can instruct: “Create a table EmailTokens with columns: user_id (uuid references auth.users), gmail_access_token (text), gmail_refresh_token (text), token_expiry (timestamp). Ensure RLS is enabled so users can only access their own tokens.” Do similarly for any other tables. Lovable will generate the SQL and apply it to Supabase. Review the schema in Supabase to confirm.
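The instruction quoted above could expand to SQL along these lines (a sketch only; note that Postgres folds unquoted identifiers to lowercase, so EmailTokens typically becomes email_tokens):

```sql
-- Sketch of the DDL Lovable would generate for the EmailTokens table.
create table if not exists email_tokens (
  user_id uuid primary key references auth.users (id) on delete cascade,
  gmail_access_token text not null,
  gmail_refresh_token text not null,
  token_expiry timestamptz not null
);

-- With RLS enabled and no policies yet, clients can read nothing,
-- which is the safe default for a tokens table.
alter table email_tokens enable row level security;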

4. Row-Level Security (RLS): After creating tables, enable RLS on any table that will store user-specific data (EmailTokens, Drafts, Snoozes, etc.). Then create policies to restrict access:

  • For example, on EmailTokens table: allow each authenticated user to select and update their own row (user_id = auth.uid()) and perhaps no deletes (or allow if needed). The Supabase Auth user ID will serve as the foreign key.
  • Lovable can assist if you prompt it to “Enable RLS on EmailTokens and write a policy to allow a user to access only their own record.”
  • This ensures that even if multiple users use the app, they can never read or modify each other’s data via direct queries. (Supabase enforces RLS on client-side queries. Our Edge Functions use the service role key, which bypasses RLS, so they must explicitly filter by user_id in every query to avoid mixing data.)
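As a sketch, the policies for the EmailTokens table (created as email_tokens in Postgres) might look like the following; adjust the operation list if you also want users to delete their own row:

```sql
-- Each authenticated user may only see and change their own token row.
create policy "select own tokens" on email_tokens
  for select using (auth.uid() = user_id);

create policy "update own tokens" on email_tokens
  for update using (auth.uid() = user_id);

create policy "insert own tokens" on email_tokens
  for insert with check (auth.uid() = user_id);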

5. Supabase Edge Functions: Use Edge Functions (serverless functions) in Supabase for any logic that should be secure or done server-side. In this app, we need Edge Functions for:

  • Gmail OAuth Callback/Token Exchange: After the user grants permission on Google’s OAuth page, Google will redirect to our app with a code. Implement an edge function (or a server route in Lovable) to handle this. This function will receive the OAuth authorization code, then use Google’s OAuth2 token endpoint to exchange it for an access token and refresh token. It will then store these tokens in the EmailTokens table for that user, and perhaps redirect the user back to the app UI with a success message. (You will configure the redirect URI in Google to point to this function or a front-end route that calls this function.)
  • Fetch Emails from Gmail: Create a function to fetch the user’s emails via Gmail API. For MVP, you might implement a getNextEmail or listEmails function. This function will use the stored tokens to call Gmail API endpoints (for example, the /messages list and get endpoints). It can accept parameters like label or importantOnly and pageToken or nextMessageId to retrieve the appropriate email. It should return the email data (subject, sender, body) to the client. Because it runs server-side, it can use the refresh token to refresh the Gmail access token if it’s expired (handle the OAuth refresh flow), then update the stored token. Lovable can help write this if you prompt something like: “Create an edge function fetchEmail that gets the next email from Gmail. Use the user’s stored Gmail tokens (in EmailTokens table) to call Gmail API. If the access token is expired, use the refresh token to get a new one (Google OAuth token endpoint) and update the database. The function should return the email’s sender, subject, body, and any needed metadata (like message ID or thread ID).” Provide the Gmail API URL for listing or reading messages (for example, GET https://gmail.googleapis.com/gmail/v1/users/me/messages with appropriate query params). You may need to also enable the Gmail API in Google and provide the function with the client_id/secret for refresh token exchange, or use Google’s libraries.
  • Send Email / Save Draft: Create an edge function sendEmail that sends an email (or saves it as draft) via Gmail API. Input to this function will be the draft content (the email body that the AI/user composed), and possibly the original email’s metadata (like thread ID if it’s a reply, or recipients, subject if needed). The function will call Gmail API to send the message. If implementing “Save as Draft”, the function can call the Gmail drafts API instead to save without sending. Use the user’s access token similarly (refresh if needed). Warning: If the user said "Send without reviewing," ensure a confirmation step occurs before calling this function (the front-end should handle that confirmation). This function itself should simply perform the send action when invoked. Lovable prompt example: “Create an edge function sendEmail that sends an email via Gmail API. It should take parameters for to, subject, and body (in HTML or plain text). Use the user’s Gmail OAuth token from the database. If sendNow flag is false, save as draft instead of sending.” (We might implement immediate sending for MVP and optionally handle drafts if saveDraft is called.)
  • AI Summarization: Create a function summarizeEmail that uses OpenAI’s API. Input: an email’s text (possibly plus context like subject or sender). Output: a concise summary. This function will use the OpenAI API key (stored as a secret) and call the GPT-4 model. We’ll design the prompt in the next section. This function can be called whenever the user requests a summary. (Alternatively, you can call it automatically when a new email is fetched, and store the summary ready to be read, to save time in the voice flow.)
  • AI Draft Generation: Create a function generateDraftReply that uses OpenAI to draft a reply email. Input: the original email text (and maybe summary) plus either the user’s spoken reply intent or a directive. Output: a suggested reply text. It will use GPT-4 via OpenAI API similarly. (This could be combined with the summarization function or separate. Separating concerns is clearer.)
  • Speech-to-Text (optional server-side): If you choose not to use the browser’s built-in STT, you can create a transcribeAudio function. This would accept an audio file (recorded snippet from the user) and use a service like Whisper API to return text. The front-end would need to send the audio blob to this function. Note: This adds complexity (handling file upload through the function). For MVP, it might be acceptable to use the browser’s Web Speech API for real-time transcription, which wouldn’t need a server function. We mention this for completeness; you may skip implementing a server STT function in MVP and rely on client-side voice recognition to keep things simpler.
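The token-refresh logic shared by the fetch and send functions can be factored into a small helper. This is a sketch under the EmailTokens schema described above; the Google client ID and secret are passed in as parameters (in a real edge function they would come from the stored secrets), and all names are illustrative:

```typescript
// Shape of one row in the EmailTokens table.
interface StoredToken {
  gmail_access_token: string;
  gmail_refresh_token: string;
  token_expiry: string; // ISO timestamp
}

// Pure helper: decide whether the access token needs refreshing, with a
// small safety margin so we never send a request with a token about to expire.
function isExpired(token: StoredToken, now: Date, marginSeconds = 60): boolean {
  return new Date(token.token_expiry).getTime() - marginSeconds * 1000 <= now.getTime();
}

// Exchange the refresh token for a new access token at Google's token endpoint.
// The caller should persist the new token and expiry back to the database.
async function refreshAccessToken(
  token: StoredToken,
  clientId: string,
  clientSecret: string,
): Promise<string> {
  const body = new URLSearchParams({
    client_id: clientId,
    client_secret: clientSecret,
    refresh_token: token.gmail_refresh_token,
    grant_type: "refresh_token",
  });
  const res = await fetch("https://oauth2.googleapis.com/token", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body,
  });
  if (!res.ok) throw new Error(`Token refresh failed: ${res.status}`);
  const json = await res.json();
  return json.access_token as string;
}
```

Each edge function would call `isExpired` first, refresh if needed, and only then hit the Gmail API with a valid bearer token.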

Connecting Lovable to Supabase: In Lovable, after creating the UI, you should integrate the Supabase project. Usually, Lovable will ask for your Supabase project URL and anon key when you first prompt something that requires a database (like creating a table or enabling auth). Provide those, and Lovable will establish the connection. Also, provide any required secrets for Edge Functions:

  • Store your OpenAI API Key as a secret (Lovable’s Supabase integration may prompt you to add it, or you can set it yourself via the Supabase dashboard or the CLI, e.g. supabase secrets set OPENAI_API_KEY=<your key>).
  • Store the Google OAuth Client ID and Client Secret as secrets too (so that edge functions can use them for token refresh or any direct Google API calls if needed): e.g., GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET.
  • If using a Google Cloud API key for TTS or STT, also store it (e.g., GOOGLE_CLOUD_API_KEY).
  • Lovable’s Edge Functions can access these secrets from environment variables.
  • When you ask Lovable to create an edge function that uses these keys, mention the secret names so it knows to reference them. Note that Supabase Edge Functions run on Deno, not Node, so the code should read secrets with Deno.env.get("OPENAI_API_KEY") rather than process.env.OPENAI_API_KEY.

After setting up, your backend will consist of the Supabase database (with tables and auth) and several serverless functions to handle external API calls securely. The Lovable app front-end will interact with these via HTTP calls or via the Supabase client library as appropriate. Always test each function (Supabase provides a way to test functions, and you can call them from the Lovable app code) to ensure they work (e.g., test that you can fetch an email, that summarization returns a string, etc.) before integrating fully into the voice flow.

OpenAI Prompt Engineering (Email Summaries & Draft Replies)

Design concise and structured prompts for the GPT-4 model to ensure high-quality summaries and reply drafts. The prompts should provide clear instructions to the AI, and include only the necessary context to avoid confusion or excessive length. We will use GPT-4 (via OpenAI API) for:

  • Email Summarization: producing a brief summary of an email.
  • Draft Reply Generation: composing a suggested reply based on the original email and the user’s intent.

Summarization Prompt: The goal is to get a short, accurate summary that captures the important points of an email. The summary should be suitable for being read aloud to the user, giving them enough information to understand the email’s content and urgency. Keep it concise (a few sentences). For example:

Prompt:
"Summarize the following email in a concise way for a user who is listening. Focus on the key points and any questions or requests from the sender. Omit unnecessary details.

Email:
<EMAIL_TEXT>

Summary:"

In this prompt structure, the text of the email (subject and body, possibly sender info if relevant) is inserted in place of <EMAIL_TEXT>. The model is asked to output a summary. By explicitly instructing it to be concise and focus on key points, we guide it to produce a brief summary. You can also add length guidance like “in 2-3 sentences” if needed. GPT-4 is usually capable of understanding “concise” without a fixed limit, but being explicit can help consistency. Make sure to remove any email signatures or long headers from <EMAIL_TEXT> to avoid wasting tokens or confusing the model.
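Assembling this prompt in code is straightforward string templating. The sketch below includes a deliberately naive signature-stripping heuristic (cutting at the conventional "-- " delimiter); the function names are illustrative:

```typescript
// Naive example heuristic: drop everything after a conventional
// signature delimiter ("-- " on its own line), if one is present.
function stripSignature(body: string): string {
  const idx = body.indexOf("\n-- \n");
  return idx === -1 ? body : body.slice(0, idx);
}

// Build the summarization prompt from the template in this section.
function buildSummaryPrompt(subject: string, body: string): string {
  return [
    "Summarize the following email in a concise way for a user who is listening.",
    "Focus on the key points and any questions or requests from the sender.",
    "Omit unnecessary details.",
    "",
    "Email:",
    `Subject: ${subject}`,
    stripSignature(body),
    "",
    "Summary:",
  ].join("\n");
}
```

A more robust implementation would also trim quoted reply chains and long headers before the text reaches the model.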

Draft Reply Prompt: For generating an email reply, the model needs context of the original email and guidance on the reply’s intent. We incorporate the user’s input (if any). Two scenarios:

  • The user provides a brief spoken response or instructions (e.g., “Tell them I can meet next Tuesday” or “Decline politely because I’m on vacation”). In this case, we use that as guidance.
  • The user simply triggers a reply without specific input, implying they want the AI to draft a generic appropriate response (perhaps they will edit later). In MVP, we assume the user will give at least some hint (via speaking) of what they want to say, even if very brief.

A structured prompt for reply generation could be:

Prompt:
"You are an email assistant helping the user draft a reply.
The user received the following email:

[Original Email]
From: <SENDER_NAME>
Subject: <SUBJECT>
Body:
<EMAIL_BODY>

The user wants to reply with the following intent or message:
<User Intent: "<USER_INPUT>">

Draft a clear, polite email response in the user's style, addressing all relevant points from the original email.
- Start with an appropriate greeting.
- Acknowledge the sender’s points or questions.
- Include the information or answer the user wants to convey.
- Keep the tone <STYLE_TONE> and the length moderate.
- End with a sign-off and the user’s name.

Reply Draft:"

In this template:

  • <EMAIL_BODY> is the content of the original email (you might also include a short summary of it above to focus the model, but GPT-4 can handle the whole body if not too large).
  • <USER_INPUT> is the transcribed text of the user’s spoken instructions for the reply. For example, if the user said, “I’m interested, ask them for the schedule,” that text goes here.
  • <STYLE_TONE> is a variable where you can insert a desired tone/personalization. For MVP, you might say “professional” or “friendly” based on a default or user setting. If you have a user preference stored (say the user prefers casual language), you can insert that, e.g., “Keep the tone casual and friendly” or “Keep the tone formal”.
  • We explicitly list steps: greeting, acknowledging, answering, sign-off. This helps GPT-4 structure the reply properly.

The model will then output a draft email. The app can take this draft and either show it to the user on screen or read it out loud. The user can then decide to send it or not.
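Filling the reply template can likewise be a small pure function. In this sketch the interface fields mirror the placeholders in the template; the default tone and the sign-off handling are assumptions you would replace with real user settings:

```typescript
// Inputs for the reply-draft prompt; field names mirror the template placeholders.
interface ReplyContext {
  senderName: string;
  subject: string;
  emailBody: string;
  userInput: string;  // transcribed spoken intent
  tone?: string;      // e.g. "professional" or "casual and friendly"
  userName?: string;  // inserted into the sign-off instruction if provided
}

function buildReplyPrompt(ctx: ReplyContext): string {
  const tone = ctx.tone ?? "professional"; // assumed default tone
  const signOff = ctx.userName
    ? `- End with a sign-off and the user's name (${ctx.userName}).`
    : "- End with a sign-off and the user's name.";
  return `You are an email assistant helping the user draft a reply.
The user received the following email:

[Original Email]
From: ${ctx.senderName}
Subject: ${ctx.subject}
Body:
${ctx.emailBody}

The user wants to reply with the following intent or message:
<User Intent: "${ctx.userInput}">

Draft a clear, polite email response in the user's style, addressing all relevant points from the original email.
- Start with an appropriate greeting.
- Acknowledge the sender's points or questions.
- Include the information or answer the user wants to convey.
- Keep the tone ${tone} and the length moderate.
${signOff}

Reply Draft:`;
}
```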

User Style Personalization: If possible, incorporate the user’s style or common phrases. For example, maybe allow the user to input their name and preferred sign-off (like “Best regards, Alice”). You can then add to the prompt: “Sign off with the user’s name (Alice)”. Similarly, if the user prefers a short informal style, you reflect that in <STYLE_TONE> or elsewhere in instructions. For MVP, it’s okay to start with a generic friendly tone and the user’s name manually included.

Safety and Confirmation: We want to ensure the AI doesn’t produce inappropriate content, especially if “Send without reviewing” is used. GPT-4 is generally safe, but to be sure:

  • In the prompt, you can add a line like: “Do not include any content that the user did not intend or any sensitive information not provided.” This reminds the model not to hallucinate.
  • Also, for the “send without reviewing” command, the app should have a check. (This is more on the app logic side, but you could also have the AI include a disclaimer or flag.) For instance, you might choose to have the draft reply function prepend “[DRAFT]” or some marker, but since we plan to confirm with the user, that’s probably unnecessary.
  • We’ll rely on the app itself to ask “Are you sure you want to send this without reviewing?” as a voice prompt confirmation (see UI/UX section). The AI’s role is just to provide the content.

Using GPT-4 vs GPT-3.5: GPT-4 is more capable but slower and costlier. Since this is an MVP and for a “lovable” experience, GPT-4 will likely produce better summaries and replies, especially for nuanced email text. If you find performance is an issue, you could use GPT-3.5 for the summarization (faster/cheaper) and GPT-4 for drafting replies where quality is more crucial. For now, assume GPT-4 via the gpt-4 model endpoint. In your OpenAI API calls (in the edge functions), set the model to gpt-4. Provide the prompt as designed, and include all necessary parts (email text, user input) in the API request. Parse the response (it will be text of the summary or draft).
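The edge-function call to OpenAI can be sketched as follows. The endpoint and payload shape match OpenAI's chat completions API; the system-message wording and temperature value are assumptions you can tune:

```typescript
// Request body for POST https://api.openai.com/v1/chat/completions.
interface ChatRequest {
  model: string;
  messages: { role: "system" | "user"; content: string }[];
  temperature: number;
}

function buildChatRequest(prompt: string, model = "gpt-4"): ChatRequest {
  return {
    model,
    messages: [
      { role: "system", content: "You are a helpful email assistant." },
      { role: "user", content: prompt },
    ],
    // A lower temperature keeps summaries and drafts more predictable.
    temperature: 0.3,
  };
}

// Send the prompt and return the model's text response.
async function callOpenAI(prompt: string, apiKey: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildChatRequest(prompt)),
  });
  if (!res.ok) throw new Error(`OpenAI request failed: ${res.status}`);
  const json = await res.json();
  return json.choices[0].message.content as string;
}
```

Swapping in gpt-3.5-turbo for summaries is then a one-argument change to `buildChatRequest`.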

Testing the prompts: After writing the edge function code for summarization and drafting, test them with a sample email. For example, use a dummy email text and a dummy user instruction, call the function (you can do this via an HTTP request or via Lovable’s testing if it shows the function output), and check the returned summary/draft. Make sure the summary is not too long and captures main points; adjust wording if needed (“concise” vs “brief”, etc., if the model is overshooting length). Similarly, check the draft format – it should read like an email (with greeting and sign-off). If it’s missing these, double-check that the prompt explicitly asked for them as above.

By carefully engineering these prompts, we make the AI’s output predictable and useful, which is key for a good user experience in a hands-free scenario. Once these are confirmed working, integrate these calls into the app’s flow: e.g., when the user says “Read Summary,” call summarizeEmail function and then use TTS to speak the result; when the user says “Record Reply” and finishes speaking, call generateDraftReply with their input and then perhaps immediately read the draft out or present it.

Gmail OAuth Setup (Google API Configuration)

Setting up Gmail access requires creating OAuth credentials and getting Google’s approval for the required scopes. This is one of the more involved steps. We need to register our app with Google, specify what data we need (scopes for Gmail API), and go through verification. Here’s how to do it:

1. Create a Google Cloud Project: Go to the Google Cloud Console (console.cloud.google.com) and create a new project for this app (or use an existing project if appropriate). Give it a name (e.g., “Voice Email Assistant”).

2. Enable Gmail API: In the Google API Library, find Gmail API and enable it for your project. This allows your app to make Gmail API calls.

3. Configure OAuth Consent Screen: This is crucial for Gmail scopes:

  • In the Google Cloud Console, under “APIs & Services” > “OAuth consent screen”, set up the consent screen. Choose External (unless you will only use it with personal Google accounts in testing, but ultimately external is needed for other users).
  • Fill in the basic app information: App name (the user will see this when granting permissions), user support email, and developer contact email.
  • Scopes: Add the scopes your app will request. For our functionality, add:
    • https://www.googleapis.com/auth/gmail.modify – Allows reading emails and modifying them (marking read/unread, changing labels, archiving). We use this instead of gmail.readonly because we also need to mark emails read/unread and archive them, which are modifications; it includes read access.
    • https://www.googleapis.com/auth/gmail.compose – Allows creating and sending emails and drafts. This is needed to send emails and to save drafts.
    • (Including gmail.send separately is not necessary if we have gmail.compose, since compose already covers sending. We also don’t need the full https://mail.google.com/ scope, which is overly broad and only required for things we don’t do, like permanent deletion.)
    • You can also include https://www.googleapis.com/auth/userinfo.email and profile if you want basic Google profile info, but those are handled if you use Google sign-in via Supabase. For Gmail API access alone, it’s not needed to list those here.
  • Google classifies scopes as “sensitive” or “restricted”; Gmail modify and compose are Restricted scopes, so Google will flag them and require verification.
  • App Domain: If you have a hosted domain for your app (for example, if you will deploy it on a custom domain or the Lovable provided domain), you can enter it. This is used in the consent screen to show users where the app runs. For testing, you might not have a custom domain yet; that’s okay.
  • Authorized domains: add your app’s domain or the domain where it’s hosted (for development, you might use localhost – note that for consent screen, Google might not accept “localhost” as an authorized domain, so you may need to use a deployed URL for OAuth to work properly, e.g., a temporary Netlify/Vercel URL or Lovable’s domain if they provide one for your app).
  • Application Homepage and Privacy Policy: Since this will eventually be public, you should prepare a simple Privacy Policy explaining what data you access and how you use it. For MVP/testing, you can use a placeholder (e.g., a Google Doc link or simple site stating you don’t store personal data beyond what’s necessary). Google will require a privacy policy link especially for restricted scopes.
  • Save and continue the consent screen setup. For restricted scopes, you’ll later need to submit for verification, but you can do testing in the meantime with your own account as a test user.

4. Create OAuth Credentials: Now go to “APIs & Services” > “Credentials” in the console:

  • Click “Create Credentials” > “OAuth client ID”.
  • Choose Web Application as the application type (since our app is a web app).
  • Name it (e.g., “Voice Email Assistant Web OAuth”).
  • Authorized redirect URIs: Enter the URI where Google should redirect after a user authorizes. This must match exactly what your app will use. For example, if your app is running on https://myapp.lovable.site, you might have a redirect like https://myapp.lovable.site/oauth/callback (if you set up a route for handling the OAuth response). If you are using a Supabase Edge Function URL for the callback, you can put that here too (though typically, you’d redirect back to the front-end which then triggers the edge function).
  • For development, you can use something like http://localhost:3000/oauth/callback if testing locally, and add that as well.
  • Once created, Google will give you a Client ID and Client Secret. Copy these. These go into your app’s configuration (we will put them as secrets in Supabase as discussed, and use them in our OAuth flow code).

5. Implement OAuth Flow in App: (This overlaps with our earlier plan in Supabase Edge Functions, but here’s the high-level sequence)

  • User triggers Gmail Connect: In the app UI, provide a button like “Connect Gmail” or on first use of Gmail features, prompt the user to connect. When clicked, your app should initiate the OAuth flow. Typically, you redirect the user to Google’s OAuth consent URL. This URL will include your client ID, requested scopes, redirect URI, and some state.
  • Example of OAuth URL: https://accounts.google.com/o/oauth2/v2/auth?client_id=<CLIENT_ID>&response_type=code&redirect_uri=<CALLBACK_URI>&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fgmail.modify+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fgmail.compose&access_type=offline&prompt=consent&state=<RANDOM_STATE>
    • We include access_type=offline to ensure we get a refresh token (necessary for long-term access).
    • prompt=consent ensures the user is asked every time (useful in dev; in production returning users might not need to re-consent unless token expired).
    • The scopes are URL-encoded in the query (as shown).
  • Google will show the consent screen to the user, listing “Read, compose, send, and modify your email” (because of the scopes we chose). The user (you, during testing) will have to accept.
  • After acceptance, Google redirects to your specified redirect URI with a code parameter (and the state you provided for security).
  • Your app (or the edge function) catches this code. If you directed to a front-end route, your front-end should immediately forward the code to the backend (Edge Function) to exchange for tokens. If you directed straight to a backend endpoint (like an edge function), that function will do the exchange directly.
  • Use Google’s token endpoint: https://oauth2.googleapis.com/token with a POST request containing code, client_id, client_secret, redirect_uri, and grant_type=authorization_code. The response will include access_token, refresh_token, expires_in, and scope info.
  • Store these in the EmailTokens table (one row per user). The refresh_token is critical for ongoing access; the access_token is short-lived (usually 1 hour). Save the timestamp and expires_in to know when to refresh.
  • Indicate to the app that connection succeeded. If this was done via a backend function redirect, maybe have it redirect the user to a “Connected successfully” page or back to the main app screen. If via front-end, you can simply close a popup or show a message.
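
The authorization URL from the example above can be assembled programmatically. Below is a minimal Python sketch; the function name and placeholder values are illustrative, not part of any specific library:

```python
# Sketch of step 5: building the Google OAuth consent URL described above.
# The client ID and redirect URI values here are placeholders.
import secrets
from urllib.parse import urlencode

AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"
SCOPES = [
    "https://www.googleapis.com/auth/gmail.modify",
    "https://www.googleapis.com/auth/gmail.compose",
]

def build_consent_url(client_id: str, redirect_uri: str, state: str) -> str:
    """Return the URL to redirect the user to for Gmail consent."""
    params = {
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": redirect_uri,
        "scope": " ".join(SCOPES),  # urlencode escapes the spaces and slashes
        "access_type": "offline",   # ask for a refresh token
        "prompt": "consent",        # force the consent screen (dev setting)
        "state": state,             # random value, verified on the callback
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

# Usage: generate a random state, store it in the session, then redirect:
url = build_consent_url("my-client-id",
                        "https://myapp.lovable.site/oauth/callback",
                        secrets.token_urlsafe(16))
```

On the callback, compare the returned state to the stored one before exchanging the code, to guard against CSRF.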

6. Test Gmail API Calls: Now that you have a token, try a test call to Gmail API (you can do this in the edge function or use Google’s Try-it or a tool like curl/postman). For example, call GET https://gmail.googleapis.com/gmail/v1/users/me/profile with the Authorization: Bearer <access_token> header to verify it’s working (this returns the user’s Gmail address and messages total). Also test reading an email:

  • Use the messages.list endpoint: GET https://gmail.googleapis.com/gmail/v1/users/me/messages?labelIds=INBOX&maxResults=1 to get one message ID from the inbox.
  • Then messages.get: GET https://gmail.googleapis.com/gmail/v1/users/me/messages/<MESSAGE_ID>?format=full to get the full email including payload. Check that you can retrieve headers (From, Subject) and body. The body arrives in MIME parts, base64url-encoded (HTML emails usually include both text/html and text/plain parts). For summarization, extract and decode the text/plain part. (Note: the format parameter accepts only minimal, metadata, raw, or full – there is no plaintext option – so use format=full and decode the part you need, or format=metadata if you only need headers.)
  • These tests ensure your OAuth setup is correct. If you encounter permission errors, double-check scopes and that the access token actually has those scopes (Google sometimes issues an access token with fewer scopes if something was wrong in the request).
  • Also test sending: use POST https://gmail.googleapis.com/gmail/v1/users/me/messages/send with a raw email in RFC 822 format. The API expects the full email content (headers + body) as a base64url-encoded string in the raw field of the POST body. Client libraries can build this for you, or you can construct it manually (e.g., "To: X\r\nSubject: Y\r\n\r\nBody") and encode it – be sure to use URL-safe base64, since the standard alphabet’s + and / characters will cause errors. When the edge function sends an email, verify that it actually appears in the Gmail Sent folder and reaches the recipient.
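
The raw message construction can be sketched in Python with the standard library; note that Gmail expects URL-safe base64 (base64url), and the helper name here is illustrative:

```python
# Sketch of the "raw" field for messages.send: build an RFC 822 message
# and base64url-encode it, as the Gmail API expects.
import base64
from email.message import EmailMessage

def build_raw_email(to: str, subject: str, body: str) -> str:
    """Return a base64url-encoded RFC 822 message for the send endpoint."""
    msg = EmailMessage()
    msg["To"] = to
    msg["Subject"] = subject
    msg.set_content(body)
    # URL-safe base64, not the standard alphabet.
    return base64.urlsafe_b64encode(msg.as_bytes()).decode("ascii")

# The JSON POST body would then be: {"raw": build_raw_email(...)}
```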

7. Google OAuth Verification Considerations: Because we’re using restricted scopes (read/modify Gmail), Google will treat this app as a high-risk application. For initial testing, you can add yourself (and any test users) as Test Users in the OAuth Consent Screen configuration. This allows those accounts to go through the consent without the app being fully verified, albeit with the “Unverified App” warning. You can test the functionality with up to 100 test users this way.

  • In the OAuth consent screen settings, there’s a section to add test user emails. Add the Gmail accounts you will use (your own, etc.).
  • When ready to release publicly, you must submit for verification. This involves:
    • Providing a detailed explanation for why you need these scopes. Explain that the app is an email client with voice assistant features, so it needs full read access to emails to function (reading aloud and summarizing) and needs compose/send to reply on the user’s behalf. Emphasize that the data is used to help the user manage their own email and is not used for any other purpose or shared elsewhere except as necessary for the stated functionality.
    • You will need a published privacy policy and must show that the app complies with Google’s user data policies. In particular, you must declare any transfer of Gmail data to external services. In our case, we send email content to OpenAI for summarization and drafting – that counts as transferring Google user data to a third party, and Google will want to know about it. Be transparent, e.g.: “This app sends email text to the OpenAI GPT-4 service to generate summaries and draft replies. This is done securely, only to fulfill the user’s request, and is disclosed in our privacy policy.” Google’s restricted-scope guidelines also expect the third-party service to meet its data-security requirements; OpenAI is a well-known entity, but you should not assume it is pre-approved – disclose the transfer explicitly.
    • You might be asked to submit a video demonstrating how the app works and how data flows. This would involve showing the consent process, then the app reading an email, summarizing, etc., and highlighting that the user’s data is only being used within those features.
    • Because of restricted scopes, after basic verification, Google may require a third-party security assessment if you want to publish to general users (not just tests). This is an expensive audit. Many developers avoid this by keeping the user base small or domain-restricted. For an MVP, you likely won’t do this step. But be aware: to truly be “App Store ready” as a public product, this security assessment might eventually be needed if you have many users. It’s something to plan for (or find ways to reduce scope usage).
    • Since this app essentially acts as an email client, Google’s reviewers may treat it like other established mail clients, which can smooth the review – but the restricted-scope process is usually strict regardless.

8. Using Gmail API in Production: After passing verification, users will see your app name instead of the “unverified” warning, and can trust the app. Always ensure you handle the tokens securely and respect user data. We’ve structured things such that sensitive data (emails) mostly stays transient (fetched and either read aloud or summarized, not stored permanently except maybe in logs for short term). In your Privacy Policy, clarify that email content may be processed by AI but not stored long-term on your servers (unless user explicitly saves something). This will help in the review and also build trust with users.

In summary, the Gmail OAuth setup is about obtaining client credentials, coding the OAuth flow (which we handle with Supabase functions and Lovable’s help), and getting Google’s approval. Once this is set up, your app can programmatically read emails (with user consent) and perform the actions needed for the voice assistant.
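
The “save the timestamp and expires_in” bookkeeping from step 5 boils down to a small check before each Gmail call. A Python sketch, with an illustrative safety margin:

```python
# Sketch of the refresh decision: an access token usually lives ~1 hour;
# refresh a few minutes early so a request never fails mid-call.
import time

def needs_refresh(obtained_at: float, expires_in: int, margin: int = 300) -> bool:
    """True if the stored access token is within `margin` seconds of expiry."""
    return time.time() >= obtained_at + expires_in - margin

# Usage: if needs_refresh(row_obtained_at, row_expires_in), POST the stored
# refresh_token to https://oauth2.googleapis.com/token with
# grant_type=refresh_token, then update the row with the new access_token.
```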

UI/UX (MVP Handsfree Mode Design)

For the MVP, the user interface should be simple, clear, and optimized for mobile use. Since the primary interaction is voice, the UI serves mostly as a backup and visual aid. However, it must facilitate the voice features (e.g., provide buttons to trigger voice commands and display minimal email info). Here’s the design outline:

  • Layout: Use a single-screen interface (after login) dedicated to the hands-free email reading mode. This screen can be thought of as the “dashboard” when processing emails.
    • At the top, display the current email’s sender and subject in large text. This gives context to the user about what email is being discussed. For example, “From: Alice Smith – Subject: Project Update”.
    • Below that, have a text area or box that can show either the email summary or the full email text. This can dynamically update when the user requests "Read Summary" or "Read Email." (The text can scroll if long, but since they are listening, this is secondary. Still, showing it can be useful if the user glances at the screen.)
    • The bottom half of the screen (especially on mobile) should be dedicated to large, easy-to-tap buttons for the main controls. Use a grid or horizontal arrangement that’s easy to press without precision (remember, the user might be on the go or using one hand).
  • Main Control Buttons: Include six primary buttons (with descriptive icons/text):
    1. Read Summary – When tapped, triggers the summary reading. (Icon idea: a document with lines and a volume symbol).
    2. Read Email – Triggers reading the full email content. (Icon: an email/envelope and a volume icon).
    3. Next – Advances to the next email in the queue. (Icon: a right arrow or ">" symbol).
    4. Back – Returns to the previous email. (Icon: a left arrow or "<" symbol).
    5. Reply (Record Reply) – Starts the voice recording for reply. (Icon: a microphone or a reply arrow symbol). When active, this could change color to indicate listening.
    6. Exit – Exits hands-free mode (Icon: perhaps a power symbol or an "X"). This might bring the user back to a main menu or simply stop voice capture and stay on the current email.
    These buttons should be large enough on a phone screen to press easily. For example, you might arrange them in two rows of three buttons each, or a single row scrollable. Make sure to label them clearly (short labels like “Summary”, “Read”, “Next”, “Back”, “Reply”, “Exit” under or on the buttons).
  • Additional Controls: We also need a way to toggle "Important Only" vs "All Messages". This could be a smaller button or switch at the top of the screen (perhaps near the subject line or in a header bar):
    • For instance, a toggle that lights up or shows "Important Emails Only" when active, and "All Emails" when off. The user can tap it to switch modes. Also allow voice command “Important only” to activate it, and “All messages” to deactivate, for hands-free use.
    • Indicate the current mode on the UI, maybe with an icon (a star for important filter).
    • This filter determines which emails Next/Back traverse. If "Important Only" is on, the app should cycle only through emails Gmail has marked important (e.g., by passing labelIds=IMPORTANT in the messages.list query).
  • Voice Feedback & Indicators: Since voice is key, consider the following UI/UX elements:
    • When the app is speaking (TTS), you might show a visual indicator (like a simple animation or icon) to show that speech is playing. For example, a small equalizer or wave icon animating to indicate talking. This is feedback so the user knows the app is in speaking mode (useful if they have sound off or low).
    • When the app is listening for voice commands (STT), definitely show an indicator. This could be a pulsing microphone icon, or a waveform reacting to sound, or a text like “Listening…”. This feedback is important so the user knows their voice is being captured. If possible, also play a subtle sound (“beep”) when starting to listen and maybe when stopping, similar to how virtual assistants (Siri, Google Assistant) do, to cue the user.
    • After a voice command is captured, you can display the transcribed text briefly on screen (like subtitles for the user’s command). For example, if the user said “Archive this email”, you can show “You said: Archive this email” for confirmation. This can reassure the user that the command was recognized correctly. (If it’s wrong, the user could then use the button instead or try again.)
  • Email Content Display: For reading along or if the user wants to manually read:
    • When "Read Summary" is triggered, you can populate a text box area with the summary text as well as speak it. The user can visually confirm the summary.
    • When "Read Email" is triggered, you might display the full email text in the area (perhaps retrieved via the API). Because emails can be long, make that area scrollable. But since this is MVP, we might not allow full manual scrolling reading – voice is the priority. Still, having it there is useful.
    • Possibly highlight the portion being read if you want to get fancy (not required, but could be nice UX).
  • Modal Confirmations: There are a couple of actions that should prompt the user:
    • Send without Reviewing: If this command is initiated (either by voice or perhaps a button if you add one), the app should confirm. On voice interface, it might ask aloud “Are you sure you want to send this email now without reviewing?” and wait for a yes/no. On UI, you could also pop up a simple confirm dialog with “Send Now?” and a confirm/cancel button. Because the user might not be looking, the voice confirmation is key. For MVP, implementing the voice confirmation is ideal. If voice recognition hears “yes”, proceed to call send; if “no” or no response, cancel sending. A visual dialog can accompany this for completeness.
    • Delete or Destructive Actions: We don’t have a delete command in MVP (archive is the closest, which is reversible from Gmail’s All Mail). Archive and Snooze are not truly destructive, so they might not need confirmation. But you can optionally confirm archive with a quick voice response “Archived.” or for snooze “Snoozed for X time.”
    • Recording Reply End: Perhaps not a confirmation, but when the user finishes dictating a reply, you might have them say “Send” or “Stop” to end recording; alternatively, a timed silence can stop it. On the UI, you could provide a “Stop recording” button while in reply mode. Once stopped, use the AI to generate the draft. After generating, you can either read the draft aloud or show it. Since the mode is hands-free, a good approach is for the app to immediately read the draft aloud (“Draft Reply: …” followed by the generated text). The user can then say “Send” if it sounds good, or “Cancel” if not. Editing by voice is complex, so if they don’t like the draft, they can simply not send it and say “Record Reply” again to try a different message.
    • Make sure these interactions are clear to the user to avoid accidental sends.
  • Visual Style: Keep colors and fonts simple and high-contrast. For example, a light background with dark text or vice versa, so it’s easily readable from a short distance if the phone isn’t directly in front of the user’s face. Use large font sizes for important info (sender, subject, the summary text when displayed). The buttons should have distinct colors or icons to quickly identify (for instance, Next could be green or have a ▶ symbol, Back could be orange with ◀, Reply maybe blue with a mic, etc., as long as it’s consistent).
  • Responsive Design: Ensure the layout works on common phone screen sizes. Lovable can help with CSS if needed. The buttons should stack or shrink appropriately on smaller screens. On larger screens (tablet or desktop), the layout can center and not stretch too wide (maybe fix a max width).
  • Accessibility: Because the app is voice-focused, it inherently considers accessibility. But also ensure elements have proper labels and that if a visually impaired person used it, the voice output covers usage (likely fine). Conversely, if someone can’t use voice, the buttons allow control, so that’s good. In MVP, covering both fully is a lot, but having both modalities (voice and touch) ensures more users can operate it.
  • App States: Consider what the UI shows in different states:
    • If the user hasn’t connected Gmail yet: Instead of the email view, show a message or screen “Connect your Gmail account to begin” with a connect button (which triggers the OAuth flow). Lovable can have a conditional UI: if not connected (no token), show this message; if connected, show the email interface.
    • If no emails are in the inbox (or in the filter): Show a message like “No new emails!” and perhaps allow the user to refresh or exit.
    • Loading indicators: when fetching an email or waiting for AI response, show a spinner or a subtle “Thinking…” message so user knows something is happening.
  • Testing UI Flow: As part of development, simulate the typical scenario:
    1. User logs in (the app shows a login screen from Lovable’s auth integration).
    2. After login, user sees either a prompt to connect Gmail or if already connected (token exists), jump to hands-free screen.
    3. User taps “Read Summary” or says it – summary is displayed and spoken.
    4. User says “Next” – the app goes to next email, updates the header (sender/subject), and maybe automatically goes into listening state waiting for a command on that new email.
    5. User says “Read Email” – full email is spoken.
    6. User taps “Reply” – app beeps and listens; user speaks; app generates draft; app reads draft.
    7. User says “Send” – app confirms sending and calls the function; maybe speaks “Sent!” and moves to next email.
    8. User says “Exit Handsfree” – app stops listening and maybe returns to a neutral state (perhaps back to a main screen or just stops auto advancing).
    Adjust the UI elements and interactions to support this flow smoothly. The hands-free loop (reading and awaiting voice commands) is the heart of the app, so get that as seamless as possible.

In summary, the MVP UI should be intuitive even for a non-technical user: one screen, big buttons, clear prompts. The user should be able to rely on voice most of the time, but the UI provides feedback and an alternative control if needed. Because this is a new interaction style (voice email), clarity is more important than visual flashiness. Keep it simple and test with a user or two if you can – see if they get stuck anywhere or if any voice command isn’t obvious. Use that feedback to tweak labeling or add a hint.

Roadmap and Next Steps (Beyond MVP)

With the MVP implemented and working, the next steps involve polishing the app, ensuring it’s robust and ready for wider release, and eventually packaging it for app stores and the Chrome Web Store. Here’s a high-level roadmap:

  • Testing & QA: Conduct thorough testing of the MVP:
    • Test on different devices (iOS Safari, Android Chrome, desktop browsers) to ensure the voice features and UI work consistently. For example, Chrome has good support for Web Speech API; Safari on iPhone might have limitations (you may need to adjust or use a different STT approach on iOS).
    • Identify any misrecognition issues with voice commands. You might need to implement some logic for error handling, e.g., if the voice command isn’t recognized clearly, the app can say “Sorry, I didn’t catch that. Please say it again or use a button.” This improves usability.
    • Test edge cases: extremely long emails (does summarization handle it?), multiple people talking (does STT get confused?), network loss (what happens if an API call fails – you should handle errors gracefully, e.g., “Failed to fetch email, please check connection”).
    • Fix bugs and refine performance (caching tokens in memory to avoid repeated DB calls, etc., if needed).
  • Performance Improvements: Using GPT-4 is great but slow; consider:
    • Implement caching of summaries: if a user goes back to a previously summarized email in the same session, reuse the existing summary instead of calling API again.
    • Similarly, maybe pre-fetch the next email’s summary while the current one is being read to reduce wait time.
    • These are enhancements that can be added once basic functionality is confirmed.
  • Voice Interaction Enhancements: To make it truly “lovable,” consider:
    • Adding a wake word or always-listening mode (hard to do reliably on the web, so it may be skipped for now; alternatively, add a mode that auto-advances and reads the next email’s summary without needing “Next” each time, until the user interrupts).
    • More voice commands or natural language understanding: e.g., user could say “Archive and next” in one sentence. You could attempt to parse compound commands. This might involve using the OpenAI API in a different way (like an intent classifier). For MVP, stick to the fixed commands, but later this could be an improvement.
    • Multi-language support: if you want to support reading emails in other languages or if the user speaks commands in another language. This could be a future feature (requires language detection and perhaps switching TTS/STT models).
  • Security & Privacy: Before a broader release, harden the security:
    • Ensure all communication between the front-end and Supabase (database and functions) is over HTTPS (Lovable likely handles this).
    • Review data storage: For example, are you storing any email content in your database? If not needed, avoid it to reduce risk. Perhaps you only store tokens and let Gmail hold the content (fetch live). That way, if your DB is compromised, it has minimal data.
    • Possibly encrypt sensitive fields in the database (Supabase can encrypt columns or you could encrypt refresh tokens using a key).
    • Provide a way for users to disconnect their Gmail (revoke tokens). That could mean deleting their tokens from the DB and instructing them how to revoke the app’s access in their Google Account settings. Good to mention in the UI or privacy policy.
  • Polish the UI: Based on testing, refine the interface:
    • Maybe add animations or nicer styling now that functionality is set. For example, transitions when moving to next email (so it’s visually clear the content changed).
    • Customize the voice if using a provider that allows it: e.g., choose a pleasant voice, adjust speaking rate if necessary (some TTS APIs let you set speed and pitch).
    • Add a small help overlay or tutorial for first-time users: e.g., a popup or section that says “You can say 'Read Summary' to hear a quick overview of the email. Try using the voice commands listed on the screen.” This can onboard new users.
  • App Store Readiness (Mobile): To publish on app stores (iOS App Store and Google Play):
    • Wrap the web app in a native shell. One approach is using Capacitor or Cordova to create a hybrid app. Essentially, your web app runs inside a WebView in a native app container. This allows you to access native features if needed and distribute through app stores.
    • Another approach is to make it a Progressive Web App (PWA). If the app is a PWA with a manifest and is served over HTTPS, Android users can “Add to Home Screen” and even get an app-like experience. iOS also allows adding to home screen, though not as full-featured as native.
    • For the App Store (iOS), you might prefer the Capacitor route because Apple is particular about certain things (they might reject an app that’s just a thin wrapper of a web app if it doesn’t feel native enough). With Capacitor, you can integrate the microphone permission properly (you’ll need to include a usage description in the iOS app’s Info.plist like “This app uses the microphone to enable voice commands for email.”).
    • Prepare the required assets: app icon, launch screen, and descriptive text for the store listing. Emphasize the hands-free email capability in the description.
    • Testing on actual devices as a native app is important because there might be slight differences (e.g., audio focus handling, or the need to ask microphone permission explicitly on first use in the native context).
  • Chrome Extension (Chrome Web Store): The user mentioned Chrome Store – likely meaning making this available as a Chrome extension or perhaps a web app listed in the Chrome Web Store.
    • One idea: Develop a Chrome extension that, when the user is in Gmail (on desktop), provides a voice assistant overlay. For example, the extension could detect the Gmail page and add a “Voice Assistant” button. Clicking it could pop up a mini-window or overlay that runs the same interface you built, but perhaps without needing separate login if you can leverage the logged-in Gmail (though that’s tricky, better to stick with our OAuth).
    • Essentially, the extension would serve as another frontend to the same backend. It could reuse the web app’s code with minor modifications. Chrome extensions can use the Web Speech API directly as well, or even the Chrome extension TTS engine.
    • However, building an extension might require a separate code base or at least packaging. For MVP, this is a stretch goal, but if the target audience includes desktop users who want to triage email by voice, it’s worth planning. The extension would need to be published to the Chrome Web Store and comply with their policies (similar to Google’s, they’ll want to know what data it accesses – in this case Gmail via user’s token).
    • Possibly simpler: just instruct desktop users to use the web app in a tab – the experience is similar, though an extension could be more convenient. This can be decided after MVP.
    • If focusing on mobile first (which voice use cases often do), you might prioritize the mobile app path and consider the Chrome extension later.
  • Scalability & Multi-User support: If you demo with a few users and decide to roll out:
    • Monitor your OpenAI API usage (each call has a per-token cost). Implement limits or a plan if usage scales. You could track a usage counter in UsageLogs and consider monetization (e.g., X free uses per month, then subscription) – a future business consideration.
    • If many users will use it, ensure your Supabase plan can handle the requests or upgrade accordingly.
    • Logging and analytics: integrate something like logging errors (Supabase logs or external service) to catch issues in production. Also, see which features are used most (maybe track voice command frequency) to guide future improvements.
  • Feature enhancements: Once MVP is solid, you can expand:
    • Multi-account or other email services: Perhaps allow connecting multiple Gmail accounts, or even Outlook accounts (that would require supporting Microsoft Graph API – a whole new integration).
    • Smart filtering: not just important vs all, but maybe “unread emails only” or “emails from VIP contacts first”, etc.
    • Reply templates or Quick Actions: For example, detect if the email asks a yes/no question and offer quick voice reply options (“Yes, that works” / “No, sorry”) without even invoking GPT.
    • Language support for emails: If a user gets an email in Spanish and they only understand English, perhaps integrate a translation step in summary (that’s beyond MVP but interesting).
    • Better Snooze: Use actual scheduling – e.g., integrate with a small cron job (Supabase Edge Functions can be scheduled with Cron) to re-add snoozed emails to inbox or send a notification at snooze time.
    • UI for email list: Outside of hands-free mode, you might offer a more traditional interface to select which email to start with or see a list of recent emails and their summaries. This could complement the voice mode for times when voice isn’t feasible (like in a public place).
  • App Verification & Publication:
    • Complete Google’s verification for OAuth (as detailed, including possible security assessment if you go big).
    • Ensure compliance with Apple App Store guidelines (they will look at the functionality; since this app requires a Google login and potentially doesn’t work without a Google account, mention that in the app description. Apple sometimes rejects apps that are just “wrappers” for web content, but yours has unique functionality so it should be fine).
    • For the Chrome Web Store, prepare a privacy policy and description as well (likely can be similar to the Google OAuth one).
    • You might also consider publishing on Product Hunt or other platforms once it’s ready to get initial users and feedback.
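
The summary-caching idea under Performance Improvements could look like this hypothetical sketch: a per-session, in-memory cache keyed by Gmail message ID, so revisiting an email never repeats the OpenAI call:

```python
# Sketch of per-session summary caching: the (placeholder) summarize
# callable runs at most once per Gmail message ID.
class SummaryCache:
    def __init__(self, summarize):
        self._summarize = summarize  # e.g., a wrapper around the OpenAI call
        self._cache = {}
        self.misses = 0              # how many times the API was actually hit

    def get(self, message_id: str, email_text: str) -> str:
        if message_id not in self._cache:
            self.misses += 1
            self._cache[message_id] = self._summarize(email_text)
        return self._cache[message_id]
```

The same structure extends naturally to the pre-fetch idea: call get() for the next email in the background while the current one is being read aloud.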

By following this roadmap, you will gradually transition the project from a functional MVP to a polished product ready for end-users. Each step – testing, polishing, verifying – ensures that by the time people download the app or extension, they have a smooth experience. Keep user feedback loops open; since this is a novel way to manage email, user insights will be valuable for improvement.

In conclusion, the MVP specification gives you a foundation: Gmail integration, voice command loop, AI summaries/replies, all working in concert. From here, focus on reliability and then on making it delightful. With careful execution, your voice-controlled email assistant can become an everyday productivity tool for users who need to manage email on the go or in hands-free environments.

ChatGPT Deep Research Prompt

Hi! I would like to create an app that allows me to check my email and dictate responses that will guide AI generated responses to my Gmail. I would like to be able to define a style guide for my email. I would also like to be able to have recommended drafts generated and I would like it to read my email aloud to me if I tag it or have an automated tag put on it.
I'm not sure if this should be a chrome plug-in or a Gmail plug-in but my preference is that it would be a standalone app that could be used on mobile so mobile responsive and not like an app that's in the App Store but a separate website and I would want it to be on chrome as well perhaps like Shortwave is.
I'm not entirely sure what technology would be required for this. It would be a paid service so I would want to have a stripe integration with different tiers. For example, one tier might be the ability to create drafts, using AI that are proposed that you can edit, one tier might be that you can dictate responses, and it will make them better and more aligned with your style, and one tier might even allow an assistant to review your drafts for you.
It should analyze your past email interactions and it should also allow you to specify different styles or templates for your responses and also call pre-generated responses if you want.
Can you tell me what technology would be used for this?
Can you tell me if lovable could create this?
Please do not get into the technical specifications yet, I just want to know if this is feasible for a AI created app like lovable, or if I would have to hire someone.
If I need to go into the technical specifications, I will do a deep research query.
Thank you!