Virtual Engagement Officer Overview
Autonomous AI Overview
The Virtual Engagement Officer (VEO) uses autonomous AI to manage donor relationships independently, delivering 8–12 touchpoints per year that guide each donor toward giving as the natural outcome of the relationship.
The VEO operates through two distinct, synchronized components, the Moves Management Model and a content-generation LLM, which work together to make decisions and communicate with donors without requiring ongoing human input.
Moves Management Model
Core Function
The Moves Management Model is a fully autonomous AI decision engine that analyzes donor portfolios to determine the precise next action in the fundraising moves management cycle.
It is trained on fundraising best practices and uses synthetic data generation and predictive modeling to evaluate:
- Donor relationship stage
- Engagement history with the VEO
- Giving patterns and readiness indicators
Based on this analysis, the model identifies the optimal move and timing to advance each donor relationship strategically.
This engine operates independently and does not require human input for decision-making.
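For illustration, a minimal sketch of what this decision step could look like in code, using hypothetical names (`recommend_next_move`, `MoveRecommendation`) and simplified rules standing in for the trained predictive model:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class MoveRecommendation:
    move: str          # e.g. "Cultivation"
    sub_purpose: str   # e.g. "Educate"
    send_at: datetime  # recommended timing for the touchpoint

def recommend_next_move(stage: str, touches_this_year: int,
                        days_since_last_gift: int) -> MoveRecommendation:
    """Illustrative stand-in for the predictive model: map a few readiness
    signals to a next move and a send time."""
    if stage == "new":
        move, sub = "Introduction", "Engage"
    elif days_since_last_gift < 30:
        move, sub = "Stewardship", "Educate"
    elif touches_this_year >= 4:
        move, sub = "Ask", "Solicit"
    else:
        move, sub = "Cultivation", "Invite"
    # Space touchpoints so roughly 8-12 land per year.
    return MoveRecommendation(move, sub, datetime.now() + timedelta(days=35))
```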

Move Categories
- Introduction
- Cultivation
- Ask
- Ask More
- Stewardship
Sub-Purpose Options
- Engage
- Invite
- Educate
- Introduce
- Solicit
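Because the moves and sub-purposes above form fixed vocabularies, a sketch might model them as enums; these class names are illustrative only, not the product's actual schema:

```python
from enum import Enum

class MoveCategory(Enum):
    INTRODUCTION = "Introduction"
    CULTIVATION = "Cultivation"
    ASK = "Ask"
    ASK_MORE = "Ask More"
    STEWARDSHIP = "Stewardship"

class SubPurpose(Enum):
    ENGAGE = "Engage"
    INVITE = "Invite"
    EDUCATE = "Educate"
    INTRODUCE = "Introduce"
    SOLICIT = "Solicit"
```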
Decision Inputs
The Moves Management Model evaluates decisions using:
- Donor data (organization-provided data plus publicly available data, such as geographic wealth indicators)
- Previous interactions and engagement history
- Donor relationship stage and readiness indicators
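One way to picture the inputs listed above is as a single record combining organization-provided and publicly available data; the field names here are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionInputs:
    # Organization-provided donor data
    donor_id: str
    giving_history: list[float]
    # Publicly available enrichment, e.g. geographic wealth indicators
    zip_wealth_index: float | None = None
    # Previous interactions and engagement history with the VEO
    interactions: list[dict] = field(default_factory=list)
    # Relationship stage and readiness indicators
    relationship_stage: str = "new"
    readiness_score: float = 0.0
```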
Communication Method Selection
Once a move is selected, the AI determines the optimal delivery channel:
- Text message
- Robotic handwritten note
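A hedged sketch of the channel decision, assuming a hypothetical `select_channel` helper and simple preference rules in place of the model's actual scoring:

```python
def select_channel(move: str, donor_prefs: dict) -> str:
    """Choose between the delivery channels listed above. Illustrative
    rules only; the real engine weighs engagement history and behavior."""
    if donor_prefs.get("opted_out_of_text"):
        return "handwritten_note"
    # Higher-touch moves might favor the robotic handwritten note.
    if move in ("Ask", "Ask More", "Stewardship"):
        return "handwritten_note"
    return donor_prefs.get("preferred_channel", "text")
```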
Content Generation (LLM)
Core Function
The VEO’s Large Language Model (LLM) serves as the content creation engine. It translates strategic decisions from the Moves Management Model into personalized donor communications.
The LLM synthesizes:
- The selected move and sub-purpose
- Comprehensive donor intelligence
- Historical interaction context
- Organization-specific guidelines
It then generates tailored content that aligns with the organization’s voice and deploys it directly through the selected communication channel.
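The synthesis-and-deploy flow described above might look like the following sketch; `llm_client` and `sender` are placeholders rather than actual APIs:

```python
def generate_and_deploy(move, sub_purpose, donor_profile, history,
                        org_guidelines, channel, llm_client, sender):
    """Illustrative pipeline: fold the selected move and donor context into
    a generation request, then send the result via the chosen channel.
    llm_client and sender are placeholders, not real APIs."""
    request = {
        "move": move,
        "sub_purpose": sub_purpose,
        "donor": donor_profile,        # comprehensive donor intelligence
        "history": history,            # historical interaction context
        "guidelines": org_guidelines,  # organization-specific voice and rules
    }
    draft = llm_client.complete(request)      # generate tailored content
    sender.send(channel=channel, body=draft)  # deploy through the selected channel
    return draft
```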
Prompt Formulation Process
The LLM builds prompts using:
- Selected move and sub-purpose
- Organization-provided donor data
  - All donor PII is anonymized when passing through the LLM
  - Personalization is added back before deployment using data tags
- Historical interaction context
- System settings and constraints, including:
  - Guardrails (content boundaries, e.g., “do not discuss genetic research”)
  - Portfolio context (strategic direction, priorities, and prohibited actions, e.g., “prioritize volunteer opportunities,” “promote the April Gala when relevant,” “use only ABCD, not ABC”)
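A minimal sketch of this prompt-assembly flow, assuming a hypothetical data-tag format such as `{{FIRST_NAME}}`; the actual tag syntax and prompt structure are not specified here:

```python
def build_prompt(move, sub_purpose, donor, history, guardrails, portfolio_context):
    """Assemble a prompt with PII replaced by data tags; real values are
    merged back in only at deployment time."""
    anonymized_donor = {
        "first_name": "{{FIRST_NAME}}",        # PII swapped for a data tag
        "giving_summary": donor["giving_summary"],
        "interests": donor["interests"],
    }
    system = (
        "Write as the organization's virtual engagement officer.\n"
        f"Guardrails: {'; '.join(guardrails)}\n"
        f"Portfolio context: {'; '.join(portfolio_context)}"
    )
    user = (
        f"Move: {move} / {sub_purpose}\n"
        f"Donor (anonymized): {anonymized_donor}\n"
        f"Recent interactions: {history[-3:]}"
    )
    return system, user

def personalize(draft: str, donor: dict) -> str:
    """Re-insert PII via data tags just before deployment."""
    return draft.replace("{{FIRST_NAME}}", donor["first_name"])
```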
Content Development
The LLM:
- Accesses the organization’s secure knowledge base
- Generates personalized message content
- Deploys the message directly to the donor via the selected channel
Organization Knowledge Base
The VEO’s LLM accesses a secure, organization-specific knowledge base to generate content. This knowledge base is built through:
- Seeding the VEO with marketing communications such as solicitations, event invitations, impact stories, and newsletters
- Regular scraping and crawling of approved organization websites
- Social media integrations from approved organization social media handles
The VEO can only reference organization-generated content. For example, if the organization appears in the news, the VEO can only share that information after it has been published on one of the organization’s official channels.
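A sketch of how this organization-only sourcing rule could be enforced at ingestion time; the `seed://` prefix, domain, and handle below are placeholders, not real configuration:

```python
from urllib.parse import urlparse

APPROVED_DOMAINS = {"example-charity.org"}   # approved organization websites (placeholder)
APPROVED_HANDLES = {"@example_charity"}      # approved social media handles (placeholder)

def is_approved_source(source: str) -> bool:
    """Only organization-generated content may enter the knowledge base:
    seeded marketing material, approved websites, approved social handles."""
    if source.startswith("seed://"):         # seeded solicitations, newsletters, etc.
        return True
    if source.startswith("@"):
        return source in APPROVED_HANDLES
    return urlparse(source).netloc.removeprefix("www.") in APPROVED_DOMAINS

def ingest(document: dict, knowledge_base: list) -> None:
    """Add a document only if it came from an approved organization channel;
    outside news coverage is excluded until the organization republishes it
    on one of its own channels."""
    if is_approved_source(document["source"]):
        knowledge_base.append(document)
```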
Key Safeguards
- Human-in-the-loop verification
  - VEOs can receive donor replies and are often designed to elicit responses
  - All VEO responses in two-way engagement are reviewed by the Version2 Operations Team before deployment
  - Reviews verify accuracy, source integrity, and appropriate human handoff recommendations
- Guardrails
  - Systematic constraints defining messaging boundaries
- Portfolio Context
  - Strategic parameters that guide VEO focus areas and limitations
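The review gate for two-way engagement might be modeled roughly as follows; the class and status names are hypothetical, intended only to show that no reply deploys without reviewer approval:

```python
from dataclasses import dataclass

@dataclass
class DraftReply:
    donor_id: str
    body: str
    sources: list[str]            # knowledge-base items the draft relied on
    status: str = "pending_review"

def review(draft: DraftReply, approve: bool, handoff_to_human: bool) -> DraftReply:
    """Stand-in for the operations-team review gate: nothing in a two-way
    exchange deploys until a reviewer has signed off."""
    if handoff_to_human:
        draft.status = "handed_off"   # route the conversation to a gift officer
    elif approve:
        draft.status = "approved"     # cleared for deployment
    else:
        draft.status = "rejected"     # sent back for regeneration
    return draft
```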