This AI Governance & Output Controls Framework describes how LitiSync, Inc. (“LitiSync,” “we,” “our,” or “us”) governs the use of artificial intelligence (“AI”) technologies within the LitiSync platform (the “Service”) to support responsible, transparent, and attorney-directed use of automated processing tools.
This framework should be read together with the applicable Master Service Agreement, Terms of Service, Privacy Policy, Data Processing Agreement, and other governing documents.
1. Purpose
The purpose of this framework is to define governance, operational safeguards, and review controls designed to support responsible use of AI-generated materials in a manner consistent with applicable legal, ethical, and professional responsibility expectations.
AI features are designed to assist attorneys and legal professionals. They are not designed to replace professional legal judgment.
2. Role of AI in the Platform
AI technologies within the Service may be used to:
Organize intake information and case-related materials
Generate summaries of intake communications
Create structured chronologies and profiles
Assist in drafting informational or organizational materials
Support workflow automation and case preparation processes
AI-generated outputs are informational work products intended to assist attorneys and authorized legal professionals.
The Service does not provide legal advice, and AI-generated outputs do not constitute legal advice.
3. Attorney Supervision and Review Requirement
All AI-generated materials produced through the Service are intended to be reviewed, validated, and approved by participating attorneys or authorized legal professionals prior to legal reliance, submission, or external distribution.
Participating law firms remain solely responsible for:
Determining the accuracy and completeness of AI-generated outputs
Editing or modifying outputs as appropriate
Ensuring compliance with professional responsibility rules
Supervising any automated workflows used in client matters
The platform may require acknowledgment, confirmation, or user interaction prior to finalizing certain AI-generated materials.
4. Transparency and Labeling
Where supported, the platform may identify or label AI-generated materials to distinguish them from human-authored content.
Such labeling is intended to support internal review workflows and responsible oversight.
5. Data Handling and Confidentiality
Information processed by AI systems is handled in accordance with applicable platform privacy, security, confidentiality, and data protection controls.
AI processing is conducted to provide platform functionality and is governed by applicable customer agreements.
LitiSync does not use customer matter data to train public AI models or for unrelated commercial purposes, except as expressly permitted by applicable agreements and law.
6. Output Risk Mitigation and Controls
LitiSync maintains governance practices designed to reduce the risk of inaccurate, incomplete, or inappropriate automated outputs, which may include:
Structured prompt and workflow controls
Defined data input schemas
System-level monitoring and evaluation processes
Controlled rollout of AI-enabled features
Internal testing prior to feature deployment
Despite these safeguards, AI systems may produce errors, omissions, or unexpected results.
7. Limitations of AI Outputs
AI-generated outputs:
May contain inaccuracies or incomplete information
May require contextual legal analysis not reflected in the output
May not reflect jurisdiction-specific legal nuances
Should not be relied upon without professional review
LitiSync does not guarantee the accuracy, completeness, or suitability of AI-generated outputs for any specific legal purpose.
8. Customer Responsibilities
Participating law firms and users remain responsible for:
Reviewing AI-generated materials prior to legal use
Applying independent professional judgment
Ensuring compliance with applicable professional responsibility and ethics rules
Determining the appropriate degree of reliance on automated outputs
Maintaining supervision consistent with jurisdictional rules governing technology use
9. Regulatory and Ethical Alignment
LitiSync designs AI-enabled features with awareness of evolving legal ethics guidance and regulatory expectations.
Participating law firms are responsible for evaluating whether use of AI tools within their jurisdiction complies with applicable professional responsibility rules.
10. Continuous Governance and Improvement
LitiSync periodically reviews its AI-related operational practices and governance controls to support responsible AI deployment and to maintain alignment with evolving regulatory standards.
AI functionality may be modified, limited, or discontinued as necessary to manage risk or align with regulatory developments.
11. Changes to This Framework
LitiSync may modify this AI Governance & Output Controls Framework from time to time to reflect updates to the Service, legal requirements, regulatory expectations, or operational practices. Unless otherwise required by applicable law, updates become effective upon posting the revised version. Continued use of the Service after the effective date constitutes acceptance of the revised Framework.
12. Contact
Questions regarding this AI Governance & Output Controls Framework may be directed to:
LitiSync, Inc.
AI Governance & Compliance
Support@LitiSync.com
c/o Law Office of Andrea Paparella, PLLC
134 W. 29th Street, Suite 1001
New York, NY 10001-5304
United States
This framework describes governance practices related to AI-assisted platform functionality and does not represent a guarantee of output accuracy, completeness, regulatory compliance, or legal sufficiency. Users remain responsible for professional review and appropriate use of all outputs generated through the Service.