Artificial Intelligence Policy
Effective Date: 12/01/2025
AI-001
Purpose Statement
The purpose of this Artificial Intelligence (AI) Policy is to establish guidelines and ethical standards for the development, deployment, and utilization of AI technologies within The University of Texas Rio Grande Valley (UTRGV). This policy aims to ensure that AI applications enhance academic, research, medical school and health training programs, medical clinical practices, and medical facilities while respecting legal requirements, ethical norms, and international standards. UTRGV fosters innovation while protecting privacy, fairness, security, and academic integrity. The University adopts principles of fairness, transparency, accountability, human oversight, privacy-by-design, security-by-design, and continuous improvement. The Code of Ethics for Responsible AI is incorporated within this document.
Scope and Applicability
This policy applies to faculty, staff, students, contractors, and vendors who design, procure, configure, or deploy AI systems for university purposes, including pilots and research projects with operational impact.
Definitions
To ensure a clear understanding and consistent application of this policy, the following definitions are provided:
- Artificial Intelligence (AI): A branch of computer science that involves the creation of systems capable of performing tasks that typically require human intelligence. Artificial Intelligence (AI) refers to the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions), problem-solving, self-correction, and understanding natural language. AI applications can include expert systems, natural language processing (NLP), speech recognition, and machine vision.
- Machine Learning (ML): A subset of AI that provides systems the ability to automatically learn and improve from experience without being explicitly programmed. Machine learning focuses on developing algorithms that can process large amounts of data to identify patterns and make decisions.
- Deep Learning: A specialized form of machine learning that uses neural networks with many layers (hence "deep") to analyze various levels of abstraction in data. It is particularly effective in tasks such as image and speech recognition.
- Algorithm: A set of rules or instructions given to an AI system to help it learn, make decisions, or solve problems.
- Neural Network: A series of algorithms that attempt to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates. It is the foundation of deep learning models.
- Natural Language Processing (NLP): A field of AI that focuses on the interaction between computers and humans through natural language. The goal is to enable computers to understand, interpret, and respond to human language in a valuable way.
- Ethics in AI: A framework of principles and guidelines to ensure that AI technologies are developed and used in ways that are fair, transparent, accountable, and beneficial to society.
- Bias in AI: Systematic and unfair discrimination in AI systems that can arise from biased data, algorithms, or practices, leading to unjust outcomes.
- Data Protection: Measures and processes to ensure the privacy and security of data used in AI systems, adhering to relevant legal and ethical standards.
- Artificial General Intelligence (AGI): A type of AI that has the ability to understand, learn, and apply knowledge in a way that is indistinguishable from human intelligence. AGI can perform any intellectual task that a human can do.
- Supervised Learning: A type of machine learning where the model is trained on labeled data, meaning the input data is paired with the correct output. The model learns to make predictions based on this training data.
- Unsupervised Learning: A type of machine learning where the model is trained on unlabeled data and must find patterns and relationships in the data without any guidance on what the output should be.
- Reinforcement Learning: A type of machine learning where an agent learns to make decisions by taking actions in an environment to maximize some notion of cumulative reward.
- Computer Vision: A field of AI that enables computers to interpret and make decisions based on visual data from the world, such as images and videos.
- Robotics: The branch of technology that deals with the design, construction, operation, and application of robots, often incorporating AI to enable autonomous behavior.
- Generative AI: A type of AI that can create new content, such as text, images, or music, based on the data it has been trained on. Examples include language models like GPT-3 and image generation models like DALL-E.
- Explainable AI (XAI): AI systems designed to provide human-understandable explanations for their decisions and actions. This is crucial for ensuring transparency and trust in AI systems.
- Federated Learning: A machine learning technique that trains an algorithm across multiple decentralized devices or servers holding local data samples, without exchanging them. This helps in maintaining data privacy.
- Personal Data: Personal data refers to any information that is linked or reasonably linkable to an identified or identifiable individual, including names, contact details, identification numbers, and digital identifiers. Under Texas law, this also encompasses sensitive data such as racial or ethnic origin, religious beliefs, health information, biometric identifiers, precise geolocation, and data of children under 13. For AI systems, personal data includes any such information used for automated decision-making, profiling, or inference, requiring compliance with the Texas Data Privacy and Security Act (TDPSA) and the Texas Responsible Artificial Intelligence Governance Act (TRAIGA).
By providing these definitions, UTRGV aims to create a shared language for discussing and implementing AI policies and practices.
Roles and Governance
UTRGV designates an AI Risk Officer (AIRO) and establishes an AI Governance Committee composed of representatives from Legal, Privacy, Human Resources, Information Security, Information Technology, Research, Healthcare, Procurement, Academic Affairs, and other areas of the institution as deemed necessary.
- The AIRO ensures that an AI Inventory is established and maintained, classifies risk, coordinates assessments and monitoring, and reports on incidents and program maturity.
- The AI Governance Committee sets standards, reviews heightened‑risk deployments, approves assessment instruments, ensures mandatory training is provided, and assists in establishing required controls.
- The AI Governance Committee is chaired by the AIRO, who establishes the committee’s structure.
- The AI Governance Committee does not replace existing governance structures; it serves to enhance overall governance by ensuring the necessary guardrails are in place related to artificial intelligence use at UTRGV.
- Units remain responsible for the ethical and compliant use of AI within their operations.
- Users are responsible for abiding by all established laws, rules, regulations, policies, and other university requirements related to the ethical and appropriate use of AI.
Principles and General Guidelines
The University of Texas Rio Grande Valley (UTRGV) is committed to fostering a responsible and ethical environment for the development, deployment, and utilization of Artificial Intelligence (AI) technologies. By following these principles, UTRGV aims to promote transparency, accountability, fairness, and respect for privacy and intellectual property. All users are expected to employ AI technologies responsibly and to contribute to a culture of ethical AI practices.
- Ethical Use: AI technologies must be developed and used in ways that are consistent with ethical standards, human rights, and applicable laws. All users must employ AI technologies in an ethical manner that aligns with the University's code of conduct and values. Discriminatory, biased, malicious, or harmful use of AI is strictly prohibited. All AI development projects, applications, or uses of AI-driven automated decision-making must undergo an ethical review by the data owner to assess their potential impact on individuals, communities, and society at large. Ethical use issues should be directed to the appropriate university office for review and resolution.
- Privacy and Data Protection: Personal data used in AI applications must be handled with the utmost care. Users must respect the privacy rights of individuals and handle personal, sensitive, and confidential data in accordance with relevant laws, regulations, and University policies.
- Intellectual Property: Users are expected to follow all UTRGV and UT System guidance and all applicable intellectual property and copyright laws, including respecting intellectual property rights related to AI technologies, including software, algorithms, models, and datasets. Unauthorized distribution, copying, or modification of AI-related resources is prohibited.
- Transparency and Accountability: The development and use of AI systems must be transparent, and the responsibility for AI system outcomes must be clearly defined. Any decision-making process involving AI must be explainable and accountable. The following must also be addressed:
- Provide clear user/public notices where applicable and do not misrepresent AI as human.
- Label synthetic media and use content provenance measures (e.g., watermarking/metadata) where feasible for generative outputs.
- Maintain system/model cards for applicable public‑facing systems.
- Offer complaint and redress channels with published response expectations.
- Security: Users must take measures to ensure the security of AI systems, including protecting access to credentials, implementing security patches, and following all laws, regulations, UTRGV policies, UT System requirements, Information Security Office requirements, and industry best practices to prevent unauthorized access or breaches.
- Fairness and Non-Discrimination: AI algorithms and models must be designed and tested to minimize bias and discrimination. They should promote fairness and equal treatment for all individuals and groups.
- Stakeholder Involvement: To ensure a more inclusive and comprehensive approach, the involvement of relevant stakeholder groups is essential. Where appropriate, the participation of students, faculty, staff, administrators, regulators, and other affected external parties deemed appropriate is encouraged.
- Education and Research: UTRGV encourages education and research in AI and related fields. We promote a culture of responsible AI development and dissemination of AI knowledge.
Responsible AI Usage
- Bias and Fairness: Users should strive to eliminate bias from AI systems and promote fairness in their applications. Regularly assess and mitigate bias to ensure equitable outcomes. Developers must actively work to identify and mitigate bias in AI models and algorithms. Continuous monitoring and testing are essential to ensure fairness.
- Accuracy and Reliability: Users are responsible for ensuring that any AI-generated content they choose to use has been independently verified for accuracy and reliability. While users cannot guarantee the correctness of AI outputs, they must exercise due diligence by reviewing, validating, and confirming the content before applying it in any critical or official context.
- Accountability: Individuals using AI technologies are accountable for their actions and decisions facilitated by AI systems. Do not solely rely on AI recommendations when making important choices.
- Data Usage: Data used in AI applications must be collected in accordance with existing UTRGV and UT System guidance and all applicable laws including appropriate notices or consents. Contact the Information Security Office for further guidance related to the acceptable use of data in AI systems.
- Security and Privacy: Adequate security measures must be in place to protect AI systems and the data they use. Privacy must be upheld throughout the AI lifecycle.
- Accessibility: AI applications and systems should be designed to be accessible to all, including individuals with disabilities.
Prohibited Activities
- Users must refrain from engaging in any conduct that violates UTRGV, UT System, State of Texas, or United States policy, regulation, or law.
Permissible Data in AI and Data Protection Requirements
- Controlled and Confidential Data: Controlled and confidential data may only be used with approved AI tools that have been evaluated by IT and the Information Security Office (ISO). This ensures compliance with relevant laws, regulations, and University policies regarding sensitive information.
- Public Data: Public data can be used in various AI tools, but all AI tools must undergo and pass the necessary IT and ISO technology assessment processes. This guarantees secure and responsible handling of even non-sensitive data.
Risk and Impact Assessments (Heightened‑Risk)
AI systems can introduce new risks when deployed—such as security vulnerabilities, inaccurate outputs, or operational failures. Completing an AI Risk Assessment before deployment and after major changes helps identify and mitigate these risks early. This ensures the system is safe, reliable, and aligned with organizational standards, reducing the chance of harm to users, data, or operations.
- Before deployment and after material changes, complete an AI Risk Assessment (security risks, limitations, foreseeable harm(s), and operational metrics: accuracy, latency, uptime, error rate).
- Complete an AI Impact Assessment addressing stakeholders, training data description, ownership of monitoring and cadence, and retention/deletion methods.
- Provide a Human Oversight Plan for consequential uses (reviewer qualifications, QA sampling, escalation thresholds, authority to stop/override).
- Obtain approval from the AI Governance Committee and AIRO sign‑off prior to deployment.
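The operational metrics named in the assessment steps above can be recorded in a structured form before AIRO sign-off. The following Python sketch is purely illustrative: the field names and thresholds are assumptions for this example, not values prescribed by this policy.

```python
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    """Hypothetical record of the operational metrics in an AI Risk Assessment."""
    system_name: str
    accuracy: float      # fraction of correct outputs on the evaluation set (0-1)
    latency_ms: float    # median response time in milliseconds
    uptime_pct: float    # observed availability over the monitoring window
    error_rate: float    # fraction of failed or invalid responses (0-1)

    def meets_thresholds(self, min_accuracy=0.95, max_error_rate=0.02,
                         min_uptime=99.5) -> bool:
        """Return True only if every operational metric clears its threshold."""
        return (self.accuracy >= min_accuracy
                and self.error_rate <= max_error_rate
                and self.uptime_pct >= min_uptime)

# Example: a pilot system that clears the illustrative thresholds.
assessment = RiskAssessment("advising-chatbot-pilot", accuracy=0.97,
                            latency_ms=850.0, uptime_pct=99.9, error_rate=0.01)
print(assessment.meets_thresholds())
```

In practice, units would attach a record like this to the assessment submitted for AI Governance Committee approval; the actual instrument is defined by the Committee, not by this sketch.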
Evaluation and Adversarial Testing
- Define and execute evaluation plans covering accuracy, robustness, fairness, toxicity, and privacy leakage with thresholds appropriate to context.
- Conduct adversarial testing for prompt injection, jailbreaks, model extraction, data poisoning, and privacy leakage; document mitigations within secure SDLC.
- Additional design considerations can be found in the Systems Design Guidelines, individual division or college AI policies, the UTRGV Information Technology Design Standards, the UTRGV Security Program Manual, and applicable accessibility standards.
Monitoring and Incident Response
- Monitor production Key Performance Indicators (KPIs) such as accuracy, fairness, toxicity, latency, and uptime; set rollback/retraining triggers and sunset criteria. KPIs serve as benchmarks to track progress toward goals or outcomes.
- Maintain incident taxonomy and contact matrix; notify regulators/partners as applicable and conduct post‑incident reviews.
- Reassess heightened‑risk systems at least semiannually and after material changes; maintain required records and logs.
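The rollback/retraining triggers described above can be expressed as simple threshold checks against production KPIs. The sketch below is a hypothetical illustration: the KPI names and trigger values are assumptions for this example and would be set per system by the owning unit.

```python
# Illustrative only: KPI names and thresholds are assumptions, not mandated values.
ROLLBACK_TRIGGERS = {
    "accuracy": lambda v: v < 0.90,    # rollback/retrain if accuracy drops below 90%
    "toxicity": lambda v: v > 0.05,    # flagged-output rate above 5%
    "latency_ms": lambda v: v > 2000,  # median latency above 2 seconds
    "uptime_pct": lambda v: v < 99.0,  # availability below 99%
}

def check_kpis(metrics: dict) -> list:
    """Return the names of KPIs whose current value breaches its trigger."""
    return [name for name, breached in ROLLBACK_TRIGGERS.items()
            if name in metrics and breached(metrics[name])]

# Example: an accuracy regression trips its trigger; the other KPIs are healthy.
breaches = check_kpis({"accuracy": 0.88, "toxicity": 0.01,
                       "latency_ms": 450, "uptime_pct": 99.8})
print(breaches)
```

A breach would feed the incident process described above: notify the system owner, execute the rollback or retraining step, and record the event for post-incident review.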
AI Inventory and Risk Classification
All AI systems must be registered in the AI Inventory maintained by the Information Security Office prior to pilot or deployment. Systems are classified by risk through the Information Security Office’s Risk Assessment process; ‘heightened‑risk’ systems include those that may materially affect rights, opportunities, safety, or access to services.
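An inventory entry pairing each registered system with its risk classification also makes the semiannual reassessment cadence for heightened‑risk systems easy to track. The Python sketch below is hypothetical: the actual register is maintained by the Information Security Office, and its fields may differ.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class InventoryEntry:
    """Hypothetical AI Inventory record; illustrative field names only."""
    system_name: str
    owner_unit: str
    last_assessed: date
    heightened_risk: bool  # may materially affect rights, opportunities, safety, or access

    def reassessment_due(self, today: date) -> bool:
        """Heightened-risk systems are reassessed at least semiannually (~182 days)."""
        return self.heightened_risk and (today - self.last_assessed).days >= 182

# Example: a heightened-risk system assessed in January is due again by September.
entry = InventoryEntry("admissions-screening-model", "Enrollment Services",
                       date(2025, 1, 15), heightened_risk=True)
print(entry.reassessment_due(date(2025, 9, 1)))
```

Material changes would also trigger reassessment regardless of elapsed time, per the monitoring requirements above.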
Compliance with Federal, State, and UT System Regulations
AI use at UTRGV must comply with relevant Federal laws, State of Texas regulations, UT System policies, and UTRGV rules. Key references include:
- Federal Regulations: The policy adheres to guidelines from the Federal Trade Commission (FTC) regarding AI and data protection.
- State Regulations: Compliance with Texas state laws and regulations on data protection, privacy, technology, and AI development.
- UT System Policies: Aligning with UT System’s policies on technology use and ethical standards.
- UTRGV Rules: Incorporating UTRGV-specific policies on academic and research integrity, UTRGV Information Technology Standards, Information Security Program Manual and other Information Security Standard, and all other UTRGV policies related to systems and services, as applicable.
International Requirements and GDPR Compliance
UTRGV recognizes the importance of adhering to international AI regulations, including European AI rules and the General Data Protection Regulation (GDPR). Key considerations include:
- GDPR Compliance: AI systems must comply with GDPR requirements regarding data protection and privacy.
- European AI Rules: AI development and use must align with European ethical standards and guidelines.
- Global Standards: Incorporating best practices from international AI frameworks and guidelines, as well as legal and regulatory standards as applicable.
When personal data are processed, UTRGV applies privacy-by-design and security-by-design, conducts necessary privacy reviews, and respects the rights of individuals.
- Where GDPR applies, complete a DPIA addendum covering the lawful basis for each purpose and, if special-category data are processed, the applicable Art. 9(2) condition.
- Where the EU AI Act high‑risk/GPAI requirements apply, attach the EU module with QMS linkage, a conformity‑assessment documentation index, post‑market monitoring, serious‑incident reporting, and supply‑chain role mapping.
Additional Requirements
- All non-student UTRGV account holders with access to UTRGV resources must complete AI training provided by the Information Security Office annually.
- All systems offering AI technology as part of their service, application, appliance, or software must undergo a Technology Assessment through UTRGV approved workflows before purchase (including open-source or free products or services) and at required intervals in accordance with state or federal laws, rules, regulations, UT System policy, or UTRGV policy.
- The development of AI solutions must comply with all approval and change management procedures as stipulated by law, regulation, UTS 165, and UTRGV policy. Additionally, all development work must adhere to the guidelines set forth in the Implementation Steps and Procedures for AI Technologies and any supplementary requirements issued by the Information Security Office.
- Procurement and Vendor Requirements must be followed related to the purchase of any AI related items and must, at a minimum, include:
- Contracts must require alignment with NIST AI RMF, transparency documentation (system/model cards), material change notifications, security posture, and audit rights.
- For generative AI, require content provenance capabilities where feasible and fairness/robustness evaluation evidence.
- Ensure periodic reassessment for heightened‑risk vendor systems.
This policy is intended to guide the responsible and ethical use of AI technologies at UTRGV, ensuring that AI contributes positively to academic and research endeavors, medical practices, and the overall welfare of the university community. Non‑compliance may result in suspension of system access or use and other actions consistent with university policy.
Related and Reference Documents
- NIST AI 600-1 – Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile
- NIST AI RMF Playbook
- EU Regulation 2024/1689 – Artificial Intelligence Act
- EU-GDPR – Article 27, Article 37, Article 39
- Texas Senate Bill 1964 – TAC 219 (Proposed), Sec. 2054.701 & 2054.003
- IAPP AI Governance Best Practices Report 2024
- IAPP Responsible & Ethical AI Guidance
Review and Updates
This policy will be reviewed at least annually or upon significant regulatory or technological change.