Artificial Intelligence Acceptable Use Standard

Purpose

This standard establishes comprehensive requirements and guidelines for the acceptable use of artificial intelligence systems at the University of Central Oklahoma. Operating under the authority of the UCO Technology Acceptable Use Policy, this standard provides detailed implementation requirements specific to AI technologies while leveraging the enforcement mechanisms and governance structures established in the parent policy. The standard ensures AI technologies are deployed responsibly, securely, and in full compliance with applicable laws while supporting the university's educational mission and operational objectives.

Scope

This standard applies to all individuals and entities covered under the Technology Acceptable Use Policy when utilizing artificial intelligence systems. The scope encompasses all faculty, staff, students, contractors, and visitors who access or use AI systems through university resources or in connection with university activities. This includes vendor-provided AI capabilities embedded within approved existing software solutions, UCO-approved professional AI service subscriptions, UCO-managed enterprise AI systems, and any approved technology that incorporates artificial intelligence, machine learning, or automated decision-making capabilities. Non-compliance with this standard constitutes a violation of the Technology Acceptable Use Policy and subjects violators to the enforcement mechanisms defined therein.

Authority

The Office of Information Technology establishes this standard under the authority granted by the Technology Acceptable Use Policy approved on June 13, 2023. This standard carries the full weight of university policy through its parent document and is enforceable through the same mechanisms, including access revocation, disciplinary action, and legal remedies as appropriate.

Definitions

Artificial Intelligence (AI): Any system, application, or model that processes data to provide outputs using methods beyond traditional rules-based approaches, including machine learning, natural language processing, and automated decision-making systems.

Restricted Data: UCO Confidential data, including personal data, passwords, security codes, names, dates of birth, Social Security numbers, health information, disciplinary actions, student records, financial records, and other university confidential or sensitive information as defined in the UCO Data Classification Policy.

AI Hallucination: Instances where AI systems generate inaccurate, misleading, or fabricated information presented as factual.

Vendor-Provided AI: Artificial intelligence capabilities embedded within third-party software solutions used by the university.

Enterprise AI: AI systems directly managed and supported by UCO for institutional use.

Acceptable Use Guidelines

Educational and Administrative Functions

The university allows the use of AI systems for legitimate educational, research, and administrative purposes that align with institutional objectives.

Academic applications include utilizing approved UCO AI Services for curriculum development assistance, implementing adaptive learning technologies, conducting research data analysis, automating literature reviews, and providing writing assistance for drafts and editing. Faculty members may leverage AI to enhance instructional design, create personalized learning experiences, and improve student engagement while maintaining academic integrity standards.

Administrative operations can benefit from AI through workflow automation for routine tasks, intelligent document processing and management systems, optimized resource scheduling, advanced data analysis for institutional decision support, and enhanced communication capabilities for official university business. Staff members may use approved AI solutions to improve operational efficiency; however, any AI use must maintain appropriate human oversight and decision-making authority.

Research activities represent a particularly valuable application of AI technologies. Researchers may employ UCO approved AI services and technologies for complex data analysis and visualization, methodology development and validation, collaborative research support across disciplines, and publication preparation assistance. All research applications must maintain the highest standards of research integrity, including proper attribution of AI assistance and transparent disclosure of AI involvement in research processes.

Professional AI Service Usage

Professional AI services provide advanced features including unlimited query processing, access to sophisticated reasoning models, enhanced data processing capabilities, custom model fine-tuning options, and priority access to new features. Users granted professional access to the UCO approved AI solutions should undergo additional training on advanced features and accept heightened responsibility for appropriate use.

Departments seeking enhanced AI capabilities beyond standard institutional access may request professional AI service subscriptions for designated power users through OIT. These requests must include clear justification of need, identification of specific use cases requiring advanced capabilities, and confirmation of available funding. Department heads must approve all professional subscription requests and accept financial responsibility for ongoing costs. The Office of Information Technology manages subscription procurement, billing, and license assignment to ensure appropriate cost recovery and maintain centralized oversight of AI service deployment.

Prohibited Uses

The following activities are strictly prohibited when using AI systems:

Academic Integrity Violations: Generating student assignments or coursework without proper disclosure, unauthorized manipulation of academic records, providing direct answers to assignments, coursework, or examinations, facilitating plagiarism or cheating, and misrepresenting AI-generated content as original student work. Students must follow the directions provided by faculty members regarding AI usage expectations in course syllabi and assignments.

Security and Privacy Violations: Processing restricted data through unauthorized AI systems, attempting to bypass established security controls, sharing university credentials or access codes, using AI to gain unauthorized access to systems or information, and violating user privacy or confidentiality agreements.

Unethical Applications: Creating or distributing discriminatory content, generating harmful, threatening, or harassing material, producing misleading or false information intended to deceive, violating copyright or intellectual property rights, and facilitating illegal activities or policy violations.

System Misuse: Attempting to reverse engineer or manipulate AI models, using AI systems for personal commercial purposes, overloading systems through excessive or inappropriate usage, and sharing access credentials or accounts with unauthorized users.

Data Classification and Protection Requirements

Data Handling Standards

Before processing any information through AI systems, users must properly classify data according to university standards. This classification determines which AI systems may process the information and what safeguards must be in place. Restricted data, including personally identifiable information, protected health information, and confidential university data, may only be processed through UCO approved AI systems that have undergone comprehensive security and compliance review and maintain appropriate technical controls.

Public and internal use data may be processed through approved AI systems, but users must still exercise judgment about the appropriateness of sharing institutional information with external systems. Even when data classification permits AI processing, users should consider whether AI assistance is necessary and beneficial for the specific task.

Use of AI technology constitutes the creation of a work product. All work product produced through the use of AI must be saved, exported, or stored on UCO information systems. For example, if you use AI to assist in creating a work document, that document is subject to all governing university policies and should be saved to your UCO OneDrive or department shared drive.

Input Restrictions

Users must exercise extreme caution regarding information provided to AI systems. Under no circumstances should users input Social Security numbers, credit card information, passwords, protected health information, detailed student records, or other restricted data into any unauthorized AI systems. This prohibition extends to information that could indirectly identify individuals or compromise security when combined with other data.

The university recognizes that determining appropriate data inputs can be challenging. Users should follow the UCO Data Classification Policy for determining the types of data and restrictions on use. When uncertainty exists, users should consult with their supervisors or the UCO Information Security team before proceeding. The potential consequences of inappropriate data exposure far exceed any productivity benefits from AI assistance.

Output Verification

AI systems, despite their sophisticated capabilities, remain prone to errors, biases, and hallucinations. Users bear full responsibility for verifying the accuracy and appropriateness of AI-generated content before incorporating it into official university communications, academic work, or decision-making processes. This verification requirement extends beyond simple fact-checking to include evaluation of tone, bias, completeness, and alignment with university values.

All AI-generated content used in official capacity must include clear attribution indicating AI assistance. This transparency requirement ensures stakeholders understand the role of AI in content creation and can evaluate information accordingly. Users who fail to verify AI outputs or properly attribute AI assistance may face disciplinary action for any resulting errors or misrepresentations.

Security Requirements

Access Control Standards

AI system access requires appropriate authentication through university credentials. Users must not share login credentials or provide access to unauthorized AI services, tools, plug-ins, or functions.

Monitoring and Compliance

The university actively monitors AI system usage to ensure compliance with this standard, identify potential security incidents, and optimize resource allocation. This monitoring includes automated detection of anomalous usage patterns, regular audits of access logs, and investigation of reported concerns. Users should expect that their AI interactions may be reviewed for compliance purposes. The use of technology creates a record of its use, and AI is no different. The university must comply with Oklahoma state statutes, including the Oklahoma Open Records Act, and must manage system information that is subject to discovery and disclosure when required.

Monitoring activities respect user privacy while maintaining security. Users who attempt to circumvent monitoring systems may be in violation of the Technology Acceptable Use Policy.

System Configuration

To maintain system security and stability, users may not modify, disable, or circumvent security controls on AI systems. All configuration changes must follow established change management procedures with appropriate testing and approval. This prohibition extends to browser extensions, third-party integrations, or any modifications that could compromise security controls. It also extends to the use of prompts or data that employ AI services for purposes other than those for which the service was designed.

The Office of Information Technology and vendors regularly update security and system configurations for AI to address emerging threats and vulnerabilities. Users must accept and adapt to these changes as part of their continued access to AI systems.

User Responsibilities

Training and Awareness

Users should complete AI awareness training before accessing AI systems and participate in annual AI refresher training. Users must stay informed about policy updates and changes to AI system capabilities or restrictions.

Responsible Usage

Users must approach AI as a powerful tool requiring thoughtful application rather than a replacement for human judgment and expertise. Responsible usage includes understanding AI limitations, maintaining realistic expectations, and applying appropriate skepticism to AI outputs. Users should use AI to enhance their capabilities rather than abdicate their responsibilities.

Intellectual property considerations require particular attention. Users must respect copyright, provide appropriate attribution, and ensure AI assistance does not infringe on others' rights. The complexity of AI training data and output generation creates unique challenges for intellectual property compliance that users must navigate carefully.

Incident Reporting Requirements

Prompt incident reporting enables rapid response to security threats, policy violations, and system issues. Users must immediately report suspected security incidents, including unauthorized access attempts, data exposure, or system compromises. Policy violations, whether observed or suspected, require similar immediate reporting to prevent continued inappropriate use.

System malfunctions, consistent errors, or concerns about AI accuracy also warrant prompt reporting. Early identification of systemic issues enables corrective action before widespread impact occurs. Users should err on the side of over-reporting, as the Information Technology Help Desk can quickly assess and prioritize concerns.

Compliance and Enforcement

Standard Violations

Violations of this standard constitute violations of the Technology Acceptable Use Policy, subjecting violators to the full range of disciplinary actions defined in that policy. Potential consequences include formal warnings, access revocation, academic sanctions for students, employment actions for staff, and legal prosecution for severe violations. The university applies progressive discipline when appropriate but reserves the right to impose severe sanctions for egregious offenses.

Enforcement actions consider factors including violation severity, user intent, actual harm caused, and cooperation with investigations. Users who self-report violations or assist in identifying systemic issues may receive more lenient treatment than those who attempt concealment or obstruction.

Regulatory Compliance

The complexity of AI systems intersects with numerous regulatory requirements that users must respect. The Family Educational Rights and Privacy Act (FERPA) governs any AI use involving student records. Health Insurance Portability and Accountability Act (HIPAA) requirements apply when protected health information is involved. The Gramm-Leach-Bliley Act (GLBA) includes a privacy rule and technical safeguard requirements for financial records, information, and related systems. The Americans with Disabilities Act (ADA) accessibility requirements ensure AI systems remain usable by all community members.

Oklahoma state statutes, along with any AI-specific requirements, add additional compliance obligations. Federal research regulations may apply to AI use in sponsored research. Users bear responsibility for understanding which regulations apply to their specific use cases and ensuring full compliance.

Risk Assessment Requirements

Before deploying new AI systems or significantly modifying existing implementations, comprehensive risk assessments must be performed to evaluate security implications, privacy impacts, regulatory compliance, and ethical considerations. These assessments follow established university procedures and involve appropriate stakeholders including Information Security, Legal Counsel, and affected departments.

Risk assessments are not one-time events but part of continuous improvement processes. Regular reviews ensure continued alignment with university risk tolerance and evolving threat landscapes. Users should expect and support these ongoing assessments as essential to maintaining secure and compliant AI operations.

Governance

Approval Processes

The structured approval process for AI systems ensures appropriate oversight while enabling innovation. New enterprise AI systems require review through the Technology Advisory Committee, including technical architecture evaluation, security assessment, and business case validation. Vendor-provided AI capabilities undergo similar scrutiny through established procurement and technology review processes.

Professional AI subscriptions follow a streamlined approval process through departmental channels, recognizing that these services undergo vendor security reviews while still requiring appropriate institutional oversight.

Review and Updates

This standard undergoes comprehensive annual review to address technological advances, regulatory changes, and institutional needs. The review process includes stakeholder consultation, benchmarking against peer institutions, and incorporation of lessons learned from incidents or violations. Updates may occur more frequently when significant changes warrant immediate action.

Communication of updates follows established university channels, including direct notification to affected users, posting on official websites, and incorporation into training materials. Users bear responsibility for maintaining awareness of current requirements and cannot claim ignorance of published updates.

Approval

This standard is established and maintained by the Office of Information Technology under authority delegated through the Technology Acceptable Use Policy, the UCO Information Security Policy, the UCO Data Classification Guide, the Information Communications Technology Accessibility Policy, and the Oklahoma Records Disposition Schedule for Colleges and Universities. Regular reviews ensure continued alignment with institutional objectives and evolving technology landscapes.

Change Log

Version | Date      | Description
2.0     | 7/30/2025 | Final Edits to the second draft completed - MG

Approvals

Approved By                                  | Date     | Description
Sonya Watkins, UCO Chief Information Officer | 8/6/2025 | Initial Standard Release