Proposed AI Guidelines

I. Purpose

Artificial Intelligence offers opportunities to develop Villanovans in both personal and professional ways. As we prepare students for their future role in society, Villanova has an obligation to equip students and graduates, as well as employees and researchers, with the tools they need to navigate a world of emerging and rapidly changing technologies.

These Guidelines aim to foster an ethical and responsible approach to AI, balancing innovation with the core principles of integrity, accountability, confidentiality, security, and privacy.

II. Scope

These Guidelines apply to the use of AI tools by all members of the Villanova University community including students, faculty, staff, and vendors. This document is not a final policy but will be updated as AI technology, regulations, and industry practices evolve. It is intended to complement existing policies on ethics, privacy, security, and compliance.

III. Types of Artificial Intelligence

  • AI System: Any system, software, hardware, application, tool, or utility that operates in whole or in part using AI.
  • Generative AI: Deep-learning models that generate outputs based on the data they are trained on. These are machine-based tools that take user questions, prompts, and other inputs (for example, text, images, or videos) and produce human-like output (for example, a response to a question, a written document, software code, or a product design). Generative AI includes both standalone offerings such as ChatGPT and offerings embedded in other software, such as GitHub Copilot.

IV. Guiding Principles for Responsible AI Use

1. Confidentiality, Security, and Privacy

AI tools should be used in accordance with the Acceptable Use and Data Classifications policies. Do not enter data that is classified as Restricted or Private, including non-public student data and information subject to federal or state laws or regulations, such as student education records under the Family Educational Rights and Privacy Act (FERPA) and patient data under the Health Insurance Portability and Accountability Act (HIPAA).

Generative AI tools can store data from user interactions to improve their systems, raising concerns about sensitive information like student records and proprietary research or institutional information. Any information provided to third-party, public generative AI tools is considered public and may be stored and used by the third party. For example, using a personal ChatGPT account is considered a public AI tool.

Sensitive data should not be entered into generative AI tools without the tool having been assessed and approved by the Office of Information Security and a contract in place that has been reviewed by the Office of General Counsel.

2. Bias

When using AI, keep in mind that these tools are often trained on large, unmoderated bodies of text, such as text posted to the internet. This can result in the production of biased and other unintended content, and techniques for detecting and mitigating such bias are still in the early stages of development. For example, certain academic integrity detection tools are known to be biased, incorrectly flagging writing by students whose first language is not English.

3. Transparency

Be transparent about the use of AI. Disclose when a work product was created wholly or partially using an AI tool and, if appropriate, how AI was used to create the work product. Non-disclosure could be considered misrepresentation or plagiarism if the AI-generated content is presented as human work.

4. Accountability

All students, faculty, and staff are responsible for any content published, shared, or otherwise developed that includes any AI-generated material. AI-generated content may contain copyrighted material, and can be inaccurate, false, biased, misleading, fabricated (hallucinations), and outdated. Always verify the information and exercise caution.

5. Data Scraping

The rise of AI models has led to a significant increase in individuals and organizations scraping (copying) information posted on the internet for the purpose of training new AI models. Be aware that any data posted publicly will likely be scraped and used in this way by third parties. While these practices are common, the law governing them is still developing, and their legality and potential consequences remained unresolved at the time this guidance was issued.

6. Equity and Accessibility

Villanova University is committed to ensuring that AI tools are accessible and equitable for all members of the community. AI tools should be used in ways that do not create or reinforce systemic disadvantages for individuals based on race, gender, disability, socioeconomic status, or other protected characteristics.

When using AI in academic or administrative settings, consider the needs of users with disabilities and ensure compliance with accessibility standards, such as the Americans with Disabilities Act (ADA) and the Web Content Accessibility Guidelines (WCAG 2.1). Faculty and staff should strive to provide alternative methods of engagement for students and employees who may have limited access to AI technologies.

7. Intellectual Property

AI-generated content may raise questions about ownership, authorship, and copyright. Users must be mindful that AI models are trained on large datasets, which may include copyrighted material. Faculty, students, and staff should assume that AI-generated text, images, and code may not be eligible for copyright protection or may inadvertently infringe on existing copyrights.

Any AI-assisted content should be properly attributed, and users should consult the University’s Intellectual Property Policy and Copyright Infringement and Illegal File Sharing Policy before incorporating AI-generated materials into their work. Additionally, faculty and researchers should be aware that some publishers and funding agencies may have specific guidelines regarding AI-generated content.

8. Critical Thinking and Skill Development

While AI tools can enhance productivity and creativity, they should not replace critical thinking and foundational learning. Students, faculty, and staff should approach AI-generated outputs with a discerning mindset, verifying the accuracy and reliability of AI-generated information before use. Overreliance on AI tools for tasks such as writing, coding, or research analysis may impede skill development and independent thought.

The University encourages a balanced approach where AI is used as a tool rather than a substitute for human cognition and problem-solving.

9. Safety

The use of AI tools should prioritize safety and well-being. AI-generated content, including deepfakes and synthetic media, can be used maliciously to deceive, manipulate, or harm individuals. Faculty, students, and staff should be cautious when engaging with AI-generated content and should report any suspected misuse to the Office of Information Security.

Additionally, AI-driven automation in decision-making should be carefully evaluated to ensure that it does not pose risks to individuals or communities. Any AI applications used in research, healthcare, or administrative decision-making should undergo ethical review and risk assessment.

Deepfakes: Rights in Likeness and Voice

AI-generated deepfakes can create realistic but false representations of individuals, including their likeness and voice. The unauthorized creation or use of deepfake content can violate privacy rights, intellectual property laws, and ethical standards. Community members should not create or distribute deepfake content without explicit consent from the individuals depicted.

Any AI-generated media used for research, education, or creative projects should be clearly labeled as synthetic content to prevent misinformation or deception.

10. AI Notetaking Tools

AI notetaking tools have become increasingly popular. If a student, faculty member, employee, or researcher intends to use such a tool and it records spoken conversations, they must provide notice, such as including it in a syllabus or announcing its use before recording. If an individual objects to being recorded, an alternative note-taking method should be offered.

Additionally, users should be mindful of potential inaccuracies in AI-generated notes and review them before widespread distribution. The guiding principles outlined above also apply to AI notetaking tools.

Additional Guidelines for Students, Educators, and Researchers

In the Classroom

Villanova University does not restrict the use of AI tools in the classroom. Faculty and instructors may choose to prohibit, to allow with attribution, or to encourage generative AI use. In any case, AI should be used in a manner consistent with the Code of Academic Integrity and the Code of Student Conduct.

Faculty and instructors should be clear with students about their policies on permitted use, if any, of generative AI in class and on academic work. Students are expected to seek clarification from faculty as needed. Individual colleges, departments, or instructors may have specific guidance. Students should comply with the Student AI Guidelines as they use AI-generated material for their coursework.

Faculty members using AI tools in the classroom should include a statement on permitted AI use within the course syllabus.

In the Workplace

Employees shall use only AI tools that meet organizational security standards, shall not input any personally identifiable or confidential information, and shall fact-check AI-generated content before using it in official University documents. Employees remain accountable for their work products even if AI was used to assist in their creation.

They should disclose if AI was used to assist in the creation of official documents and formal reports, as nondisclosure could be considered misrepresentation or plagiarism if the AI-generated content is mistaken for human work.

In Research

Researchers are responsible for the accuracy of any Generative AI-created content included in research outputs, as Generative AI has been found to fabricate citations to papers that do not exist.

Further, researchers should avoid inputting unpublished research work into an AI tool, as the unpublished work may lose intellectual property protections or may give rise to privacy violations if personally identifiable information is included. Finally, researchers should avoid inputting other parties' confidential information, as doing so could breach confidential contractual commitments.

All University research is subject to the University's research integrity policies.