Secure Use of Artificial Intelligence (AI) Tools

Purpose

AI tools, including but not limited to ChatGPT, Gemini, Copilot, and DALL-E, have proven useful for generating written and visual content, and their emergence requires clear standards of use. This standard provides guidance on how to use these tools safely without putting institutional, personal, or proprietary information at risk.

Scope

This standard applies to all users who process, store, or transmit university data. While these tools can be helpful in both education and university work, the accidental or deliberate introduction of sensitive data into them could create organizational, legal, or even regulatory risks for the university. For example, Ohio University (OHIO) has no recourse for holding externally hosted AI tools accountable for how they store or use data, and these tools may be hosted outside of Ohio's legal jurisdiction. Unlike vendors that undergo vetting before implementation in support of OHIO business needs, publicly available AI tools give our administrators no way to enforce standard data governance, risk management, and compliance requirements.

Standard

Allowable Use:

  • Publicly available or low impact information as outlined in Ohio University’s Data Classification Policy may be used freely within AI tools.
  • At present, OHIO’s paid version of Microsoft 365 entitles OHIO users to use Microsoft Copilot at no additional charge. Data entered into this version is not used to train the underlying large language models. Copilot can be accessed by going to https://copilot.microsoft.com/ and clicking the sign-in button in the top right corner of the screen. After logging in with your OHIO credentials, you should see a green shield labeled "Protected" in the top right corner of the screen.

Prohibited Use:

  • At present, AI tools must be used under the assumption that no personal, confidential, proprietary, or otherwise sensitive information may be entered into them. In general, sensitive information includes, but is not limited to, the following:
    • Student records subject to FERPA
    • Data whose disclosure is restricted by law, regulation, or policy. Such data includes, but is not limited to, data subject to HIPAA, PCI DSS, GLBA, GDPR, Export Controls, and identifiable human subject research.
    • Any other information classified as medium or high impact data as outlined in Ohio University’s Data Classification Policy.
  • Similarly, AI tools must not be used to generate output that would be considered non-public. Examples include, but are not limited to: proprietary or unpublished research; legal analysis or advice; recruitment, personnel, or disciplinary decision making; completion of academic work in a manner not allowed by the instructor; creation of non-public instructional materials; and grading.
  • Please also note that OpenAI explicitly forbids the use of ChatGPT and its other products for certain categories of activity, including fraud and illegal activities. The full list can be found in OpenAI’s usage policy document.

Definitions

Sensitive data: data classified at the medium or high impact level that must be protected against unauthorized disclosure. Additional information can be found in University Policy 93.001, Data Classification, and on the Information Security website.

References

  1. NIST 800 Series Publications
  2. NIST AI Risk Management Framework
  3. Policy 91.003 Data Classification
  4. Policy 91.005 Information Security
  5. Third-Party Vendor Risk Management Standard
  6. Center for Teaching, Learning, & Assessment Generative AI Resources
  7. OpenAI Usage Policy

Exceptions

All exceptions to this standard must be formally documented with the Ohio University Information Security Office (ISO) prior to approval by the Information Security Governance Committee (ISGC). Approved exceptions will be reviewed and renewed periodically by the ISO.

Request an exception: complete the Exception Request Form.

Governance

This standard will be reviewed and approved by the university Information Security Governance Committee as deemed appropriate based on changes in the technology landscape and/or changes to established regulatory requirements.

Reviewers

The reviewers of this standard are the members of the Information Security Governance Committee representing the following University stakeholder groups: 

  • Information Technology: Ed Carter (Chair)
  • Human Resources: Michael Courtney
  • Faculty: Hans Kruse
  • Faculty: Brian McCarthy
  • Finance and Administration: Julie Allison
  • Associate Dean: Shawn Ostermann
  • Regional Higher Education: Larry Tumblin
  • Research and Sponsored Programs: Sue Robb
  • Enterprise Risk Management and Insurance: Larry Wines

History

Draft versions of this standard were circulated for review, and the standard was approved on November 2, 2023.

Non-substantive changes and additional clarifying language were added on April 24, 2024.