
AI (Artificial Intelligence): Institutional Principles and Policy Frameworks

Responsible and Effective Use of Artificial Intelligence in Academic Research and Writing

How This Connects to the Responsible and Ethical Use Page

This section complements the Responsible and Ethical Use page by providing the institutional context and policy frameworks that support those practical recommendations. It includes:

  • Guiding principles (transparency, accountability, fairness, etc.)
  • Alignment with UNISA’s Draft AI Policy and Academic Integrity Policy
  • Library-specific AI use cases (e.g., information literacy, research support)
  • Unacceptable uses and best practices checklist
  • Copyright and legal considerations in the South African context

Here's how the two sections align:

Theme | Student Guidelines (practical advice) | Institutional Framework (policy and ethical foundation)
Academic Integrity | Cite AI tools, be transparent, avoid plagiarism | UNISA Academic Integrity Policy, ethical conduct, disciplinary measures
Transparency | Disclose AI use in submissions | Guiding principle: transparency in AI decision-making
Privacy | Don’t input sensitive data into AI tools | POPIA compliance and institutional data policies
Fair Use | Use AI as a helper, not a replacement | Human oversight and fairness as guiding principles
Verification | Check facts and watch for bias | Promote critical evaluation and interpretative oversight
Copyright | Use AI-generated content responsibly | Copyright law guidance and ownership considerations
Library Support | Use AI for writing help and literature reviews | Library services for AI in research, writing, and discovery

 

Institutional Frameworks for Ethical AI Use at UNISA

These institutional frameworks describe the foundational principles and policies that guide the ethical and effective deployment of Artificial Intelligence (AI) tools to enhance teaching, learning, research, and library services at Unisa.

It reinforces the Library's dedication to the responsible integration of AI and is consistent with Unisa's strategic focal points: the Fourth Industrial Revolution (4IR), student support, digitalisation, and lifelong learning.
By establishing guidelines for AI use, it ensures adherence to ethical standards and aligns with the university’s institutional values, with a focus on improving educational outcomes.

The framework aims to:

  • Promote ethical and responsible use of AI tools in the academic environment.
  • Support students and staff in AI literacy and digital scholarship.
  • Ensure AI integration aligns with academic integrity and institutional values.
  • Provide guidance for library staff on AI tools, services, and instruction.

It rests on the following guiding principles:

  1. Transparency: Users should understand how AI tools work and how decisions are made.
  2. Accountability: Human oversight remains essential; AI is a tool, not a decision-maker.
  3. Privacy: Data used in AI tools should comply with POPIA and institutional policies.
  4. Fairness: Ensure AI does not perpetuate bias or exclusion.
  5. Academic Integrity: AI should not be used to plagiarize, fabricate, or misrepresent academic work.
UNISA AI Policy Alignment
 

Definition and Purpose

Academic integrity at Unisa is defined as a commitment to honesty, trust, fairness, respect, responsibility, and truthfulness in teaching, research, and community engagement.
The policy supports Unisa's vision of being an African university shaping futures in the service of humanity.

Core Values
Quality: Teaching and research must meet national and international standards.
Good Practice: Includes proper referencing, data handling, and discipline-specific ethical standards.
Ethical Conduct: Applies to all academic activities and is meant to be educational rather than punitive.

Disciplinary Measures
Academic dishonesty (e.g., plagiarism, fraud) is addressed through disciplinary procedures.
Students and staff are subject to different processes: the Registrar handles student cases, while Human Resources manages staff issues.

You can read the full policy document here.

Access the Academic Integrity course.

 

Library-Specific AI Use Cases

Service Area | AI Application | Library Role
Information Literacy | AI tools to recommend sources or refine topics | Promote critical evaluation of AI-generated information
Academic Writing | Grammar, citation, or paraphrasing tools | Educate on the boundaries of acceptable tool usage
Research Support | AI for data analysis, summarisation, and synthesis | Ensure data quality and interpretative oversight
Search and Discovery | Chatbots or semantic search engines | Combine AI with traditional search strategies
Reference Management | AI-generated citations (e.g., from ChatGPT) | Verify accuracy and formatting

 

Unacceptable Uses include, but are not limited to: 

  • Using AI tools to complete assessments without permission.
  • Submitting AI-generated content as original thought.
  • Relying on AI for factual accuracy without verification.
  • Inputting sensitive or private data into external AI tools.

Before using AI in your work, ask yourself the following questions:

 

  1. Have I evaluated the accuracy of AI-generated content?
  2. Have I cited the tool if it influenced my work?
  3. Did I use AI to support—not replace—my own academic thinking?
  4. Have I checked that the tool is compliant with data privacy rules?

For more guidance and information on evaluating AI-generated content, visit the Evaluating AI Outputs section of this guide.