Generative AI and Large Language Models (LLMs)

Understanding LLMs (e.g., ChatGPT) and the concept, use, and ethics of Generative AI tools and platforms.

Generative AI and Academic Integrity

Using Generative AI tools is not cheating if a faculty member has approved their use in a class assignment.

However, if these tools have not been approved, students risk violating NYU's Academic Integrity Policy, which defines cheating as:

"Deceiving a faculty member or other individual who assess student performance into believing that one’s mastery of a subject or discipline is greater than it is by a range of dishonest methods, including but not limited to:

  • bringing or accessing unauthorized materials during an examination (e.g., notes, books, or other information accessed via cell phones, computers, other technology or any other means)
  • submitting work (papers, homework assignments, computer programs, experimental results, artwork, etc.) that was created by another, substantially or in whole, as one's own
  • submitting answers on an exam that were obtained from the work of another person or providing answers or assistance to others during an exam when not explicitly permitted by the instructor
  • altering or forging academic documents, including but not limited to admissions materials, academic records, grade reports, add/drop forms, course registration forms, etc."

Academic integrity can also be violated by participating in "any behavior that violates the academic policies set forth by the student’s NYU School, department, or division."

Thus, it is important for students and faculty to clarify the extent of the use of AI tools in specific classes and assignments.

AI Detection and TurnItIn

AI Writing Detection has been disabled in Turnitin Similarity Reports. For more information, see the NYU Knowledge Base articles on Turnitin.

It should be noted that AI writing detection tools are unreliable, and they should be used with caution.

Evaluating AI-Generated Text

Exactly how Generative AI tools produce their output is often unclear. This uncertainty raises questions that can help evaluate the accuracy, reliability, relevance, and authority of the text these tools generate:

  • Where does the AI tool get its information from?
  • Can you identify the authors of the works the AI tool is citing or pulling paragraphs from? 
  • Who or what materials are not cited?
  • Do the citations listed exist? Are they accurate?
  • Is the AI tool's output paraphrasing or using entire sections of text that belong to someone else?
  • Can the information provided by the AI tool be verified? 
  • Has the information the AI tool used been peer-reviewed?

Finally, students may want to consider whether information drawn from its original source offers more value than what Generative AI tools generate.

Citing AI-Generated Text

In most cases, AI writing tools should not be used as an academic source of information. If AI output is used, it is best to locate and cite the original sources the tool references, after verifying that they exist, because AI tools often generate false citations (also known as "hallucinations").

However, if Generative AI is permitted for use in an assignment, instructors may want it cited when appropriate.

The three main citation styles (APA, MLA, and CMS) all treat AI-generated text as "personal communication." This is because text generated by AI tools often cannot be verified, replicated, retrieved, or recovered by anyone other than the original author at the time of its generation. Even persistent URLs generated by AI tools can often only be accessed by the author. Authors are advised to copy or save their entire prompt history and the full generated responses for reference and formal acknowledgement (e.g., in an appendix).

Scribbr, a proofreading and citation-checking site, offers guidance for each style.

For additional citation assistance, please see the Libraries' Citation Guide.