Safe use of IT
- Make sure that your home computer, mobile phone and tablet are updated and have good security software. You can download free anti-virus software, for example MS Security Essentials and Avira Anti Virus, or purchase affordable anti-virus software via SURFspot.
- Use a trusted and secure (wifi) network.
- Be alert for phishing and fake emails. If you receive an unexpected email or an email from an unfamiliar sender, do not click on any links, open any attachments or enter any details. The university will never ask you for your passwords by email. If you think you have received a phishing email, report it in the helpdesk portal.
- Use OneDrive to save and share files safely.
- Log out and close the browser completely when you are finished. Do not save your passwords in your browser.
GAI use in education @FSW
Student guidelines
This page provides general information for students about GAI and guidelines for responsible use. Your lecturer will decide whether use of GAI is permitted. This is decided per course, and the specific directions on how and when to use GAI are communicated through Brightspace.
1. What is GAI?
Generative Artificial Intelligence (GAI) is a form of machine learning. It is the collective term for algorithms capable of creating new content. This content can take different forms: text or code, as well as images, videos and audio or a combination of all of these.
GAI generates output in response to a query (prompt) using generative models such as Large Language Models (LLMs) and relies on large data sets to do so. Some well-known examples are text generators such as ChatGPT, Copilot (Microsoft) and Gemini (Google), and image generators such as DALL·E and Midjourney.1
1. Adapted from KU Leuven; see: Responsible use of Generative Artificial Intelligence - Student (kuleuven.be)
2. What uses of GAI are not permitted under any circumstances?
Never permitted:
- Any form of literal copying and reproduction, without full source citation (quoting, referencing), of material generated by GAI.
- Any use of GAI during any form of assessment, unless it is indicated that the use of GAI is permitted.
See your program's Rules & Regulations for a further explanation of plagiarism and fraud, and of the consequences you may face as a student if found guilty of either.
3. Citing GAI in APA Style
In those cases where the use of generative AI software is allowed, you should refer to it just like any other software you use. At the FSW, we use APA as the default reference style. Other possible reference styles include Chicago, MLA, Harvard and Vancouver.
An APA-style reference for GAI includes the following elements:
Author. (Date). Title (Version) [Description]. Publisher. URL
Examples of citing GAI
Tool | Reference | In-text
ChatGPT | OpenAI. (2023). ChatGPT (September 25 version) [Large language model]. https://chat.openai.com/ (Note: because the author and publisher are identical, the publisher is omitted in this case.) | (OpenAI, 2023)
Bard | Google AI. (2023). Bard (October 23 version) [Large language model]. Google. https://bard.google.com/chat | (Google AI, 2023)
DALL·E | OpenAI. (2023). DALL·E 2 [Text-to-image model]. https://labs.openai.com/ | (OpenAI, 2023)
In your paper, also include the prompt you used to generate a particular text.
Important
Every time you re-use a prompt, you will get a different result. For that reason, ChatGPT and other GAI systems are unreliable sources of information, as the reader cannot look up which generated text you used.
4. Guidelines for responsible use
The following guidelines are based on KU Leuven's Tips and tricks for responsible use of GAI.
There is little transparency from AI developers about what is done with personal data entered into their tools. The data is also often stored in cloud applications that are not always GDPR compliant, frequently outside the EU. This raises issues for both personal data and new findings in scientific research: feeding any of this data to GAI could be equivalent to disclosure, which may cause a data breach and/or prevent the filing of a patent for that discovery. For that reason, we ask you not to enter confidential information, such as personal or research data, into GAI applications. Cases used in lectures and seminars are also considered sensitive information and may not be entered into GAI applications.
If possible, request that the GAI tool does not train its algorithm with your data. For ChatGPT, you can ask that the data you enter is not used to train the model via 'Make a privacy request'. Note that this does not relieve you of the previous guideline not to enter privacy-sensitive, IP-protected or copyrighted material into GAI.
GAI tools lack transparency about the sources used when generating output, which increases the likelihood of plagiarism. Copyright infringement is an additional risk: the databases used by GAI tools contain source material for which it is unclear whether the original authors consented to its use or whether copyright was respected. Be sure to paraphrase and reference properly (see the section on citing GAI in APA style above) and carry out thorough source research (see Verify GAI below). Do not enter any source material written by a third party, such as an article, email, information from lectures and seminars, text from lecture slides, or example cases, unless given explicit permission by the author.
Use GAI if permitted, but remain vigilant. It is not always possible to find out how the algorithms arrived at a particular conclusion, and there is no transparency about which sources were used. GAI's purpose is to generate text that seems as plausible as possible; veracity is not verified. In some cases the LLM may fabricate answers entirely, which is referred to as the GAI 'hallucinating'. This means there is no guarantee that the system's output is actually correct. Verify results and answers, and look for existing source material to cross-reference any claims made by GAI. You remain responsible for any incorrect information in your work, even if it originally came from GAI.
GAI tools are often trained on large quantities of unknown data, which may be outdated or no longer representative of current standards. Developers and publishers provide very little transparency about whether a GAI has inherent biases or about options to mitigate them. This means answers given by GAI may perpetuate stereotypes or biases.
Using GAI effectively is a skill in itself. Asking an overly simple question can produce an overly general or vague output. Use 'prompt engineering' to increase the specificity and usability of the output, but avoid leading or loaded questions and do not try to push the GAI towards a specific conclusion.
Be aware that the servers running GAI tools consume a great deal of energy, so use these tools only when they add value.
GAI applications can be a useful tool, but we expect students to be critical of the output at all times. Meaning is ultimately added by your own reasoning, critical analysis, creativity, and reflection.
Sources
- https://npuls.nl/en/news/npuls-introduces-the-magazine-smarter-education-with-ai/
- https://www.iesalc.unesco.org/wp-content/uploads/2023/04/ChatGPT-and-Artificial-Intelligence-in-higher-education-Quick-Start-guide_EN_FINAL.pdf
- https://www.kuleuven.be/onderwijs/student/onderwijstools/artificiele-intelligentie
Version 5-11-2024