Artificial Intelligence (AI):
AI refers to the development of computer systems that can perform tasks that would normally require human intelligence.
Machine Learning (ML):
ML is a branch of AI that uses computer algorithms to recognise patterns in data and make predictions based on those patterns.
Natural Language Processing:
NLP is another branch of AI that focuses on enabling computers to understand and imitate human language.
Plagiarism:
Plagiarism is the act of presenting someone else's words, ideas, or work as one's own, without proper attribution or permission. This includes both direct copying and paraphrasing without proper citation.
Bias:
Bias refers to a prejudice for or against a group of people. In AI, it can be seen in distortions in the data or in the algorithm itself, and may reinforce existing social, cultural, or economic stereotypes.
The Singularity:
The singularity is a hypothetical point in time when AI surpasses human intelligence, leading to growth that is uncontrollable or irreversible. This is the point of no return for a Matrix-esque future ruled by AI.
What is Gen-AI?
Generative AI models, such as ChatGPT, are natural language tools that have garnered unprecedented global attention since their release in late 2022. They are designed to understand and imitate human-like language, and are trained on enormous data sets drawn from books, social media, articles, and websites, giving them insight into the way humans communicate. This means you can customise their vernacular to any parameters you choose, from a six-year-old child to a pirate!
Gen-AI can be incredibly helpful. For example, it can provide general overviews of any topic, summarise longer articles, give feedback on grammar, translate texts, and offer personalised study support for those with differing learning needs.
While these tools can be very helpful, they also have their pitfalls, especially in an academic context. Because they only generate language-based responses, they don't actually know what they are saying, and therefore can't tell whether it is true. They can tell you something that 'sounds right' based on their data set, but 'sounds right' and 'right' are not the same thing. This is because Gen-AI tools are language prediction models, not an all-seeing, all-knowing AI.
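To see what "prediction, not knowledge" means, here is a minimal sketch of the idea behind language prediction, using a toy bigram model (a drastic simplification of how real Gen-AI models work, with a made-up miniature corpus standing in for their training data). It picks the word that most often followed the previous word in its data, with no notion of whether the result is true:

```python
from collections import defaultdict, Counter

# Toy corpus standing in for a model's training data (hypothetical example text).
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which: this is the entire "knowledge" of the model.
follows = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the word that most often follows `word` in the corpus."""
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

# The model only predicts what "sounds right" given its data:
print(predict_next("sat"))  # "on" — the only word ever seen after "sat"
```

Real models use vastly larger corpora and far more sophisticated statistics, but the principle is the same: the output reflects patterns in the data, not verified facts.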
In an academic context this has some serious implications. Many Gen-AI tools are unable to cite their sources, so their information is unverified. Their output can contain inaccuracies and biases, especially on controversial or sensitive topics (remember, social media posts are part of their data set, and probably Reddit threads too; this has not turned out well in the past). They do not create original thoughts, so critical thinking is not one of their functions. They also have serious implications for Academic Integrity standards and could cause issues around Academic Misconduct.
These models can be incredibly helpful; however, as with any unverified source, make sure you triple-check any information and take everything they tell you with a grain of salt.