Research policy for Generative AI
Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out? (Babbage 1864)
Large language models (LLMs) are fundamentally different from search engines, functioning more as ‘vibe-machines’ than information retrieval systems. (Ballsun-Stanton and Hipólito 2024)
How do we thread the needle?
This keynote is supposed to be the “positive” keynote.
A technology is: (definition taken from this Macquarie University policy)
https://policies.mq.edu.au/download.php?associated=1&id=768&version=1
Re: LLMs representing "a triumph for the humanities": can the humanities as they are now be said to triumph, or do we need to incorporate much more knowledge about LLMs into humanities curricula, to increase understanding of their 'grammars', before they can "triumph"? How would you address such changes within the curriculum?
What kind of knowledge system can we create, and how can we operate with the concept of "knowledge", around tools that are so powerful and so little understood? As you suggested with the term "grimoire", for many people this will remain a little like "magic". What does that do to our present understanding of knowledge, and of a knowable, scientifically understandable world in which we are competent actors?
Should – and how could – universities address power imbalances and knowledge asymmetries within LLMs? For example, the majority of training data is in English and reflects US/Anglo perspectives, even when the service is used in a different language.
Do you agree that the main role of universities is to teach students how to think critically? If so, how should AI technologies be incorporated into the study process so that they help rather than hinder this objective?
How will AI generation affect the human ability to produce new knowledge, particularly in the social sciences?
You warned not to use LLMs for value judgements or ethics questions – but you have also used them to help with, e.g., preparing ethics reviews. How do you resolve this?
Data privacy implications: what parts of research can these tools be used for without SERIOUS data issues? Are there any models or terms of service that can be safely used within the research process?
* For example: if I want to use them to line-edit an unpublished paper that includes research data from a collaborative project, do I need to ask everyone on the project for permission?
* Which models will not store, and thereby claim ownership of, the unpublished data I have given them?

© 2024, License: CC-BY • https://github.com/Denubis/germany-keynote-ai-policy-briefing