SISA
CSPAI
Q1: In transformer models, how does the attention mechanism improve model performance compared to RNNs?
A. By enabling the model to attend to both nearby and distant words simultaneously, improving its understanding of long-term dependencies
B. By processing each input independently, ensuring the model captures all aspects of the sequence equally
C. By enhancing the model's ability to process data in parallel, ensuring faster training without compromising context
D. By dynamically assigning importance to every word in the sequence, enabling the model to focus on relevant parts of the input
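For reference, the weighting behavior Q1 describes can be sketched as scaled dot-product attention. This is a minimal NumPy illustration (the function name and toy dimensions are my own, not from any particular framework): every query position attends to every key position in one matrix product, so distant tokens are reached in a single step rather than through a chain of recurrent states.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attend to all positions at once: each output row is a
    softmax-weighted mix of the value vectors V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V, weights

# Toy example: 4 tokens with 8-dimensional embeddings (self-attention).
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out, w = scaled_dot_product_attention(x, x, x)
print(out.shape, w.shape)  # each row of w sums to 1
```

Note how the weight matrix `w` assigns an importance score from every token to every other token, which is the "dynamically assigning importance" idea in option D.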
Q2: What is a potential risk of LLM plugin compromise?
A. Better integration with third-party tools
B. Improved model accuracy
C. Unauthorized access to sensitive information through compromised plugins
D. Reduced model training time
Q3: In the Retrieval-Augmented Generation (RAG) framework, which of the following is the most critical factor for improving factual consistency in generated outputs?
A. Fine-tuning the generative model with synthetic datasets generated from the retrieved documents
B. Utilizing an ensemble of multiple LLMs to cross-check the generated outputs
C. Implementing a redundancy check by comparing the outputs from different retrieval modules
D. Tuning the retrieval model to prioritize documents with the highest semantic similarity
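As background for Q3's option D, retrieval by semantic similarity typically ranks candidate documents by cosine similarity between embedding vectors. The sketch below uses hand-made toy vectors (the embedding model that would produce real vectors is out of scope here, and the function name is my own):

```python
import numpy as np

def top_k_by_cosine(query_vec, doc_vecs, k=2):
    """Rank documents by cosine similarity to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sims = d @ q                  # cosine similarity per document
    order = np.argsort(-sims)     # highest similarity first
    return order[:k], sims[order[:k]]

# Toy 3-document corpus in a 4-dimensional embedding space.
docs = np.array([[1.0, 0.0, 0.0, 0.0],
                 [0.9, 0.1, 0.0, 0.0],
                 [0.0, 0.0, 1.0, 0.0]])
idx, scores = top_k_by_cosine(np.array([1.0, 0.05, 0.0, 0.0]), docs)
print(idx)  # indices of the two documents most similar to the query
```

The documents returned here are what a RAG pipeline would pass to the generator as grounding context.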
Q4: Which of the following is a primary goal of enforcing Responsible AI standards and regulations in the development and deployment of LLMs?
A. Maximizing model performance while minimizing computational costs
B. Developing AI systems with the highest accuracy regardless of data privacy concerns
C. Focusing solely on improving the speed and scalability of AI systems
D. Ensuring that AI systems operate safely, ethically, and without causing harm
Q5: In a Transformer model processing a sequence of text for a translation task, how does incorporating positional encoding impact the model's ability to generate accurate translations?
A. It ensures that the model treats all words as equally important, regardless of their position in the sequence
B. It simplifies the model's computations by merging all words into a single representation, regardless of their order
C. It speeds up processing by reducing the number of tokens the model needs to handle
D. It helps the model distinguish the order of words in the sentence, leading to more accurate translation by maintaining the context of each word's position
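To illustrate the mechanism Q5 asks about: sinusoidal positional encoding (one common scheme) produces a distinct position-dependent vector for each sequence index, which is added to the token embedding so that identical words at different positions get different representations. A minimal NumPy sketch (the function name is my own):

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d_model)); odd dims use cos."""
    positions = np.arange(seq_len)[:, None]        # (seq_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]       # (1, d_model // 2)
    angles = positions / (10000 ** (dims / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = sinusoidal_positional_encoding(seq_len=10, d_model=16)
# Each row is unique to its position, so "the cat" and "cat the"
# no longer look identical to the (order-agnostic) attention layers.
print(pe.shape)
```

Because attention itself is permutation-invariant, this added signal is what lets the model recover word order, which is why option D describes the intended effect.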