NVIDIA
NCA-GENL
Q1:
Which of the following principles are widely recognized for building trustworthy AI? (Choose two.)
☐ A. Conversational
☐ B. Low latency
☐ C. Privacy
☐ D. Scalability
☐ E. Nondiscrimination
Q2:
Which of the following claims about quantization in the context of deep learning are correct? (Choose two.)
☐ A. Quantization can help save power and reduce heat production.
☐ B. It consists of removing weights whose values are zero.
☐ C. It leads to a substantial loss of model accuracy.
☐ D. It helps reduce memory requirements and achieve better cache utilization.
☐ E. It only involves reducing the number of bits of the parameters.
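To make the trade-offs in this question concrete, here is a minimal sketch of symmetric 8-bit quantization; the weight values and the int8 range [-127, 127] are illustrative assumptions, not part of the exam material:

```python
# Hypothetical example: symmetric 8-bit quantization of a small weight vector.
weights = [0.42, -1.37, 0.05, 2.11, -0.9]

# The scale maps the largest magnitude onto the int8 range [-127, 127].
scale = max(abs(w) for w in weights) / 127

# Quantize: each weight is stored as a small integer (1 byte vs 4 for float32),
# which is where the memory and cache-utilization savings come from.
q = [round(w / scale) for w in weights]

# Dequantize for use at inference time; only a bounded rounding error remains.
deq = [v * scale for v in q]
max_err = max(abs(a - b) for a, b in zip(weights, deq))

print(q)                      # integer codes in [-127, 127]
print(max_err <= scale / 2)   # rounding error bounded by half a quantization step
```

The bounded error illustrates why quantization typically costs only a small, not a substantial, amount of accuracy.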
Q3:
In transformer-based LLMs, how does the use of multi-head attention improve model performance compared to single-head attention, particularly for complex NLP tasks?
○ A. Multi-head attention reduces the model's memory footprint by sharing weights across heads.
○ B. Multi-head attention allows the model to focus on multiple aspects of the input sequence simultaneously.
○ C. Multi-head attention eliminates the need for positional encodings in the input sequence.
○ D. Multi-head attention simplifies the training process by reducing the number of parameters.
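A minimal NumPy sketch of the mechanism behind this question: each head applies its own learned projections before scaled dot-product attention, so the heads can attend to different aspects of the sequence in parallel; the dimensions and random weights below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def attention(q, k, v):
    # Scaled dot-product attention for a single head.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)   # softmax over keys
    return w @ v

seq_len, d_model, n_heads = 4, 8, 2
d_head = d_model // n_heads
x = rng.normal(size=(seq_len, d_model))     # toy input sequence

# Each head gets independent Q/K/V projections, letting it specialize;
# the per-head outputs are concatenated back to d_model.
heads = []
for _ in range(n_heads):
    wq, wk, wv = (rng.normal(size=(d_model, d_head)) for _ in range(3))
    heads.append(attention(x @ wq, x @ wk, x @ wv))

out = np.concatenate(heads, axis=-1)
print(out.shape)   # (4, 8): same shape as the input, built from 2 heads
```

Note that the parameter count is the same as one full-width head; the gain is the multiple independent attention patterns, not fewer parameters.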
Q4:
In Exploratory Data Analysis (EDA) for Natural Language Understanding (NLU), which method is essential for understanding the contextual relationship between words in textual data?
○ A. Computing the frequency of individual words to identify the most common terms in a text.
○ B. Applying sentiment analysis to gauge the overall sentiment expressed in a text.
○ C. Generating word clouds to visually represent word frequency and highlight key terms.
○ D. Creating n-gram models to analyze patterns of word sequences like bigrams and trigrams.
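For reference, extracting bigrams and trigrams is a short sliding-window operation; the sample sentence below is an illustrative assumption:

```python
from collections import Counter

def ngrams(tokens, n):
    # Slide a window of size n across the token list.
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "the model learns word order from context".split()
bigrams = ngrams(tokens, 2)
trigrams = ngrams(tokens, 3)

# Counting n-gram frequencies exposes contextual word patterns that
# single-word counts (option A) cannot capture.
bigram_counts = Counter(bigrams)
print(bigrams[0])   # ('the', 'model')
```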
Q5:
What type of model would you use in emotion classification tasks?
○ A. Auto-encoder model
○ B. Siamese model
○ C. Encoder model
○ D. SVM model