Q1 – Match List-1 with List-2 according to the input that each compiler phase processes
List – 1 | List – 2 |
(a)Syntax tree | I. Code Generator |
(b)Intermediate Representation | II. Semantic Analyzer |
(c)Token Stream | III. Lexical Analyzer |
(d)Character Stream | IV. Syntax Analyzer |
Choose the correct answer from the options given below:
(a)-IV, (b)-III, (c)-I, (d)-II
(a)-II, (b)-I, (c)-IV, (d)-III
(a)-II, (b)-IV, (c)-I, (d)-III
(a)-IV, (b)-I, (c)-II, (d)-III
(UGC NET DEC 2023)
Ans – (2)
Explanation – A compiler translates human-readable source code into machine-executable code through a series of phases.
In the first phase, the (III) Lexical Analyzer takes the (d) character stream (raw source code) and converts it into tokens: units with specific roles such as keywords, operators, and identifiers.
The (IV) Syntax Analyzer then parses this (c) token stream and builds the syntax tree, checking that the grammar rules are satisfied.
Next, the (II) Semantic Analyzer checks the (a) syntax tree for logical errors such as type mismatches or undeclared variables.
Once syntax and semantics have been verified, the Intermediate Code Generator produces an intermediate version of the code, which makes optimization easier. Finally, the (I) Code Generator transforms the (b) intermediate representation into the final machine code, ready for execution.
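The first of these phases can be sketched as a minimal tokenizer that turns a character stream into a token stream, the input the syntax analyzer consumes next. The token categories and regular expressions below are illustrative assumptions, not part of any real compiler.

```python
import re

# Hypothetical token categories for a toy language (illustrative only).
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),          # integer literals
    ("IDENT",  r"[A-Za-z_]\w*"), # identifiers and keywords
    ("OP",     r"[+\-*/=]"),     # operators
    ("SKIP",   r"\s+"),          # whitespace, discarded
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def tokenize(source):
    """Lexical analysis sketch: character stream -> list of (kind, text) tokens."""
    tokens = []
    for match in MASTER.finditer(source):
        if match.lastgroup != "SKIP":
            tokens.append((match.lastgroup, match.group()))
    return tokens

print(tokenize("x = y + 42"))
# [('IDENT', 'x'), ('OP', '='), ('IDENT', 'y'), ('OP', '+'), ('NUMBER', '42')]
```

The resulting token stream is exactly what item (c) in the question refers to.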
Q2 – One of the purposes of using intermediate code in compilers is to
Make parsing and semantic analysis simpler
Improve error recovery and error reporting
Increase the chances of reusing the machine independent code optimizer in other compilers
Improve the register allocation
(UGC NET DEC 2023)
Ans – (3)
Explanation – Intermediate code is used primarily to make compilation more flexible and efficient. Instead of translating high-level code directly into machine code, the compiler first translates it into an intermediate representation that is independent of the target machine.
This achieves portability, since the same intermediate code can be retargeted to different machines, and machine-independent optimizations can be applied to improve performance.
Most importantly for this question, the machine-independent code optimizer that works on this representation can be reused in other compilers, reducing development effort.
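A common machine-independent intermediate representation is three-address code. The sketch below lowers a tiny expression tree into it; the tuple node format `(op, left, right)` is an assumption made for this example, not a standard.

```python
from itertools import count

def lower(node, code, temps):
    """Emit three-address instructions for an expression tree.

    Leaves are variable names (strings); internal nodes are (op, left, right)
    tuples (a hypothetical format for this sketch). Returns the operand that
    holds the node's value.
    """
    if isinstance(node, str):
        return node                        # leaf: a variable name
    op, left, right = node
    l = lower(left, code, temps)
    r = lower(right, code, temps)
    temp = f"t{next(temps)}"               # fresh temporary
    code.append(f"{temp} = {l} {op} {r}")
    return temp

code = []
# a + b * c  lowers to:  t1 = b * c ; t2 = a + t1
lower(("+", "a", ("*", "b", "c")), code, count(1))
print(code)  # ['t1 = b * c', 't2 = a + t1']
```

An optimizer written against this flat instruction list never needs to know which machine the code will finally run on.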
Q3 – Which of the following are commonly used parsing techniques in NLP (Natural Language Processing) for syntactic analysis?
(a) Top-down parsing
(b) Bottom-up parsing
(c) Dependency parsing
(d) Statistical machine translation
(e) Earley parsing
Choose the correct answer from the options given below
(a), (c), (d), (e) only
(b), (c), (d), (e) only
(a), (b), (c), (e) only
(a) and (b) only
(UGC NET DEC 2023)
Ans – (3)
Explanation – Parsing in Natural Language Processing (NLP) is the process of analyzing the grammatical structure of a sentence.
Top-down parsing – As the name indicates, it starts at the top: the start symbol is expanded into simpler constituents using the grammar rules.
An example is Recursive Descent Parsing.
Bottom-up parsing – This method starts from the input tokens and builds the parse tree upward by applying grammar rules in reverse (reductions).
An example is Shift-Reduce Parsing.
Dependency Parsing – Focuses on the relationships between words in a sentence (subject, object, modifier, and so on). It is used for syntactic analysis.
Statistical Machine Translation – A translation technique, not a parsing technique.
Earley Parsing – A parsing technique based on dynamic programming useful for ambiguous grammars.
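Option (a) can be illustrated with a tiny recursive-descent (top-down) parser. The toy grammar and lexicon below are hypothetical, chosen only to show how the start symbol S is expanded downward until terminals match the input words.

```python
# Toy phrase-structure grammar (hypothetical, for illustration):
#   S -> NP VP ; NP -> Det N ; VP -> V NP
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "N"]],
    "VP": [["V", "NP"]],
}
LEXICON = {"the": "Det", "dog": "N", "cat": "N", "chased": "V"}

def parse(symbol, tokens, pos):
    """Top-down: try to derive `symbol` from tokens[pos:].

    Returns the position after the matched span, or None on failure.
    """
    if symbol in LEXICON.values():                  # terminal category
        if pos < len(tokens) and LEXICON.get(tokens[pos]) == symbol:
            return pos + 1
        return None
    for production in GRAMMAR[symbol]:              # expand the non-terminal
        p = pos
        for part in production:
            p = parse(part, tokens, p)
            if p is None:
                break
        else:
            return p
    return None

tokens = "the dog chased the cat".split()
print(parse("S", tokens, 0) == len(tokens))  # True: the sentence is accepted
```

A bottom-up (shift-reduce) parser would instead start from the words and reduce them toward S; dependency and Earley parsers refine these ideas for real NLP grammars.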
Q4 – Arrange the following phases of a compiler as per their order of execution (start to end)
(a) Target code generation
(b) Syntax Analysis
(c) Code optimization
(d) Semantic Analysis
(e) Lexical Analysis
Choose the correct answer from the options given below
(b), (e), (d), (a), (c)
(e), (d), (b), (a), (c)
(e), (b), (d), (c), (a)
(b), (d), (e), (a), (c)
(UGC NET DEC 2023)
Ans – (3)
Explanation – Phases of a Compiler (In Correct Order of Execution)
A compiler translates source code into machine code through many phases. The following is the correct order of execution of these phases
(e) Lexical Analysis – This is the first phase; the source code is broken into tokens.
(b) Syntax Analysis (Parsing) – This is done to see whether the tokens conform to certain grammar rules.
(d) Semantic Analysis – Checks whether the statements make sense logically (e.g., type checking).
(c) Code Optimization – The intermediate code is optimized for better performance.
(a) Target Code Generation – Produces the final machine code.
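The order above can be sketched as a simple pipeline of stages (names only; the stages here are labels for illustration, not implementations):

```python
# Compiler phases in execution order, matching answer (e), (b), (d), (c), (a).
PHASES = [
    "Lexical Analysis",        # (e) character stream -> tokens
    "Syntax Analysis",         # (b) tokens -> parse tree
    "Semantic Analysis",       # (d) parse tree -> checked tree
    "Code Optimization",       # (c) intermediate code -> improved code
    "Target Code Generation",  # (a) intermediate code -> machine code
]
print(" -> ".join(PHASES))
```

Note that Intermediate Code Generation sits between Semantic Analysis and Code Optimization in a full compiler; it is simply not among the options listed in this question.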
Q5 – Consider a grammar E -> E + n | E x n | n for a sentence n + n x n, the handles in the right sentential form of the reduction are?
n, E + n and E + n x n
n, E + n and E + E x n
n, n + n and n + n x n
n, E + n and E x n
(UGC NET DEC 2023)
Ans – (4)
Explanation – A handle is the substring of a right sentential form that matches the right side of a production and whose reduction is the next step in reversing a rightmost derivation.
The available productions are E → E + n, E → E x n and E → n.
Step 1: In n + n x n, the handle is the leftmost n. Reducing it by E → n gives E + n x n. Handle: n.
Step 2: In E + n x n, the handle is E + n. Reducing it by E → E + n gives E x n. Handle: E + n.
Step 3: In E x n, the handle is E x n. Reducing it by E → E x n gives E. Handle: E x n.
So the handles are n, E + n and E x n, and option 4 is the answer.
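The three reduction steps above can be traced mechanically. The sketch below hard-codes this particular reduction sequence to record each handle; it is not a general shift-reduce parser.

```python
def reduce_sentence(form):
    """Trace the bottom-up reduction of 'n + n x n', recording each handle."""
    handles = []
    # Step 1: reduce the leftmost n by E -> n
    handles.append("n")
    form = form.replace("n", "E", 1)       # n + n x n  ->  E + n x n
    # Step 2: reduce E + n by E -> E + n
    handles.append("E + n")
    form = form.replace("E + n", "E", 1)   # E + n x n  ->  E x n
    # Step 3: reduce E x n by E -> E x n
    handles.append("E x n")
    form = form.replace("E x n", "E", 1)   # E x n  ->  E
    return handles, form

handles, final = reduce_sentence("n + n x n")
print(handles, final)  # ['n', 'E + n', 'E x n'] E
```

Reaching the start symbol E confirms the sentence is derivable and that the handles match option 4.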
Q6 – Which of the following symbol table implementation is best suited if access time is to be minimum?
Linear list
Search tree
Hash Table
Self-organisation list
(UGC NET DEC 2023)
Ans – (3)
Explanation – The symbol table implementation with the minimum access time is the Hash Table.
A hash table provides O(1) average time complexity for search, insertion, and deletion, achieved by a hash function that distributes keys uniformly across the table. Search therefore takes constant time on average, the fastest among the listed data structures.
The other options have higher access times –
A linear list takes O(n) time for a sequential scan.
A balanced binary search tree takes O(log n) per operation.
A self-organizing list keeps frequently accessed elements near the front, but worst-case access is still O(n).
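A minimal sketch of a hash-table-backed symbol table follows; Python's `dict` is itself a hash table, so lookups and inserts run in O(1) on average. The class and attribute names are assumptions for this example.

```python
class SymbolTable:
    """Symbol table backed by a hash table (Python dict)."""

    def __init__(self):
        self._table = {}                  # name -> attribute mapping

    def insert(self, name, **attrs):
        self._table[name] = attrs         # average O(1) insertion

    def lookup(self, name):
        return self._table.get(name)      # average O(1) search, None if absent

symbols = SymbolTable()
symbols.insert("count", type="int", scope="local")
print(symbols.lookup("count"))  # {'type': 'int', 'scope': 'local'}
```

A linear list would force the compiler to scan every entry on each identifier lookup, which is why hash tables dominate in practice.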