Monday, July 15, 2024

CS301: Computer Architecture Certification Exam Answers


Computer architecture refers to the design and organization of the components that make up a computer system, including the CPU (Central Processing Unit), memory, input/output devices, and the interconnections between them. It encompasses both hardware and software aspects, focusing on how the hardware components are structured and how they interact to execute instructions and process data.

Key aspects of computer architecture include:

  1. Instruction Set Architecture (ISA): This defines the set of instructions that a CPU can execute and how those instructions are encoded. It serves as an interface between the hardware and software, allowing software developers to write programs that can run on different hardware platforms.
  2. Processor Design: This involves the design of the CPU, including its instruction execution pipeline, cache hierarchy, branch prediction, and other microarchitectural features. The goal is to maximize performance while minimizing power consumption and cost.
  3. Memory Hierarchy: This refers to the organization of memory in a computer system, including primary storage (RAM), secondary storage (such as hard drives or SSDs), and caches. The memory hierarchy is designed to optimize performance by minimizing the time it takes to access data.
  4. Input/Output (I/O) Systems: This includes the design of interfaces and controllers for connecting external devices to the computer, such as keyboards, mice, displays, and storage devices. The goal is to provide efficient and reliable communication between the CPU and peripherals.
  5. Parallelism and Concurrency: With the increasing demand for performance, modern computer architectures often incorporate parallelism and concurrency at various levels, including instruction-level parallelism within a single CPU core, multi-core processors, and distributed systems.
  6. System Architecture: This involves the organization and interconnection of multiple hardware components within a computer system, including buses, bridges, and interconnects. It also includes the design of system-level features such as interrupt handling, memory management, and bus protocols.
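The ISA and processor-design points above can be made concrete with a toy fetch-decode-execute loop. This is a minimal sketch, not any real ISA: the opcode names, the tuple encoding, and the register-file dictionary are all invented here for illustration.

```python
# Toy fetch-decode-execute cycle. Instructions are (opcode, dest, src1, src2)
# tuples and registers live in a dict -- an invented encoding, not a real ISA.

def run(program, registers):
    """Execute a list of (opcode, dest, src1, src2) tuples."""
    pc = 0  # program counter: index of the next instruction
    while pc < len(program):
        op, d, s1, s2 = program[pc]               # fetch
        if op == "add":                           # decode + execute
            registers[d] = registers[s1] + registers[s2]
        elif op == "sub":
            registers[d] = registers[s1] - registers[s2]
        elif op == "beq":                         # branch if equal; here s2 is
            if registers[d] == registers[s1]:     # the target instruction index
                pc = s2
                continue
        pc += 1                                   # fall through to next instruction
    return registers

regs = run([("add", "r1", "r2", "r3")], {"r1": 0, "r2": 2, "r3": 3})
print(regs["r1"])  # 5
```

The key architectural idea is the split the sketch makes visible: the tuple format is the "ISA" (the hardware/software contract), while the loop body is one possible implementation of it.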

Computer architects must balance competing design goals such as performance, power efficiency, cost, and scalability to create systems that meet the needs of their intended applications and users. They often use techniques such as simulation, modeling, and performance analysis to evaluate design choices and identify bottlenecks.
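One standard tool for the kind of performance analysis mentioned above is Amdahl's law: overall speedup is limited by the fraction of execution time that cannot be sped up. A minimal sketch:

```python
# Amdahl's law: speedup = 1 / (serial_fraction + parallel_fraction / n).
# The serial part bounds the achievable speedup no matter how many
# processors are added.

def amdahl_speedup(parallel_fraction, n_processors):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# Example: if 80% of a program parallelizes perfectly, 4 processors give
# 1 / (0.2 + 0.8/4) = 2.5x, and no processor count can exceed 1 / 0.2 = 5x.
print(amdahl_speedup(0.8, 4))  # 2.5
```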

CS301: Computer Architecture Exam Quiz Answers

  • More than one program in memory
  • More than one memory in the system
  • More than one processor in the system
  • More than two processors in the system
  • The ALU
  • Back to memory
  • The program counters
  • The instruction registers
  • CPU chip
  • Floppy disk
  • Hard disk
  • Memory chip
  • Apple’s iMacs
  • IBM’s Watson
  • Mobile devices
  • Supercomputers
  • Instruction Register
  • Memory Data Register
  • Memory Address Register
  • Program Counter Register
Computer Architecture 1
  • 00
  • 01
  • 10
  • 11
  • 00111110
  • 11000001
  • 11000010
  • 11100010
  • 01001010
  • 01001011
  • 01101010
  • 11001111
  • 00010000011001010000000000000101
  • 00010000011001010000000000001010
  • 00100000011001010000000000000101
  • 00100000011001010000000000001010
  • ST and LD
  • JR and BEQ
  • ADD and SUB
  • PUSH and POP
  • add
  • jr
  • ld
  • or
[Karnaugh-map figure lost in extraction: a four-variable map with ab across the columns (00 01 11 10) and cd down the rows, filled with 1 and X (don't-care) entries. The answer options below give candidate minimized expressions.]
  • b’d’ + a’b
  • ab’ + a’d’
  • d’ + ab’
  • ac + a’bd’
  • There are three stages
  • There is a clock line going to each full adder
  • The adder is slower than a carry-lookahead adder
  • Extra gates are needed besides the full adder gates
  • [ab + a’b’] S’ + [a’b + ab’] S
  • [ab + a’b] S’ + [a’b’ + ab’] S
  • [a’b + a’b’] S’ + [ab + ab’] S
  • [ab’ + a’b] S’ + [ a’b’ + ab] S
  • Loop a times {
        b = b + b
    } answer = b
  • c = 0
    Loop a times {
        c = c + b
    } answer = b
  • Assume b > a
    Loop n times {
        b = b - a
        if (b = 0) answer = n
        if (b < 0) answer = n - 1
    }
  • Assume b > a
    Loop n times {
        b = b - a
        if (b = 0) answer = 0
        if (b < 0) answer = b + a
    }
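The loop-based answer options above can be made runnable in Python. The surrounding question text was lost in this dump, so the intent (for example, "which fragment computes a × b?") is an assumption; the code below simply executes the first two fragments as written.

```python
# Runnable versions of two of the pseudocode options above. The question
# text is not preserved in the source, so what they are "supposed" to
# compute is an assumption.

def option_doubling(a, b):
    # "Loop a times { b = b + b } answer = b"  -> computes b * 2**a
    for _ in range(a):
        b = b + b
    return b

def option_repeated_add(a, b):
    # "c = 0; Loop a times { c = c + b }"  -> c ends up as a * b.
    # (The option as printed says "answer = b", which would discard c;
    # that may be a typo in the source, preserved above as-is.)
    c = 0
    for _ in range(a):
        c = c + b
    return c

print(option_doubling(3, 2))      # 2 doubled 3 times = 16
print(option_repeated_add(3, 2))  # 3 * 2 = 6
```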

  • 1 XOR, 1 AND, 2 OR
  • 1 XOR, 2 AND, 1 OR
  • 2 XOR, 2 AND, 1 OR
  • 2 XOR, 1 AND, 2 OR
  • PC
  • PC+4
  • 2*PC
  • 2*PC-1
  • One stage must wait for data from another stage in the pipeline
  • The pipeline is not able to provide any speedup to execution time
  • The next instruction is determined based on the results of the currently-executing instruction
  • Hardware is unable to support the combination of instructions that should execute in the same clock cycle
  • One stage must wait for data from another stage in the pipeline
  • The pipeline is not able to provide any speedup to execution time
  • The next instruction is determined based on the results of the currently-executing instruction
  • Hardware is unable to support the combination of instructions that should execute in the same clock cycle
  • Control hazard
  • Static parallelism
  • Dynamic parallelism
  • Speculative execution
  • It is more expensive than other types of cache organizations
  • Its access time is greater than that of other cache organizations
  • Its cache hit ratio is typically worse than with other organizations
  • It does not allow simultaneous access to the intended data and its tag
  • 0
  • 1
  • 3
  • 5
  • A disk
  • A cache
  • The register files
  • The main memory
  • Disk
  • Cache
  • Page table
  • Virtual Memory
  • Asynchronous
  • External
  • Internal
  • Synchronous
  • There are no redundant check disks
  • The number of redundant check disks is equal to the number of data disks
  • The number of redundant check disks is less than the number of data disks
  • The number of redundant check disks is more than the number of data disks
  • C/C++
  • MPI
  • OpenMP
  • Python
  • There is no improvement in performance as the number of processors increases
  • There is a diminishing improvement in performance as the number of processors increases
  • There is an increasing improvement in performance as the number of processors increases
  • There can be no more than a 5 times improvement in performance as the number of processors increases
  • 1.5 times faster
  • 1.67 times faster
  • 2 times faster
  • 3 times faster
  • Uniform memory access
  • A single physical address space
  • One physical address space per processor
  • Multiple memories shared by multiprocessors
  • SIMD Machines
  • MIMD machines
  • Shared Memory Multiprocessors
  • Distributed Shared Memory Multiprocessors
  • Most programs are too long
  • The use of cache memory for data
  • The use of cache memory for instructions
  • Because of compiler limitations
  • It is a processor that has multiple levels of cache
  • It is a processor that is efficient for all types of computing
  • It is a special purpose processor only useful for graphics processing
  • It is a processor used in all types of applications that involve data parallelism
  • 00101001.11
  • 00110100.11
  • 00110110.10
  • 00111011.01
  • In the stack
  • In the memory
  • In the CPU register
  • After OP code in the instruction
  • F = x + y’z
  • F = xy’ + yz + xz
  • F = xy + y’z + xz
  • F = xy’z + xy’z’ + x’yz + x’yz
  • AND, OR
  • OR, NOT
  • XOR, OR
  • XOR, AND
  • AND gates and MUXes
  • NOT gates and MUXes
  • OR gates and DEMUXes
  • XNOR gates and DECODERs
  • 2
  • 3
  • 4
  • 5
  • A data hazard
  • A memory fault
  • A control hazard
  • A structural hazard
  • Value prediction
  • Branch prediction
  • Memory unit forwarding
  • Execution unit forwarding
  • Carry lookahead
  • Branch prediction
  • Register renaming
  • Out of order execution
  • “Hit under miss”
  • High associativity
  • Multiported caches
  • Segregated caches
  • Cache, Main Memory, Disk, Register
  • Cache, Main Memory, Register, Disk
  • Cache, Register, Main Memory, Disk
  • Register, Cache, Main Memory, Disk
  • Cache memory
  • Volatile memory
  • Non-cache memory
  • Non-volatile memory
  • 2
  • 4
  • 16
  • 32
  • Threads may use local variables
  • Threads may use private variables
  • Threads may use shared variables
  • Using a semaphore is not effective
  • Increase in speed of processor chips
  • Increase in power density of the chip
  • Increase in video and graphics processing
  • Increase in cost of semiconductor manufacturing
  • Load balancing
  • Grid computing
  • Web search engine
  • Scientific computing
  • A Monte Carlo integration
  • Any highly sequential program
  • A C++ program with lots of for loops
  • A program with fine-grained parallelism
  • Clock frequency
  • Transistors on a chip
  • Processors on a chip
  • Chip power consumption
  • Controlled transfer
  • Conditional transfer
  • Uncontrolled transfer
  • Unconditional transfer
  • 6E
  • 7D
  • 8A
  • B5
  • 1.0 × 10⁻⁹
  • 10.0 × 10⁻⁹
  • 100.00 × 10⁻⁹
  • 1000.00 × 10⁻⁹
  • Commander
  • Compiler
  • Interpreter
  • Simulator
  • add
  • beq
  • jr
  • ld
  • Data memory and Register File take part
  • Instruction memory and data memory take part
  • Instruction memory, ALU, and register take part
  • Instruction memory, Register File, ALU, and data memory take part
  • Cache
  • Register
  • Hard disk
  • Main memory
  • The synchronous bus is better: 20.1 vs. 15.3 MB/s
  • The synchronous bus is better: 30 vs. 18.2 MB/s
  • The asynchronous bus is better: 13.3 vs. 11.1 MB/s
  • The asynchronous bus is better: 20.1 vs. 15.3 MB/s
  • RAID 4 does not use parity
  • RAID 4 uses bit-interleaved parity
  • RAID 4 uses block-interleaved parity
  • RAID 4 uses distributed block-interleaved parity
  • Multiple threads are used in multiple cores
  • Multiple threads are used in multiple processors
  • Multiple threads share a single processor, but do not overlap
  • Multiple threads share a single processor in an overlapping fashion
  • It stays the same
  • It decreases to zero
  • It approaches the execution time of the sequential part of the code
  • It approaches the execution time of the non-sequential part of the code
  • 1 state, 2 inputs, 2 outputs
  • 2 states, 2 inputs, 1 output
  • 3 states, 1 input, 2 outputs
  • 3 states, 2 inputs, 1 output
  • A computer that is used by one person only
  • A computer that runs only one kind of software
  • A computer that is assigned to one and only one task
  • A computer that is meant for application software only
  • DTL
  • PMOS
  • RTL
  • TTL
[Karnaugh-map figure lost in extraction: a four-variable map with ab across the columns (00 01 11 10) and cd down the rows, filled with 1 and X (don't-care) entries. The answer options below give candidate minimized expressions.]
  • cd’ + bd
  • c’ + ab’
  • c’d + b’d’
  • ad + b’d’
a b c | z
0 0 0 | 0
0 0 1 | 1
0 1 0 | 1
0 1 1 | 1
1 0 0 | 0
1 0 1 | 1
1 1 0 | 1
1 1 1 | 1

Select one:

  • a + b
  • b + c
  • ac + b
  • a’b + c
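The candidate expressions above can be checked mechanically against the truth table. A quick sketch, with the table data transcribed from the rows given in this dump:

```python
# Truth table (a, b, c) -> z, transcribed from the question above.
table = {
    (0, 0, 0): 0, (0, 0, 1): 1, (0, 1, 0): 1, (0, 1, 1): 1,
    (1, 0, 0): 0, (1, 0, 1): 1, (1, 1, 0): 1, (1, 1, 1): 1,
}

def matches(expr):
    """True if expr(a, b, c) reproduces z for every row of the table."""
    return all(expr(a, b, c) == z for (a, b, c), z in table.items())

print(matches(lambda a, b, c: a | b))             # a + b:   fails at row 001
print(matches(lambda a, b, c: b | c))             # b + c:   matches every row
print(matches(lambda a, b, c: (a & c) | b))       # ac + b:  fails at row 001
print(matches(lambda a, b, c: ((1 - a) & b) | c)) # a'b + c: fails at row 110
```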
  • Loop a times {
        b = b + b
    } answer = b
  • c = 0
    Loop a times {
        c = c + b
    } answer = b
  • Assume b > a
    Loop n times {
        b = b - a
        if (b = 0) answer = n
        if (b < 0) answer = n - 1
    }
  • Assume b > a
    Loop n times {
        b = b - a
        if (b = 0) answer = 0
        if (b < 0) answer = b + a
    }

  • The decoding of the instruction
  • The reading of the program counter value
  • The execution of operation using the ALU
  • The fetching of the instruction from the instruction memory
  • Decode the instruction; execute the instruction; transfer the data
  • Decode the instruction; transfer the data; execute the instruction
  • Execute the instruction; decode the instruction; transfer the data
  • Transfer the data; execute the instruction; decode the instruction
  • One stage must wait for data from another stage in the pipeline
  • The pipeline is not able to provide any speedup to execution time
  • The next instruction is determined based on the results of the currently-executing instruction
  • Hardware is unable to support the combination of instructions that should execute in the same clock cycle
  • Caching
  • Pipelining
  • Carry lookahead
  • Branch prediction
  • Pipelining
  • Data hazard
  • Concurrency
  • Instruction level parallelism
  • The cache block number
  • Whether there is a write-through or not
  • Whether the requested word is in the cache or not
  • Whether the cache entry contains a valid address or not
  • A disk
  • A cache
  • The register files
  • The main memory
  • Tape drive; PT
  • PT; victim cache
  • Dcache; Write buffer
  • Dcache; Main memory
  • The synchronous bus is better: 25 vs. 18.2 MB/s
  • The synchronous bus is better: 30 vs. 25.2 MB/s
  • The asynchronous bus is better: 13.3 vs. 11.1 MB/s
  • The asynchronous bus is better: 30 vs. 25.2 MB/s
  • 100.2 MB/s
  • 130.6 MB/s
  • 150.8 MB/s
  • 170.0 MB/s
  • Asynchronous
  • External
  • Internal
  • Synchronous
  • There are no redundant check disks
  • The number of redundant check disks is equal to the number of data disks
  • The number of redundant check disks is less than the number of data disks
  • The number of redundant check disks is more than the number of data disks
  • 1.3333
  • 2
  • 2.6666
  • 8
  • Weak scaling
  • Timing issues
  • Strong scaling
  • Communication overhead
  • DTL RTL CMOS TTL
  • DTL RTL TTL CMOS
  • RTL DTL TTL CMOS
  • RTL TTL DTL CMOS
  • 1
  • n
  • log n
  • 2n
  • Decoding the instruction
  • Reading the program counter value
  • Executing the operation using the ALU
  • Fetching the instruction from the instruction memory
  • The program counters
  • The output of the ALU
  • Data from data memory
  • Decoding instructions from instruction memory
  • The number of pipe stages
  • 5 times that of a non-pipelined machine
  • The ratio of the fetch cycle period to the clock period
  • The ratio of time between instructions and clock cycle time
  • Value prediction
  • Branch prediction
  • Memory unit forwarding
  • Execution unit forwarding
  • 131.0 MB/s
  • 229.4 MB/s
  • 327.9 MB/s
  • 350.1 MB/s
  • Ranking a linked list
  • A matrix multiplication
  • Any highly sequential program
  • A program with fine-grained parallelism
