Algorithmic Efficiency
Help Questions
A smartwatch app computes Fibonacci numbers for animations; it may request n up to 45, and each computation must finish under 0.1 seconds. Two approaches are given:
Pseudocode (Naive Recursion):
FIB_RECURSIVE(n)
    IF n <= 1
        RETURN n
    RETURN FIB_RECURSIVE(n-1) + FIB_RECURSIVE(n-2)
Time complexity: $O(2^n)$
Pseudocode (Dynamic Programming):
FIB_DP(n)
    IF n <= 1
        RETURN n
    prev <- 0
    curr <- 1
    FOR i <- 2 TO n
        next <- prev + curr
        prev <- curr
        curr <- next
    RETURN curr
Time complexity: $O(n)$
Considering the time complexities mentioned, which algorithm has a lower time complexity for large inputs?
A. They are equivalent, because both compute the same Fibonacci values in the same steps.
B. Naive recursion, because $O(2^n)$ is smaller than $O(n)$ for large $n$.
C. Dynamic programming, because $O(n)$ grows far slower than $O(2^n)$ as $n$ increases.
D. Naive recursion, because it uses fewer variables and therefore runs in $O(n)$.
Explanation
This question tests AP Computer Science Principles on algorithmic efficiency, specifically understanding exponential versus linear time complexity in recursive algorithms. Algorithmic efficiency refers to how effectively an algorithm performs in terms of time and space, often expressed using Big O notation to represent time complexity. In this passage, the comparison between naive recursive Fibonacci and dynamic programming highlights how eliminating repeated computation dramatically improves efficiency, with naive recursion having exponential O(2ⁿ) complexity while dynamic programming achieves linear O(n) complexity. Choice C is correct because it accurately identifies dynamic programming as more efficient, recognizing that O(n) grows far slower than O(2ⁿ) - for n=45, this is roughly the difference between 45 loop iterations and, per the O(2ⁿ) bound, over 35 trillion operations. This demonstrates understanding of exponential growth's severity. Choice B is incorrect because it claims O(2ⁿ) is smaller than O(n) for large n, which is backwards - exponential functions grow explosively compared to linear functions. This error often occurs when students misread the notation or don't understand exponential growth. To help students: Calculate actual values for small n to show exponential explosion (e.g., 2¹⁰=1024 vs 10). Trace through recursive calls to visualize repeated subproblems. Watch for: confusion about exponential notation or underestimating how quickly exponential functions grow.
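The gap is easy to see empirically. The sketch below is illustrative Python (not AP pseudocode) that counts how much work each approach does for the same n; even at n=25, the naive version makes hundreds of thousands of calls where the loop takes a couple dozen steps.

```python
def fib_recursive(n, calls):
    """Naive recursion: 'calls' tallies every invocation, exposing repeated subproblems."""
    calls[0] += 1
    if n <= 1:
        return n
    return fib_recursive(n - 1, calls) + fib_recursive(n - 2, calls)

def fib_dp(n):
    """Iterative DP: one loop pass per index, so work grows linearly with n."""
    if n <= 1:
        return n, 0
    prev, curr, steps = 0, 1, 0
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
        steps += 1
    return curr, steps

calls = [0]
value = fib_recursive(25, calls)
dp_value, dp_steps = fib_dp(25)
print(value, calls[0])      # 75025, computed with 242785 recursive calls
print(dp_value, dp_steps)   # 75025, computed with just 24 loop iterations
```

Both approaches return the same value; only the amount of work differs.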
A data team must sort sensorReadings each minute. Input size can spike from 1,000 to 100,000 readings, and each minute’s sort must finish before the next batch arrives. Two algorithms are compared:
Pseudocode (Bubble Sort):
procedure bubbleSort(sensorReadings)
    n <- length(sensorReadings)
    for i <- 1 to n - 1
        for j <- 1 to n - i
            if sensorReadings[j] > sensorReadings[j + 1]
                swap(sensorReadings[j], sensorReadings[j + 1])
Time complexity: $O(n^2)$.
Pseudocode (Merge Sort):
procedure mergeSort(sensorReadings)
    if length(sensorReadings) <= 1
        return sensorReadings
    mid <- length(sensorReadings) / 2
    left <- mergeSort(sensorReadings[1..mid])
    right <- mergeSort(sensorReadings[mid+1..end])
    return merge(left, right)
Time complexity: $O(n \log n)$.
Considering the time complexities mentioned, which algorithm has a lower time complexity for large inputs?
A. Merge sort, because $O(n \log n)$ grows slower than $O(n^2)$ as input increases.
B. Merge sort, because $O(n \log n)$ is the same as $O(n^2)$ when $n$ is large.
C. Bubble sort, because swapping adjacent values makes it $O(\log n)$ in practice.
D. Bubble sort, because Big O ignores input limits and focuses on constant-time swaps.
Explanation
This question tests AP Computer Science Principles on algorithmic efficiency, specifically comparing quadratic and linearithmic sorting algorithms. Algorithmic efficiency refers to how effectively an algorithm performs in terms of time and space, often expressed using Big O notation to represent time complexity. In this passage, the comparison between bubble sort (O(n²)) and merge sort (O(n log n)) for sorting sensor readings that can spike to 100,000 items highlights how algorithm choice becomes critical for large, time-sensitive datasets. Choice A is correct because it accurately identifies merge sort as more efficient for large inputs, with O(n log n) growing much slower than O(n²) - for n=100,000, this means roughly 1.7 million operations versus 10 billion operations. This demonstrates understanding of how different growth rates impact real-world performance. Choice C is incorrect because it claims bubble sort's adjacent swapping makes it O(log n), which is false - the nested loops clearly show O(n²) behavior regardless of how swaps are performed. This error often occurs when students focus on implementation details rather than loop structure. To help students: Calculate actual operation counts for n=100,000 to show the dramatic difference. Explain that meeting time constraints often requires choosing algorithms with better asymptotic complexity. Watch for: students who think implementation tricks can change fundamental time complexity or who underestimate the impact of quadratic growth on large datasets.
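For concreteness, here is one way the merge sort pseudocode above could be realized in Python; the `merge` helper, left abstract in the pseudocode, is spelled out here as a linear-time two-pointer merge.

```python
def merge_sort(readings):
    """Divide-and-conquer sort: about log2(n) levels, each merging n items total."""
    if len(readings) <= 1:
        return readings
    mid = len(readings) // 2
    left = merge_sort(readings[:mid])
    right = merge_sort(readings[mid:])
    return merge(left, right)

def merge(left, right):
    """Merge two already-sorted lists in linear time."""
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # at most one of these
    merged.extend(right[j:])  # still has elements
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```

Because every level of recursion does O(n) merging work and there are about log₂(n) levels, the total is O(n log n).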
A coding contest problem asks for Fibonacci(n) with $n \le 35$ and a 1-second limit. Two solutions are shown:
Pseudocode (Naive Recursion):
procedure fibRecursive(n)
    if n <= 1
        return n
    return fibRecursive(n - 1) + fibRecursive(n - 2)
Time complexity: $O(2^n)$.
Pseudocode (Dynamic Programming):
procedure fibDP(n)
    if n <= 1
        return n
    prev <- 0
    curr <- 1
    for i <- 2 to n
        next <- prev + curr
        prev <- curr
        curr <- next
    return curr
Time complexity: $O(n)$.
What is the primary advantage of using fibDP over fibRecursive?
A. It guarantees exact results only for large n, unlike recursion.
B. It sorts intermediate values, which is why it improves from $O(n^2)$ to $O(n)$.
C. It reduces repeated work, changing growth from $O(2^n)$ to $O(n)$.
D. It makes Fibonacci independent of n, so the runtime becomes $O(1)$.
Explanation
This question tests AP Computer Science Principles on algorithmic efficiency, specifically understanding how dynamic programming eliminates redundant computation. Algorithmic efficiency refers to how effectively an algorithm performs in terms of time and space, often expressed using Big O notation to represent time complexity. In this passage, the comparison shows how naive recursion recalculates the same Fibonacci values multiple times (leading to $O(2^n)$ complexity), while dynamic programming stores and reuses previously calculated values (achieving $O(n)$ complexity). Choice C is correct because it accurately identifies that dynamic programming reduces repeated work, changing growth from exponential $O(2^n)$ to linear $O(n)$ by storing intermediate results instead of recalculating them. This demonstrates understanding of how eliminating redundant computation improves efficiency. Choice D is incorrect because it claims the runtime becomes $O(1)$ (constant time), which would mean the algorithm takes the same time regardless of n - this is false, as the loop still runs about n times. This error often occurs when students confuse improved efficiency with constant-time operations. To help students: Draw the recursion tree for fib(5) showing repeated calculations, then show how DP calculates each value only once. Emphasize that $O(n)$ still depends on n, just linearly rather than exponentially. Watch for: students who think any optimization makes algorithms constant-time or who don't understand how storing results prevents recalculation.
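The "each value computed only once" idea can also be shown with an explicitly memoized version (a Python sketch, distinct from the iterative pseudocode above): a counter confirms that computing fib(35) does first-time work for each index 0 through 35 exactly once, so 36 computations total instead of billions of calls.

```python
def fib_memo(n, memo, computed):
    """Memoized recursion: each index is computed at most once, giving O(n) total work."""
    if n in memo:
        return memo[n]          # cache hit: no recomputation
    computed[0] += 1            # first (and only) time this index is computed
    memo[n] = n if n <= 1 else fib_memo(n - 1, memo, computed) + fib_memo(n - 2, memo, computed)
    return memo[n]

memo, computed = {}, [0]
print(fib_memo(35, memo, computed), computed[0])  # 9227465 36
```

Each subtree that the naive version would recompute is answered from the `memo` dictionary instead.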
A club leader sorts nameList for printing. The list is usually small (around 20 names), but can reach 5,000. Two algorithms are available:
Pseudocode (Bubble Sort):
procedure bubbleSort(nameList)
    n <- length(nameList)
    for i <- 1 to n - 1
        for j <- 1 to n - i
            if nameList[j] > nameList[j + 1]
                swap(nameList[j], nameList[j + 1])
Time complexity: $O(n^2)$.
Pseudocode (Merge Sort):
procedure mergeSort(nameList)
    if length(nameList) <= 1
        return nameList
    mid <- length(nameList) / 2
    left <- mergeSort(nameList[1..mid])
    right <- mergeSort(nameList[mid+1..end])
    return merge(left, right)
Time complexity: $O(n \log n)$.
Considering the time complexities mentioned, under what conditions does mergeSort outperform bubbleSort?
A. When nameList is tiny, because $O(n^2)$ grows slower than $O(n \log n)$.
B. When nameList is large, because $O(n \log n)$ scales better than $O(n^2)$.
C. When the list is unsorted, because merge sort becomes $O(n^2)$ in the worst case.
D. When input size is irrelevant, because Big O ignores $n$ and focuses only on swaps.
Explanation
This question tests AP Computer Science Principles on algorithmic efficiency, specifically understanding when different time complexities matter in practice. Algorithmic efficiency refers to how effectively an algorithm performs in terms of time and space, often expressed using Big O notation to represent time complexity. In this passage, the comparison between bubble sort (O(n²)) and merge sort (O(n log n)) highlights how the advantage of better time complexity becomes significant as input size grows, particularly when the list can reach 5,000 names. Choice B is correct because it accurately identifies that merge sort outperforms bubble sort when nameList is large, due to O(n log n) scaling better than O(n²) as n increases. This demonstrates understanding of how algorithmic efficiency impacts real-world performance. Choice A is incorrect because it reverses the relationship, claiming O(n²) grows slower than O(n log n), which is mathematically false. This error often occurs when students don't understand how to compare growth rates of different functions. To help students: Create tables showing actual values of n² vs n log n for various n values. Discuss that for small inputs (like 20 names), the difference may be negligible, but for larger inputs (like 5,000), it becomes significant. Watch for: students who think 'smaller' Big O always means faster regardless of input size, or who don't understand that both algorithms work correctly but differ in speed.
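The suggested table is a one-liner to generate. This Python sketch prints n², and n·log₂(n) for the two list sizes from the problem; at n=20 the absolute gap is tiny, while at n=5,000 it is roughly 25 million versus about 61 thousand.

```python
import math

# Rough operation estimates at the two sizes from the problem:
# n^2 (bubble sort) versus n * log2(n) (merge sort).
for n in (20, 5000):
    print(n, n * n, round(n * math.log2(n)))
```

The counts are only growth-rate estimates, not exact instruction counts, but the trend is what matters for the 5,000-name case.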
A help-desk tool searches a sortedList of ticket numbers that can grow to 2,000,000 entries; each lookup must be fast. Two algorithms are implemented:
Pseudocode (Linear Search):
procedure linearSearch(sortedList, target)
    for index <- 1 to length(sortedList)
        if sortedList[index] = target
            return index
    return -1
Time complexity: $O(n)$.
Pseudocode (Binary Search):
procedure binarySearch(sortedList, target)
    low <- 1
    high <- length(sortedList)
    while low <= high
        mid <- (low + high) / 2
        if sortedList[mid] = target
            return mid
        else if sortedList[mid] < target
            low <- mid + 1
        else
            high <- mid - 1
    return -1
Time complexity: $O(\log n)$.
Based on the algorithms described, under what conditions does binarySearch outperform linearSearch?
A. When the target is missing, because binary search changes to $O(n)$ in the worst case.
B. When the list is large, because linear search becomes $O(\log n)$ after enough queries.
C. When the list is unsorted, because binary search does not depend on ordering.
D. When the list is large and sorted, because $O(\log n)$ grows slower than $O(n)$.
Explanation
This question tests AP Computer Science Principles on algorithmic efficiency, specifically understanding when binary search provides advantages over linear search. Algorithmic efficiency refers to how effectively an algorithm performs in terms of time and space, often expressed using Big O notation to represent time complexity. In this passage, the comparison between linear search (O(n)) and binary search (O(log n)) on a sorted list of up to 2,000,000 ticket numbers highlights how logarithmic algorithms excel on large, sorted datasets. Choice D is correct because it accurately identifies that binary search outperforms linear search when the list is large and sorted, as O(log n) grows much slower than O(n) - for 2 million entries, binary search needs at most 21 comparisons versus potentially 2 million. This demonstrates understanding of both algorithm requirements and efficiency benefits. Choice C is incorrect because it claims binary search doesn't depend on ordering, when in fact binary search absolutely requires a sorted list to function correctly. This error often occurs when students memorize algorithm names without understanding their fundamental requirements. To help students: Emphasize that binary search's divide-and-conquer approach only works with sorted data. Show that log₂(2,000,000) ≈ 21, making the efficiency gain dramatic. Watch for: students who forget binary search's sorted list requirement or who don't appreciate the massive difference between O(n) and O(log n) at scale.
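The "at most 21 comparisons" claim can be checked directly. This Python sketch (with made-up ticket numbers 0..1,999,999) instruments binary search with a probe counter; even a target at the far end of 2,000,000 sorted entries is found within about 21 probes.

```python
def binary_search(sorted_list, target):
    """Return (index, probes); probes is bounded by roughly log2(n) + 1."""
    low, high, probes = 0, len(sorted_list) - 1, 0
    while low <= high:
        probes += 1                     # one probe per halving of the range
        mid = (low + high) // 2
        if sorted_list[mid] == target:
            return mid, probes
        elif sorted_list[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1, probes

tickets = list(range(2_000_000))        # already sorted, as binary search requires
index, probes = binary_search(tickets, 1_999_999)
print(index, probes)                    # found at the last position in <= 21 probes
```

A linear scan for the same target would examine all 2,000,000 entries.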
A library kiosk searches for a book ID in a sortedList with up to 10,000,000 IDs; each query must finish in under 50 ms. Two search methods are considered:
Pseudocode (Linear Search):
procedure linearSearch(sortedList, targetID)
    for index <- 1 to length(sortedList)
        if sortedList[index] = targetID
            return index
    return -1
Time complexity: $O(n)$.
Pseudocode (Binary Search):
procedure binarySearch(sortedList, targetID)
    low <- 1
    high <- length(sortedList)
    while low <= high
        mid <- (low + high) / 2
        if sortedList[mid] = targetID
            return mid
        else if sortedList[mid] < targetID
            low <- mid + 1
        else
            high <- mid - 1
    return -1
Time complexity: $O(\log n)$.
Based on the algorithms described, which algorithm has a lower time complexity for large inputs?
A. Binary search, because it works best even when the list is not sorted.
B. Linear search, because it checks items in order and avoids dividing the range.
C. Binary search, because $O(\log n)$ grows slower than $O(n)$ as $n$ increases.
D. Linear search, because $O(n)$ is the same as $O(\log n)$ for large lists.
Explanation
This question tests AP Computer Science Principles on algorithmic efficiency, specifically comparing search algorithm time complexities on sorted data. Algorithmic efficiency refers to how effectively an algorithm performs in terms of time and space, often expressed using Big O notation to represent time complexity. In this passage, the comparison between linear search and binary search on a sorted list highlights how binary search's O(log n) time complexity provides a massive advantage over linear search's O(n) when searching through up to 10,000,000 IDs. Choice C is correct because it accurately identifies binary search as more efficient for large inputs, due to its O(log n) time complexity growing much slower than O(n) as n increases. This demonstrates understanding that logarithmic growth is dramatically better than linear growth for large datasets. Choice A is incorrect because it claims binary search works best even on unsorted lists, which is false - binary search requires a sorted list to function correctly. This error often occurs when students memorize algorithm names without understanding their prerequisites. To help students: Calculate actual comparisons needed for both algorithms with n=10,000,000 (linear: up to 10 million, binary: about 23). Emphasize that binary search only works on sorted data. Watch for: students who forget the sorted list requirement or who don't grasp the dramatic difference between O(n) and O(log n) for large n.
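The "about 23" figure comes straight from repeated halving; a tiny Python check makes it concrete.

```python
# Count how many times 10,000,000 can be halved before reaching 1:
# each binary-search probe discards half of the remaining range.
n, halvings = 10_000_000, 0
while n > 1:
    n //= 2
    halvings += 1
print(halvings)  # 23
```

So binary search needs on the order of 23 probes where linear search may need up to 10 million.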
A wearable device computes Fibonacci numbers for an animation. It must handle n up to 40 under a tight time limit. Two implementations are considered:
Pseudocode (Naive Recursion):
procedure fibRecursive(n)
    if n <= 1
        return n
    return fibRecursive(n - 1) + fibRecursive(n - 2)
Time complexity: $O(2^n)$.
Pseudocode (Dynamic Programming):
procedure fibDP(n)
    if n <= 1
        return n
    fibValues[0] <- 0
    fibValues[1] <- 1
    for i <- 2 to n
        fibValues[i] <- fibValues[i - 1] + fibValues[i - 2]
    return fibValues[n]
Time complexity: $O(n)$.
Based on the algorithms described, which algorithm has a lower time complexity for large inputs?
A. Naive recursion, because it uses fewer variables and therefore has lower Big O time.
B. Dynamic programming, because its time complexity is $O(2^n)$ due to the loop.
C. Naive recursion, because $O(2^n)$ grows slowly until $n$ becomes very large.
D. Dynamic programming, because $O(n)$ increases much more slowly than $O(2^n)$.
Explanation
This question tests AP Computer Science Principles on algorithmic efficiency, specifically understanding exponential versus linear time complexity in recursive problems. Algorithmic efficiency refers to how effectively an algorithm performs in terms of time and space, often expressed using Big O notation to represent time complexity. In this passage, the comparison between naive recursive Fibonacci ($O(2^n)$) and dynamic programming Fibonacci ($O(n)$) highlights how memoization or iteration can dramatically reduce time complexity by avoiding redundant calculations. Choice D is correct because it accurately identifies dynamic programming as more efficient for large inputs, with $O(n)$ growing much more slowly than $O(2^n)$ - for n=40, this is the difference between about 40 operations versus over a trillion. This demonstrates understanding of exponential growth's severity. Choice C is incorrect because it claims $O(2^n)$ grows slowly until n becomes very large, when in reality exponential growth becomes problematic even for moderate values like n=40. This error often occurs when students underestimate exponential growth rates. To help students: Calculate actual values - for n=40, $2^{40} \approx 1.1$ trillion while 40 is just 40. Visualize the recursion tree to show repeated subproblems in the naive approach. Watch for: students who don't grasp how quickly exponential functions grow or who think all recursive solutions are inefficient.
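Python's arbitrary-precision integers make the n=40 magnitude check trivial to reproduce.

```python
# Magnitude check for n = 40: linear work stays tiny while 2^n explodes.
n = 40
print(n)        # about 40 loop iterations for the DP version
print(2 ** n)   # 1099511627776 -- over a trillion, per the O(2^n) bound
```

Seeing the twelve-digit number next to "40" is often more persuasive than the notation alone.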
A school app must sort scoreList of up to 200,000 integers in under 2 seconds. Two options are tested:
Pseudocode (Bubble Sort):
procedure bubbleSort(scoreList)
    n <- length(scoreList)
    for i <- 1 to n - 1
        for j <- 1 to n - i
            if scoreList[j] > scoreList[j + 1]
                swap(scoreList[j], scoreList[j + 1])
Time complexity: $O(n^2)$.
Pseudocode (Merge Sort):
procedure mergeSort(scoreList)
    if length(scoreList) <= 1
        return scoreList
    mid <- length(scoreList) / 2
    left <- mergeSort(scoreList[1..mid])
    right <- mergeSort(scoreList[mid+1..end])
    return merge(left, right)
Time complexity: $O(n \log n)$.
Based on the algorithms described, which algorithm has a lower time complexity for large inputs?
A. Merge sort, because $O(n \log n)$ grows slower than $O(n^2)$ for large $n$.
B. Bubble sort, because $O(n^2)$ is smaller than $O(n \log n)$ for large $n$.
C. Bubble sort, because nested loops reduce comparisons as the list grows.
D. Merge sort, because it is always faster regardless of input size or constraints.
Explanation
This question tests AP Computer Science Principles on algorithmic efficiency, specifically understanding and comparing algorithm time complexities. Algorithmic efficiency refers to how effectively an algorithm performs in terms of time and space, often expressed using Big O notation to represent time complexity. In this passage, the comparison between bubble sort and merge sort highlights how input size affects their running times, with bubble sort having a time complexity of O(n²) while merge sort has a time complexity of O(n log n). Choice A is correct because it accurately identifies merge sort as more efficient for large inputs, due to its lower time complexity of O(n log n), which grows much slower than O(n²) as n increases to 200,000. This demonstrates understanding of efficiency analysis. Choice B is incorrect because it misunderstands the relationship between O(n²) and O(n log n), incorrectly claiming that O(n²) is smaller for large n. This error often occurs when students misinterpret Big O notation or fail to understand how different functions grow. To help students: Use graphs to visualize how n², n log n, and other functions grow as n increases. Emphasize that for large values like 200,000, the difference between O(n²) and O(n log n) becomes dramatic. Watch for: students who memorize algorithm names without understanding their complexities or who confuse the meaning of 'smaller' in Big O context.
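To make the nested-loop cost concrete, here is a runnable Python version of the bubble sort pseudocode with a comparison counter (the score values are invented for illustration); for a list of length n it always performs n(n-1)/2 comparisons, which is where the O(n²) comes from.

```python
def bubble_sort(score_list):
    """In-place bubble sort; the comparison tally shows the quadratic nested-loop cost."""
    n = len(score_list)
    comparisons = 0
    for i in range(n - 1):
        for j in range(n - 1 - i):          # inner pass shrinks by one each time
            comparisons += 1
            if score_list[j] > score_list[j + 1]:
                score_list[j], score_list[j + 1] = score_list[j + 1], score_list[j]
    return comparisons

scores = [88, 42, 75, 99, 13, 60]
count = bubble_sort(scores)
print(scores, count)   # sorted list; 15 comparisons = 6 * 5 / 2
```

At n=200,000 the same formula gives about 20 billion comparisons, which is why bubble sort fails the 2-second budget.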
A music app searches for a song title in a sortedList of titles. The list can be as small as 30 titles or as large as 3,000,000, and the app has a strict time limit per search. Two algorithms are available:
Pseudocode (Linear Search):
procedure linearSearch(sortedList, targetTitle)
    for index <- 1 to length(sortedList)
        if sortedList[index] = targetTitle
            return index
    return -1
Time complexity: $O(n)$.
Pseudocode (Binary Search):
procedure binarySearch(sortedList, targetTitle)
    low <- 1
    high <- length(sortedList)
    while low <= high
        mid <- (low + high) / 2
        if sortedList[mid] = targetTitle
            return mid
        else if sortedList[mid] < targetTitle
            low <- mid + 1
        else
            high <- mid - 1
    return -1
Time complexity: $O(\log n)$.
Considering the time complexities mentioned, which is more efficient for small datasets?
A. Linear search, because $O(n)$ becomes $O(1)$ whenever the target is near the end.
B. Binary search, because $O(\log n)$ is always smaller than $O(n)$ for any $n$.
C. Linear search, because small $n$ can make the difference between $O(n)$ and $O(\log n)$ minor.
D. Binary search, because it runs in $O(n)$ time when the list is already sorted.
Explanation
This question tests AP Computer Science Principles on algorithmic efficiency, specifically understanding when simpler algorithms might be preferable despite worse Big O complexity. Algorithmic efficiency refers to how effectively an algorithm performs in terms of time and space, often expressed using Big O notation to represent time complexity. In this passage, the comparison between linear search (O(n)) and binary search (O(log n)) on small datasets (as small as 30 titles) highlights that Big O notation describes growth rates, not absolute performance for all input sizes. Choice C is correct because it accurately identifies that for small datasets, the difference between O(n) and O(log n) can be minor, and linear search's simpler implementation might even perform better due to lower overhead. This demonstrates understanding that Big O analysis is most relevant for large inputs. Choice B is incorrect because it claims O(log n) is always smaller than O(n) for any n, ignoring that for small values, implementation overhead and constant factors matter. This error often occurs when students treat Big O as an absolute measure rather than an asymptotic one. To help students: Show actual runtime comparisons for n=30 where linear search might complete in microseconds. Discuss how binary search's overhead (calculating midpoints, maintaining bounds) can outweigh benefits for small n. Watch for: students who think better Big O always means faster execution regardless of input size or implementation details.
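A back-of-the-envelope Python check shows why the gap is negligible at n=30: the worst cases are 30 versus about 5 comparisons, both far below any realistic per-search time limit.

```python
import math

# For a tiny list, the gap between O(n) and O(log n) is small in absolute terms.
n = 30
linear_worst = n                              # scan every title in the worst case
binary_worst = math.floor(math.log2(n)) + 1   # halve the range on each probe
print(linear_worst, binary_worst)             # 30 vs 5 -- both trivially fast
```

Contrast this with n=3,000,000, where the same formulas give 3,000,000 versus about 22.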
A library system searches a sortedList of up to 5,000,000 book IDs, and each lookup must finish within 1 millisecond. Two algorithms are considered:
Pseudocode (Linear Search):
LINEAR_SEARCH(sortedList, target)
    FOR i <- 1 TO LENGTH(sortedList)
        IF sortedList[i] = target
            RETURN i
    RETURN -1
Time complexity: $O(n)$
Pseudocode (Binary Search):
BINARY_SEARCH(sortedList, target)
    low <- 1
    high <- LENGTH(sortedList)
    WHILE low <= high
        mid <- (low + high) / 2
        IF sortedList[mid] = target
            RETURN mid
        ELSE IF sortedList[mid] < target
            low <- mid + 1
        ELSE
            high <- mid - 1
    RETURN -1
Time complexity: $O(\log n)$
Considering the time complexities mentioned, which algorithm has a lower time complexity for large inputs?
A. Binary search, because $O(\log n)$ grows more slowly than $O(n)$ as $n$ increases.
B. Linear search, because scanning avoids division and is therefore asymptotically faster.
C. Binary search, because it is $O(n)$ in the worst case on any list.
D. Linear search, because the list is sorted and that makes $O(n)$ become $O(\log n)$.
Explanation
This question tests AP Computer Science Principles on algorithmic efficiency, specifically understanding and comparing algorithm time complexities for search operations. Algorithmic efficiency refers to how effectively an algorithm performs in terms of time and space, often expressed using Big O notation to represent time complexity. In this passage, the comparison between linear search and binary search highlights how sorted data enables more efficient searching, with linear search having O(n) time complexity while binary search achieves O(log n) time complexity. Choice A is correct because it accurately identifies binary search as more efficient for large inputs, recognizing that O(log n) grows much more slowly than O(n) as the input size increases. This demonstrates understanding of logarithmic versus linear growth. Choice D is incorrect because it claims that being sorted makes linear search O(log n), which is false - linear search remains O(n) regardless of whether the list is sorted. This error often occurs when students confuse the prerequisites for binary search with its effect on linear search. To help students: Demonstrate with concrete numbers how log n grows (e.g., log₂(1,000,000) ≈ 20 versus 1,000,000 comparisons). Emphasize that binary search requires sorted data but linear search's complexity is unchanged by sorting. Watch for: students thinking sorting automatically improves any search algorithm or misunderstanding logarithmic growth.
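Another way to see the scaling difference (a Python sketch): doubling the list size doubles linear search's worst case, but adds only about one probe to binary search's.

```python
import math

# Worst-case binary-search probe counts as the sorted list doubles in size:
# linear search's worst case is n itself, binary search's is ~log2(n) + 1.
for n in (1_000_000, 2_000_000, 4_000_000):
    binary_probes = math.floor(math.log2(n)) + 1
    print(n, binary_probes)   # 20, then 21, then 22 probes
```

This "one extra probe per doubling" behavior is exactly what logarithmic growth means, and it is why the 1-millisecond budget is easy for binary search even at 5,000,000 IDs.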