Informal Run-Time Analysis
Help Questions
A program searches an unsorted list of $n$ usernames to find a target name.
public static int findUser(String[] users, String target) {
    for (int i = 0; i < users.length; i++) {
        if (users[i].equals(target)) {
            return i;
        }
    }
    return -1;
}
How does the run-time of this algorithm change as the input size doubles?
It roughly squares: $O(n^2)$
It roughly doubles: $O(n)$
It stays constant: $O(1)$
It roughly adds one step: $O(n-1)$
Explanation
This question tests AP Computer Science A skills in informal run-time analysis, focusing on linear search through iteration. Informal run-time analysis requires understanding how an algorithm's execution time scales with input size, particularly examining loop structures and their relationship to n. In the provided linear search code, a single loop iterates through the array once, checking each element until it finds the target or reaches the end. Choice B is correct because when the input size doubles from n to 2n, the worst-case number of comparisons also doubles from n to 2n, maintaining a linear O(n) relationship. Choice A (O(n²)) is incorrect because it suggests quadratic growth, which would mean doubling the input would quadruple the time, but this algorithm has only one loop dependent on n. To help students: Use concrete examples like searching through 10 vs 20 items to visualize linear growth. Reinforce that "roughly doubles" describes linear complexity, while "roughly quadruples" would indicate quadratic complexity.
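To make the doubling concrete, a hypothetical helper (not part of the question) can count the comparisons a worst-case linear search performs; the class and method names here are illustrative only:

```java
import java.util.Arrays;

public class LinearSearchCount {
    // Counts comparisons made by a linear search; in the worst case
    // (target absent), every one of the n elements is compared.
    public static int comparisons(String[] users, String target) {
        int count = 0;
        for (int i = 0; i < users.length; i++) {
            count++;                        // one comparison per element
            if (users[i].equals(target)) {
                return count;
            }
        }
        return count;                       // target not found: n comparisons
    }

    public static void main(String[] args) {
        String[] ten = new String[10];
        String[] twenty = new String[20];
        Arrays.fill(ten, "x");
        Arrays.fill(twenty, "x");
        // Doubling n from 10 to 20 doubles the worst-case comparisons.
        System.out.println(comparisons(ten, "absent"));    // 10
        System.out.println(comparisons(twenty, "absent")); // 20
    }
}
```

Running this for 10 and then 20 elements shows the comparison count doubling along with n, which is exactly the linear growth the question describes.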
A recursive method computes the $n$th Fibonacci number for small $n$ in a math game.
public static int fib(int n) {
    if (n <= 1) {
        return n;
    } else {
        return fib(n - 1) + fib(n - 2);
    }
}
Which of the following best describes the algorithm's time complexity?
$O(n^2)$
$O(2^n)$
$O(n)$
$O(\log n)$
Explanation
This question tests AP Computer Science A skills in informal run-time analysis, focusing on recursive algorithms with exponential complexity. Informal run-time analysis of recursive methods requires understanding the branching factor and depth of the recursion tree. In the provided Fibonacci code, each call (except base cases) makes two recursive calls, creating a binary tree of calls with depth approximately n, resulting in roughly 2ⁿ total function calls. Choice B is correct because the recursive structure creates an exponential growth pattern: each increase in n roughly doubles the number of function calls, characteristic of O(2ⁿ) complexity. Choice C (O(n)) is incorrect because it would require only n function calls total, but this naive recursive approach recalculates the same Fibonacci values many times. To help students: Draw the recursion tree for small values like fib(5) to visualize the exponential growth. Contrast this with dynamic programming solutions that achieve O(n) by avoiding redundant calculations.
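For the suggested contrast with dynamic programming, one possible O(n) version (an illustrative sketch, not code from the question) computes each Fibonacci value exactly once with a loop:

```java
public class FibLinear {
    // Iterative version: each Fibonacci value is computed once, so the
    // run time grows linearly with n instead of exponentially.
    public static int fib(int n) {
        if (n <= 1) {
            return n;
        }
        int prev = 0;   // fib(i - 2)
        int curr = 1;   // fib(i - 1)
        for (int i = 2; i <= n; i++) {
            int next = prev + curr;
            prev = curr;
            curr = next;
        }
        return curr;
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // 55
    }
}
```

Doubling n here only doubles the loop iterations, while the naive recursive version from the question roughly squares its call count over the same change.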
A simple selection sort is used to sort $n$ product prices, regardless of current ordering.
public static void selectionSort(int[] prices) {
    for (int i = 0; i < prices.length - 1; i++) {
        int minIndex = i;
        for (int j = i + 1; j < prices.length; j++) {
            if (prices[j] < prices[minIndex]) {
                minIndex = j;
            }
        }
        int temp = prices[i];
        prices[i] = prices[minIndex];
        prices[minIndex] = temp;
    }
}
What is the run-time complexity of the algorithm in the code snippet?
$O(\log n)$
$O(n^2)$
$O(n)$
$O(1)$
Explanation
This question tests AP Computer Science A skills in informal run-time analysis, focusing on selection sort's nested loop structure. Informal run-time analysis involves recognizing standard sorting algorithms and their characteristic complexity patterns based on their implementation structure. In the provided selection sort code, the outer loop runs n-1 times, and for each iteration i, the inner loop searches through the remaining n-i elements to find the minimum, resulting in approximately n²/2 comparisons total. Choice B is correct because despite the decreasing inner loop iterations, the nested structure still yields O(n²) complexity, as the sum 1+2+...+(n-1) equals n(n-1)/2, which is quadratic. Choice A (O(log n)) is incorrect because logarithmic complexity requires dividing the problem size repeatedly, but selection sort examines every remaining element in each pass. To help students: Trace through small arrays to see how selection sort always makes the same number of comparisons regardless of initial ordering. Emphasize that both selection sort and bubble sort have O(n²) complexity due to their nested loop structures.
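The "same number of comparisons regardless of initial ordering" claim can be checked with an instrumented sketch (the counting wrapper is an illustrative addition, not part of the question):

```java
public class SelectionSortCount {
    // Selection sort that also counts comparisons; the count is always
    // (n-1) + (n-2) + ... + 1 = n(n-1)/2, no matter the initial order.
    public static int sortAndCount(int[] prices) {
        int comparisons = 0;
        for (int i = 0; i < prices.length - 1; i++) {
            int minIndex = i;
            for (int j = i + 1; j < prices.length; j++) {
                comparisons++;                  // one comparison per inner step
                if (prices[j] < prices[minIndex]) {
                    minIndex = j;
                }
            }
            int temp = prices[i];
            prices[i] = prices[minIndex];
            prices[minIndex] = temp;
        }
        return comparisons;
    }

    public static void main(String[] args) {
        // n = 6 in both cases: 6 * 5 / 2 = 15 comparisons, sorted or not.
        System.out.println(sortAndCount(new int[]{5, 3, 8, 1, 9, 2})); // 15
        System.out.println(sortAndCount(new int[]{1, 2, 3, 4, 5, 6})); // 15
    }
}
```

Because the comparison count is n(n-1)/2 in every case, doubling n roughly quadruples the work, which is the quadratic signature students should recognize.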
A game computes total points from a $rows \times cols$ grid of tiles, where $rows=cols=n$.
public static int totalPoints(int[][] board) {
    int sum = 0;
    for (int r = 0; r < board.length; r++) {
        for (int c = 0; c < board[0].length; c++) {
            if (board[r][c] > 0) {
                sum += board[r][c];
            }
        }
    }
    return sum;
}
Which of the following best describes the algorithm's time complexity?
$O(n)$
$O(n^2)$
$O(n \log n)$
$O(1)$
Explanation
This question tests AP Computer Science A skills in informal run-time analysis, focusing on 2D array traversal with nested loops. Informal run-time analysis requires understanding how multiple dimensions affect complexity, particularly when processing grid-like data structures. In the provided code, the outer loop iterates through n rows, and the inner loop iterates through n columns (since rows=cols=n), resulting in n×n = n² total cell visits. Choice B is correct because visiting every cell in an n×n grid requires O(n²) operations, as each of the n² cells must be examined exactly once to compute the sum. Choice A (O(n)) is incorrect because it would only account for processing a single row or column, not the entire 2D grid, missing the multiplicative effect of the nested loops. To help students: Visualize small grids (like 3×3 or 4×4) and count the total cells to see the quadratic relationship. Explain that for square matrices, the complexity is based on the total number of elements, which is n².
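For the suggested small-grid counting exercise, a minimal sketch (an illustrative helper, not part of the question) can tally cell visits directly:

```java
public class GridVisitCount {
    // Counts the cell visits made by a full nested-loop traversal
    // of a rows x cols grid; for an n x n grid this is n * n.
    public static int visits(int[][] board) {
        int count = 0;
        for (int r = 0; r < board.length; r++) {
            for (int c = 0; c < board[0].length; c++) {
                count++;    // one visit per cell
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(visits(new int[3][3])); // 9
        System.out.println(visits(new int[6][6])); // 36: doubling n quadruples visits
    }
}
```

Going from a 3×3 grid (9 visits) to a 6×6 grid (36 visits) shows the quadrupling that defines quadratic growth.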
A program multiplies two $n \times n$ matrices to produce an $n \times n$ result.
public static int[][] multiply(int[][] a, int[][] b) {
    int n = a.length;
    int[][] c = new int[n][n];
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            for (int k = 0; k < n; k++) {
                c[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    return c;
}
Which of the following best describes the algorithm's time complexity?
$O(1)$
$O(n^3)$
$O(n)$
$O(n^2)$
Explanation
This question tests AP Computer Science A skills in informal run-time analysis, focusing on matrix multiplication with triple nested loops. Informal run-time analysis of matrix operations requires understanding how each dimension contributes to the overall complexity. In the provided code, there are three nested loops, each running n times: i iterates through rows, j through columns, and k performs the dot product calculation, resulting in n³ total multiplication operations. Choice B is correct because the triple nested loop structure, where each loop depends on n, creates O(n³) complexity: for each of the n² elements in the result matrix, we perform n multiplications. Choice D (O(n²)) is incorrect because it would only account for visiting each cell in the result matrix once, but computing each cell requires n operations for the dot product. To help students: Visualize matrix multiplication for small matrices (like 2×2) to count operations. Explain that the number of loops often indicates the power of n in the complexity.
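The suggested operation count for small matrices can be sketched directly; this hypothetical counter (not part of the question) mirrors the triple-loop structure without doing any arithmetic on matrix entries:

```java
public class MatMulOpCount {
    // Counts the scalar multiplications performed by the standard
    // n x n matrix multiplication algorithm: n * n * n in total.
    public static int multiplications(int n) {
        int count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                for (int k = 0; k < n; k++) {
                    count++;    // one multiplication per (i, j, k) triple
                }
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(multiplications(2)); // 8
        System.out.println(multiplications(4)); // 64: doubling n multiplies work by 8
    }
}
```

Doubling n from 2 to 4 raises the count from 8 to 64, an eightfold increase, which is the cubic growth pattern the explanation describes.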
A recursive method counts how many times a number can be halved before reaching 1, for input size $n$.
public static int halveCount(int n) {
    if (n <= 1) {
        return 0;
    } else {
        return 1 + halveCount(n / 2);
    }
}
What is the run-time complexity of the algorithm in the code snippet?
$O(\log n)$
$O(1)$
$O(n)$
$O(n^2)$
Explanation
This question tests AP Computer Science A skills in informal run-time analysis, focusing on recursive algorithms with logarithmic complexity. Informal run-time analysis of recursive methods requires examining how the problem size changes with each recursive call and counting the total number of calls. In the provided code, each recursive call divides n by 2, continuing until n reaches 1, which happens after approximately log₂(n) divisions. Choice A is correct because halving the input at each step creates a logarithmic relationship: doubling the input size adds only one more recursive call, characteristic of O(log n) complexity. Choice C (O(n)) is incorrect because it would require the recursion depth to be proportional to n, but here we're dividing by 2 each time, not subtracting 1. To help students: Trace through specific values like n=8 (3 calls) and n=16 (4 calls) to see the logarithmic pattern. Connect this to binary search and other divide-and-conquer algorithms that exhibit similar O(log n) behavior.
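The suggested trace for n=8 and n=16 can be run directly with the method from the question (reproduced here in a small driver class for convenience):

```java
public class HalveTrace {
    // Same method as in the question: counts how many times n can be
    // halved before reaching 1.
    public static int halveCount(int n) {
        if (n <= 1) {
            return 0;
        }
        return 1 + halveCount(n / 2);
    }

    public static void main(String[] args) {
        // Doubling n adds only one more halving step.
        System.out.println(halveCount(8));  // 3
        System.out.println(halveCount(16)); // 4
        System.out.println(halveCount(32)); // 5
    }
}
```

Each doubling of the input adds exactly one step, which is the defining behavior of O(log n) growth.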
A teacher multiplies two $n \times n$ matrices using the Java code below. Which of the following best describes the algorithm's time complexity?
public static int[][] multiply(int[][] a, int[][] b) {
    int n = a.length;
    int[][] c = new int[n][n];
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            for (int k = 0; k < n; k++) {
                c[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    return c;
}
$O(n)$
$O(n^2)$
$O(n^3)$
$O(1)$
Explanation
This question tests AP Computer Science A skills in informal run-time analysis, focusing on selection and iteration. Informal run-time analysis involves estimating an algorithm's efficiency by examining its structure, such as loops and conditionals, to determine its Big O complexity. The matrix multiplication code contains three nested loops, each iterating n times, where n is the dimension of the square matrices being multiplied. Choice C is correct because the triple-nested structure results in n×n×n = n³ total operations, as each element of the result matrix requires n multiplications and additions. Choice B (O(n²)) is incorrect because it only accounts for the number of elements in the result matrix, not the n operations required to compute each element. To help students: Use small matrices (2×2 or 3×3) to count operations manually, emphasize that matrix multiplication inherently requires three indices (row, column, and summation), and discuss how the standard algorithm's O(n³) complexity has motivated research into faster methods.
A student uses the recursive method below to compute Fibonacci numbers for input $n$. Which of the following best describes the algorithm's time complexity?
public static int fib(int n) {
    if (n <= 1) {
        return n;
    }
    return fib(n - 1) + fib(n - 2);
}
$O(n)$
$O(\log n)$
$O(2^n)$
$O(1)$
Explanation
This question tests AP Computer Science A skills in informal run-time analysis, focusing on selection and iteration. Informal run-time analysis involves estimating an algorithm's efficiency by examining its structure, such as loops and conditionals, to determine its Big O complexity. This recursive Fibonacci implementation makes two recursive calls for each non-base case, creating a binary tree of calls where the number of nodes approximately doubles at each level, resulting in exponential growth. Choice C is correct because the recursion tree has approximately 2ⁿ nodes (more precisely, about 1.618ⁿ due to overlapping subproblems), making the time complexity O(2ⁿ). Choice A (O(n)) is incorrect because it would only apply to an iterative or memoized solution that computes each Fibonacci number once, not this naive recursive approach. To help students: Draw recursion trees for small values of n, highlight repeated calculations (like fib(2) being computed multiple times), and contrast with dynamic programming solutions to show how memoization reduces exponential to linear complexity.
A program searches an unsorted list of $n$ product IDs for a match using the code below. What is the run-time complexity of the algorithm in the code snippet?
public static int findId(int[] ids, int target) {
    for (int i = 0; i < ids.length; i++) {
        if (ids[i] == target) {
            return i;
        }
    }
    return -1;
}
$O(1)$
$O(n)$
$O(\log n)$
$O(n^2)$
Explanation
This question tests AP Computer Science A skills in informal run-time analysis, focusing on selection and iteration. Informal run-time analysis involves estimating an algorithm's efficiency by examining its structure, such as loops and conditionals, to determine its Big O complexity. The provided code shows a linear search through an unsorted array, with a single for loop that iterates through each element once until finding the target or reaching the end. Choice B is correct because in the worst case (target not found or at the end), the algorithm must check all n elements, resulting in O(n) linear time complexity. Choice C (O(log n)) is incorrect because logarithmic complexity requires dividing the problem space (like binary search on sorted data), which isn't possible with unsorted data. To help students: Emphasize that single loops through data typically indicate O(n) complexity, discuss how early returns don't change worst-case analysis, and contrast linear search with binary search to reinforce the importance of data organization.
A student uses the recursive Java method below to compute a value from an input $n$. Which of the following best describes the algorithm's time complexity?
public static int sumToN(int n) {
    if (n <= 0) {
        return 0;
    }
    return n + sumToN(n - 1);
}
$O(\log n)$
$O(n^2)$
$O(1)$
$O(n)$
Explanation
This question tests AP Computer Science A skills in informal run-time analysis, focusing on selection and iteration. Informal run-time analysis involves estimating an algorithm's efficiency by examining its structure, such as loops and conditionals, to determine its Big O complexity. The recursive method makes exactly n recursive calls (from n down to 0), with each call performing a constant amount of work (one addition and one recursive call). Choice D is correct because the recursion depth is n, and each level does O(1) work, resulting in O(n) total time complexity, essentially equivalent to a loop from 1 to n. Choice B (O(n²)) is incorrect because there's no nested iteration or multiple recursive calls per level that would create quadratic behavior. To help students: Draw recursion trees to visualize the call stack, relate recursive solutions to their iterative equivalents, and emphasize that single recursive calls typically maintain the same complexity as the recursion depth.
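For the suggested comparison with an iterative equivalent, one possible loop version (an illustrative sketch, not code from the question) makes the "loop from 1 to n" correspondence explicit:

```java
public class SumIterative {
    // Iterative equivalent of the recursive sumToN: one loop pass per
    // value from 1 to n, so the work grows linearly with n.
    public static int sumToN(int n) {
        int sum = 0;
        for (int i = 1; i <= n; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumToN(5));  // 15
        System.out.println(sumToN(10)); // 55
    }
}
```

The loop executes n times just as the recursion descends n levels, so both versions are O(n); the iterative form simply trades stack frames for a loop counter.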