Recursive Searching and Sorting
Help Questions
What is the time complexity of this recursive binary search on a sorted integer array of size $n$?
public class Searcher {
    public static int binarySearch(int[] nums, int target, int low, int high) {
        if (low > high) {
            return -1;
        }
        int mid = (low + high) / 2;
        if (nums[mid] == target) {
            return mid;
        }
        if (target < nums[mid]) {
            return binarySearch(nums, target, low, mid - 1);
        } else {
            return binarySearch(nums, target, mid + 1, high);
        }
    }
}
$O(1)$ because it uses constant extra variables.
$O(n)$ because it may check every element.
$O(\log n)$ because each call halves the interval.
$O(n \log n)$ because it divides and then merges.
Explanation
This question tests AP Computer Science A skills on recursive searching and sorting algorithms, focusing on analyzing the time complexity of recursive binary search. Time complexity analysis asks how the number of operations grows with the input size n. In binary search, each recursive call eliminates half of the remaining elements by searching either the left or the right half, depending on how the target compares with the middle element. Choice C is correct because the algorithm halves the search interval with each recursive call, resulting in at most about log₂(n) recursive calls before a base case is reached, giving O(log n) time complexity. Choice B is incorrect because O(n) would mean checking every element, which contradicts binary search's divide-and-conquer approach of discarding half of the remaining elements at each step. To help students: draw recursion trees showing how the problem size decreases from n to n/2 to n/4, and so on, and emphasize that the number of times n can be divided by 2 until reaching 1 is log₂(n), which bounds the recursion depth.
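To make the logarithmic depth concrete, here is a minimal instrumented sketch (the class name SearchDepth and the helper worstCaseCalls are illustrative, not part of the question) that counts recursive calls in the worst case, when the target is absent:

```java
// Illustrative sketch: count recursive calls of the binary search above.
public class SearchDepth {
    static int calls;

    static int binarySearch(int[] nums, int target, int low, int high) {
        calls++; // count every call, including the final empty-interval one
        if (low > high) return -1;
        int mid = (low + high) / 2;
        if (nums[mid] == target) return mid;
        if (target < nums[mid]) return binarySearch(nums, target, low, mid - 1);
        return binarySearch(nums, target, mid + 1, high);
    }

    // Worst case: search a sorted array of size n for an absent target.
    static int worstCaseCalls(int n) {
        int[] nums = new int[n];
        for (int i = 0; i < n; i++) nums[i] = i;
        calls = 0;
        binarySearch(nums, -1, 0, n - 1);
        return calls;
    }

    public static void main(String[] args) {
        // For n = 1024, the interval halves on every call, so the search
        // makes log2(1024) + 1 = 11 calls before hitting the base case.
        System.out.println(worstCaseCalls(1024)); // prints 11
    }
}
```

Doubling n from 1024 to 2048 would add only one more call, which is exactly what O(log n) growth predicts.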
For this recursive merge sort, what is the base case that stops further splitting of the array?
public class Sorter {
    public static void mergeSort(int[] nums, int left, int right) {
        // Base case: a one-element (or empty) segment is already sorted
        if (left >= right) {
            return;
        }
        int mid = (left + right) / 2;
        // Recursive step: sort both halves
        mergeSort(nums, left, mid);
        mergeSort(nums, mid + 1, right);
        // Merge sorted halves
        merge(nums, left, mid, right);
    }

    private static void merge(int[] nums, int left, int mid, int right) {
        int[] temp = new int[right - left + 1];
        int i = left;
        int j = mid + 1;
        int k = 0;
        // Merge elements in sorted order
        while (i <= mid && j <= right) {
            if (nums[i] <= nums[j]) {
                temp[k++] = nums[i++];
            } else {
                temp[k++] = nums[j++];
            }
        }
        while (i <= mid) {
            temp[k++] = nums[i++];
        }
        while (j <= right) {
            temp[k++] = nums[j++];
        }
        // Copy back
        for (int t = 0; t < temp.length; t++) {
            nums[left + t] = temp[t];
        }
    }
}
When mid == 0, return immediately.
When left >= right, return immediately.
When the array is in descending order, return immediately.
When left > right, return immediately.
Explanation
This question tests AP Computer Science A skills on recursive searching and sorting algorithms, focusing on identifying the base case in recursive merge sort. The base case in merge sort is the point at which a segment is so small that it is already sorted by definition. In this implementation, the condition if (left >= right) serves as the base case, stopping recursion when the segment has one element (left == right) or is empty (left > right). Choice B is correct because when left >= right, the segment contains at most one element, which is inherently sorted and requires no further splitting or merging. Choice D is incorrect because it covers only the empty-segment case (left > right) and misses the single-element case (left == right), which must also stop the recursion. To help students: explain that arrays of size 0 or 1 are sorted by definition, and trace through examples showing how recursive calls eventually reach segments where left equals right, the single elements that form the building blocks for merging.
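The splitting phase can be traced on its own. Here is a minimal sketch (the class and method names are illustrative, not part of the question) that runs only the recursive splitting and records every segment at which the base case fires:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: trace where mergeSort's base case stops the recursion.
public class BaseCaseTrace {
    static void splitBounds(int left, int right, List<String> baseCases) {
        if (left >= right) {                 // base case: at most one element
            baseCases.add(left + ".." + right);
            return;
        }
        int mid = (left + right) / 2;
        splitBounds(left, mid, baseCases);
        splitBounds(mid + 1, right, baseCases);
        // the real mergeSort would call merge(nums, left, mid, right) here
    }

    static List<String> baseCasesFor(int left, int right) {
        List<String> out = new ArrayList<>();
        splitBounds(left, right, out);
        return out;
    }

    public static void main(String[] args) {
        // A 4-element index range bottoms out at four single-element segments.
        System.out.println(baseCasesFor(0, 3)); // prints [0..0, 1..1, 2..2, 3..3]
    }
}
```

Every recorded segment satisfies left == right, confirming that recursion stops exactly at the single-element (or empty) segments the base case describes.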
Given this recursive binary search, what is the base case that stops recursion when the target cannot be found?
public class Searcher {
    // Returns index of target in sorted array, or -1 if not found
    public static int binarySearch(int[] nums, int target, int low, int high) {
        // Base case: interval is empty
        if (low > high) {
            return -1;
        }
        int mid = (low + high) / 2;
        // Found target
        if (nums[mid] == target) {
            return mid;
        }
        // Recursive step: search one half
        if (target < nums[mid]) {
            return binarySearch(nums, target, low, mid - 1);
        } else {
            return binarySearch(nums, target, mid + 1, high);
        }
    }
}
When low == high, return -1.
When nums[mid] != target, return -1.
When mid == 0, return -1.
When low > high, return -1.
Explanation
This question tests AP Computer Science A skills on recursive searching and sorting algorithms, focusing on understanding the base case in recursive binary search. In recursive algorithms, the base case is the condition that stops the recursion and prevents infinite calls. In this binary search implementation, the base case if (low > high) occurs when the search interval becomes invalid (empty), meaning the target cannot be found in the remaining search space. Choice D is correct because when low > high, the search boundaries have crossed, which happens after the interval has been narrowed repeatedly without finding the target. Choice A is incorrect because low == high represents a valid single-element interval that still needs to be checked. To help students: emphasize that base cases prevent infinite recursion and represent the simplest form of the problem, and practice tracing examples where the target is not found, observing how the interval shrinks until low exceeds high.
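A short trace of an unsuccessful search makes the crossing boundaries visible. This sketch (the class name MissTrace is illustrative) prints low and high at each call until the empty interval is reached:

```java
// Illustrative sketch: watch low and high cross when the target is absent.
public class MissTrace {
    static int binarySearch(int[] nums, int target, int low, int high) {
        System.out.println("low=" + low + " high=" + high);
        if (low > high) return -1;   // base case: empty interval, target absent
        int mid = (low + high) / 2;
        if (nums[mid] == target) return mid;
        if (target < nums[mid]) return binarySearch(nums, target, low, mid - 1);
        return binarySearch(nums, target, mid + 1, high);
    }

    public static void main(String[] args) {
        int[] nums = {2, 4, 7, 9, 12, 15};
        // 8 is not in the array: intervals visited are (0,5), (3,5), (3,3),
        // then (3,2), where low > high triggers the base case.
        System.out.println("result=" + binarySearch(nums, 8, 0, 5)); // result=-1
    }
}
```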
In this recursive binary search on a sorted array, how does the recursive call reduce the search interval each time?
public class Searcher {
    // Example call: binarySearch(new int[]{2,4,7,9,12,15}, 9, 0, 5)
    public static int binarySearch(int[] nums, int target, int low, int high) {
        if (low > high) {
            return -1;
        }
        int mid = (low + high) / 2;
        if (nums[mid] == target) {
            return mid;
        }
        // Recursive step: discard half the array
        if (target < nums[mid]) {
            return binarySearch(nums, target, low, mid - 1);
        } else {
            return binarySearch(nums, target, mid + 1, high);
        }
    }
}
It swaps elements so the target moves toward the middle.
It recurses on only one half by updating low or high past mid.
It always searches both halves and returns the smaller index.
It increments low by 1 until target is found.
Explanation
This question tests AP Computer Science A skills on recursive searching and sorting algorithms, focusing on how recursive binary search reduces the problem size. Binary search achieves efficiency by eliminating half of the remaining elements with each recursive call, leveraging the sorted nature of the array. In this implementation, after comparing the target with the middle element, the algorithm makes a recursive call with adjusted boundaries: either binarySearch(nums, target, low, mid - 1) for the left half or binarySearch(nums, target, mid + 1, high) for the right half. Choice B is correct because it accurately describes how the recursive call updates either the low or high parameter to exclude the already-checked middle element and its corresponding half. Choice D is incorrect because binary search does not increment linearly through elements; it jumps to the middle of intervals. To help students: use visual diagrams showing how the search interval shrinks with each recursive call, and trace specific examples showing how the low and high boundaries change, emphasizing that the middle element is always excluded from the next recursive call.
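The example call in the code comment can be traced directly. This sketch (class name IntervalTrace and helper trace are illustrative) records each (low, high) pair visited while searching for 9:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: record how (low, high) shrinks for the example call
// binarySearch({2,4,7,9,12,15}, 9, 0, 5).
public class IntervalTrace {
    static int binarySearch(int[] nums, int target, int low, int high, List<String> visited) {
        visited.add(low + ".." + high);
        if (low > high) return -1;
        int mid = (low + high) / 2;
        if (nums[mid] == target) return mid;
        if (target < nums[mid]) return binarySearch(nums, target, low, mid - 1, visited);
        return binarySearch(nums, target, mid + 1, high, visited); // mid itself is excluded
    }

    static String trace(int[] nums, int target) {
        List<String> visited = new ArrayList<>();
        int idx = binarySearch(nums, target, 0, nums.length - 1, visited);
        return visited + " -> index " + idx;
    }

    public static void main(String[] args) {
        // Searching for 9: interval 0..5 (mid 2, value 7, go right),
        // then 3..5 (mid 4, value 12, go left), then 3..3 (found at 3).
        System.out.println(trace(new int[]{2, 4, 7, 9, 12, 15}, 9));
    }
}
```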
In this recursive merge sort, how does the recursive call structure ensure the entire array becomes sorted?
public class Sorter {
    public static void mergeSort(int[] nums, int left, int right) {
        if (left >= right) {
            return;
        }
        int mid = (left + right) / 2;
        // Recursive calls sort subarrays
        mergeSort(nums, left, mid);
        mergeSort(nums, mid + 1, right);
        // Merge combines two sorted subarrays
        merge(nums, left, mid, right);
    }

    private static void merge(int[] nums, int left, int mid, int right) {
        int[] temp = new int[right - left + 1];
        int i = left, j = mid + 1, k = 0;
        while (i <= mid && j <= right) {
            temp[k++] = (nums[i] <= nums[j]) ? nums[i++] : nums[j++];
        }
        while (i <= mid) temp[k++] = nums[i++];
        while (j <= right) temp[k++] = nums[j++];
        for (int t = 0; t < temp.length; t++) nums[left + t] = temp[t];
    }
}
Each call sorts both halves, then merge combines them into one sorted segment.
Each call reverses its segment, so merges become sorted automatically.
Each call sorts only the left half, and the right half stays unchanged.
Each call searches for the minimum element and swaps it into place.
Explanation
This question tests AP Computer Science A skills on recursive searching and sorting algorithms, focusing on understanding the divide-and-conquer strategy in merge sort. Merge sort works by recursively breaking the problem into smaller subproblems until they are trivial to solve, then combining the solutions. In this implementation, each call to mergeSort first recursively sorts the left half (left to mid), then the right half (mid + 1 to right), and finally merges these two sorted halves into one sorted segment. Choice A is correct because it accurately describes the two-phase process: the recursive calls sort both halves independently, then the merge function combines the sorted halves while maintaining order. Choice C is incorrect because merge sort must sort both halves, not just the left half, to ensure the entire array becomes sorted. To help students: use tree diagrams to visualize the recursive decomposition and subsequent merging phases, and emphasize that the 'magic' happens in the merge step, which combines two sorted sequences into one larger sorted sequence.
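The merge step can be exercised on its own to show how two sorted halves become one sorted segment. This sketch (class name MergeDemo and helper mergeCopy are illustrative) mirrors the merge method from the question:

```java
import java.util.Arrays;

// Illustrative sketch: run only the merge step on two already-sorted halves.
public class MergeDemo {
    static void merge(int[] nums, int left, int mid, int right) {
        int[] temp = new int[right - left + 1];
        int i = left, j = mid + 1, k = 0;
        while (i <= mid && j <= right) {
            temp[k++] = (nums[i] <= nums[j]) ? nums[i++] : nums[j++];
        }
        while (i <= mid) temp[k++] = nums[i++];
        while (j <= right) temp[k++] = nums[j++];
        for (int t = 0; t < temp.length; t++) nums[left + t] = temp[t];
    }

    // Convenience helper: merge a copy so the input stays unchanged.
    static int[] mergeCopy(int[] nums, int left, int mid, int right) {
        int[] copy = nums.clone();
        merge(copy, left, mid, right);
        return copy;
    }

    public static void main(String[] args) {
        // Both halves are individually sorted: {3, 8, 9} and {2, 5, 7}.
        int[] nums = {3, 8, 9, 2, 5, 7};
        merge(nums, 0, 2, 5);
        System.out.println(Arrays.toString(nums)); // [2, 3, 5, 7, 8, 9]
    }
}
```

Note that merge relies on each half already being sorted; that precondition is exactly what the recursive calls guarantee.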
How does this recursive binary search handle the input array during recursive steps?
public class Searcher {
    // Searches within nums[low..high] without creating new arrays
    public static int binarySearch(int[] nums, int target, int low, int high) {
        if (low > high) {
            return -1;
        }
        int mid = (low + high) / 2;
        if (nums[mid] == target) {
            return mid;
        }
        if (target < nums[mid]) {
            return binarySearch(nums, target, low, mid - 1);
        } else {
            return binarySearch(nums, target, mid + 1, high);
        }
    }
}
It creates a new subarray for each recursive call.
It sorts the array again before each recursive call.
It reverses the array when target is smaller than nums[mid].
It searches using the same array but changes low and high.
Explanation
This question tests AP Computer Science A skills on recursive searching and sorting algorithms, focusing on understanding how recursive binary search manages memory and array access. Efficient recursive algorithms often work with the original data structure using index boundaries rather than creating copies. In this implementation, the same array reference nums is passed to each recursive call, and the search boundaries are adjusted through the low and high parameters, limiting which portion of the array is being searched. Choice D is correct because it accurately describes that the algorithm reuses the same array throughout all recursive calls while changing only the index boundaries to focus on different segments. Choice A is incorrect because creating new subarrays would be inefficient and unnecessary, since adjusting indices suffices to work with different portions of the original array. To help students: emphasize the difference between passing array references and creating array copies, and use memory diagrams to show how all recursive calls share the same array in memory, with only the low and high values on the call stack changing.
What is the time complexity of this recursive merge sort on an array of size $n$?
public class Sorter {
    public static void mergeSort(int[] nums, int left, int right) {
        if (left >= right) {
            return;
        }
        int mid = (left + right) / 2;
        mergeSort(nums, left, mid);
        mergeSort(nums, mid + 1, right);
        merge(nums, left, mid, right);
    }

    private static void merge(int[] nums, int left, int mid, int right) {
        int[] temp = new int[right - left + 1];
        int i = left, j = mid + 1, k = 0;
        while (i <= mid && j <= right) {
            temp[k++] = (nums[i] <= nums[j]) ? nums[i++] : nums[j++];
        }
        while (i <= mid) temp[k++] = nums[i++];
        while (j <= right) temp[k++] = nums[j++];
        for (int t = 0; t < temp.length; t++) nums[left + t] = temp[t];
    }
}
$O(n^2)$ because it compares every pair of elements.
$O(n)$ because each element moves at most once.
$O(\log n)$ because it only splits the array.
$O(n \log n)$ because it splits and merges across levels.
Explanation
This question tests AP Computer Science A skills on recursive searching and sorting algorithms, focusing on analyzing the time complexity of recursive merge sort. Understanding merge sort's complexity requires analyzing both the recursive splitting and the merging phases. The algorithm creates a recursion tree with about log n levels (since the problem size is halved at each level), and at each level, the merge operations collectively process all n elements. Choice D is correct because merge sort has O(n log n) time complexity: there are O(log n) levels in the recursion tree, and each level requires O(n) work to merge all segments at that level. Choice A is incorrect because O(n²) would imply nested loops comparing every pair of elements, but merge sort's divide-and-conquer approach avoids this quadratic behavior. To help students: draw the recursion tree showing how n elements are split into n single-element segments across log n levels, then calculate the total work at each level (always n) and multiply by the number of levels (log n) to get n log n total operations.
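The level-by-level count behind this answer can be written out as a short derivation. Each of the roughly $\log_2 n$ merge levels touches every one of the $n$ elements exactly once, so the total work is

$$\underbrace{n}_{\text{work per level}} \times \underbrace{O(\log n)}_{\text{levels}} = O(n \log n).$$

For example, with $n = 8$ there are $\log_2 8 = 3$ merge levels (four pairs of size 1, two pairs of size 2, one pair of size 4), and each level moves all 8 elements, for $8 \times 3 = 24$ element moves in the merge phase.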
How does the recursive call in this factorial code compute the final result for input n?
public class MathUtil {
    public static int factorial(int n) {
        if (n == 0) {
            return 1;
        }
        // Recursive call multiplies by the factorial of the smaller problem
        return n * factorial(n - 1);
    }
}
It adds n repeatedly until reaching 0.
It sorts numbers from 1 to n and returns the last value.
It divides n by 2 each call until reaching 1.
It multiplies n by the result of factorial(n - 1) until the base case.
Explanation
This question tests AP Computer Science A skills on recursive searching and sorting algorithms, focusing on understanding how recursive calls build up the final result. Recursive factorial demonstrates how solutions to smaller problems combine to solve larger ones. In this implementation, each recursive call computes n! by multiplying n with the factorial of (n - 1), following the mathematical definition n! = n × (n-1)!. Choice D is correct because it accurately describes the recursive process: each call multiplies the current value n by the result of factorial(n - 1), continuing until the base case, where factorial(0) returns 1. Choice A is incorrect because factorial involves multiplication, not addition, and the recursive structure multiplies values rather than adding them. To help students: trace through a concrete example like factorial(4), showing the call stack and how return values propagate back up, and emphasize that each recursive call waits for the result of the next call before completing its multiplication and returning.
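The propagation of return values can be made visible by printing each pending multiplication as the calls unwind. This sketch (class name FactorialTrace is illustrative) shows the innermost result returning first:

```java
// Illustrative sketch: print each multiplication as the recursion unwinds.
public class FactorialTrace {
    static int factorial(int n) {
        if (n == 0) return 1;                // base case: 0! = 1
        int smaller = factorial(n - 1);      // result of the smaller problem
        System.out.println(n + " * " + smaller + " = " + (n * smaller));
        return n * smaller;
    }

    public static void main(String[] args) {
        // Prints the multiplications bottom-up: 1*1=1, 2*1=2, 3*2=6, 4*6=24.
        System.out.println("factorial(4) = " + factorial(4)); // factorial(4) = 24
    }
}
```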
In this recursive factorial method, what is the base case that prevents infinite recursion?
public class MathUtil {
    // Computes n! for n >= 0
    public static int factorial(int n) {
        // Base case
        if (n == 0) {
            return 1;
        }
        // Recursive step: n! = n * (n-1)!
        return n * factorial(n - 1);
    }
}
When n == 0, call factorial(n) again.
When n < 0, return 1.
When n == 0, return 1.
When n == 1, return 0.
Explanation
This question tests AP Computer Science A skills on recursive searching and sorting algorithms, focusing on identifying base cases in simple recursive functions. The base case in recursion provides the termination condition and the simplest solution that does not require further recursive calls. In this factorial implementation, the base case if (n == 0) return 1 correctly implements the mathematical definition that 0! = 1, and it also serves as the stopping point for the recursive calls. Choice C is correct because when n equals 0, the method returns 1, which both matches the mathematical definition of factorial and prevents further recursive calls. Choice A is incorrect because calling factorial(n) again when n == 0 would create infinite recursion, since the parameter would never decrease. To help students: explain that base cases must be reachable and return concrete values, use the mathematical definition of factorial to justify why 0! = 1, and trace small examples like 3! to show how the recursion eventually reaches the base case.
How does this merge sort handle the input array in recursive steps before calling merge?
public class Sorter {
    // Sorts nums[left..right] by recursively sorting subranges
    public static void mergeSort(int[] nums, int left, int right) {
        if (left >= right) {
            return;
        }
        int mid = (left + right) / 2;
        // Recursive step: split into two index ranges
        mergeSort(nums, left, mid);
        mergeSort(nums, mid + 1, right);
        // Merge step combines sorted ranges
        merge(nums, left, mid, right);
    }

    private static void merge(int[] nums, int left, int mid, int right) {
        int[] temp = new int[right - left + 1];
        int i = left, j = mid + 1, k = 0;
        while (i <= mid && j <= right) {
            temp[k++] = (nums[i] <= nums[j]) ? nums[i++] : nums[j++];
        }
        while (i <= mid) temp[k++] = nums[i++];
        while (j <= right) temp[k++] = nums[j++];
        for (int t = 0; t < temp.length; t++) nums[left + t] = temp[t];
    }
}
It uses the same mid value for all recursive calls.
It removes elements from the array so recursion terminates sooner.
It copies the entire array into two new arrays at each call.
It updates left/right to work on smaller index ranges.
Explanation
This question tests AP Computer Science A skills on recursive searching and sorting algorithms, focusing on how merge sort manages array segments during recursion. Efficient sorting algorithms minimize memory usage by working with index ranges rather than creating array copies. In this merge sort implementation, the original array is passed to all recursive calls, but each call operates on a specific segment defined by the left and right indices, effectively dividing the work without copying data. Choice B is correct because it accurately describes how the algorithm updates the left and right parameters to define progressively smaller index ranges, allowing each recursive call to focus on its assigned portion of the array. Choice A is incorrect because copying the entire array at each recursive call would be extremely inefficient and unnecessary when index boundaries suffice. To help students: Use diagrams showing the same array with different colored segments representing the ranges each recursive call handles. Emphasize that changing indices is much more efficient than copying array elements, especially for large datasets.