Editor: Kadir Emre Oto
Reviewers: Muhammed Burak Buğrul, Tahsin Enes Kuru
It may be necessary to determine whether an array or a solution set contains a specific value, and this finding process is called searching. In this article, the three most common search algorithms are discussed: linear search, binary search, and ternary search.
This visualization may help you understand how the search algorithms work.
The simplest search algorithm is linear search, also known as sequential search. In this technique, the elements of the collection are checked one by one; if an element matches the key, the algorithm returns its index, otherwise it returns \(-1\).

Its time complexity is \(\mathcal{O}(N)\).
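A minimal sketch of the idea in C++ (the function name and signature are illustrative, not from the article):

```cpp
#include <vector>

// Linear search: scan every element until the key is found.
// Returns the index of the first match, or -1 when absent. O(N).
int linear_search(const std::vector<int>& a, int key) {
    for (int i = 0; i < (int)a.size(); ++i)
        if (a[i] == key)
            return i;
    return -1;
}
```

Note that the data does not need to be sorted; this is both the strength and the weakness of linear search.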
Linear search is a slow algorithm because it compares every element of the set with the search key. For sorted data there is a much faster technique: binary search. After each comparison, the algorithm uses the sorted order to eliminate half of the remaining data.

We can also use binary search on monotonically increasing functions in the same way.
\[
\begin{align*}
T(N) &= T\left(\tfrac{N}{2}\right) + \mathcal{O}(1) \\
T(N) &= \mathcal{O}(\log N)
\end{align*}
\]
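The halving step above can be sketched as follows in C++ (a hypothetical helper, assuming the input vector is sorted in increasing order):

```cpp
#include <vector>

// Binary search on a sorted array: each comparison halves the
// remaining range [l, r]. Returns the index of the key, or -1.
// O(log N).
int binary_search_idx(const std::vector<int>& a, int key) {
    int l = 0, r = (int)a.size() - 1;
    while (l <= r) {
        int m = l + (r - l) / 2;      // midpoint, written to avoid overflow
        if (a[m] == key) return m;
        if (a[m] < key)  l = m + 1;   // key can only be in the right half
        else             r = m - 1;   // key can only be in the left half
    }
    return -1;
}
```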
Suppose that we have a unimodal function \(f(x)\) on an interval \([l, r]\), and we are asked to find the local minimum or the local maximum value of the function, depending on its behavior.
There are two types of unimodal functions:
The function \(f(x)\) strictly increases for \(x \leq m\), reaches a global maximum at \(x = m\), and then strictly decreases for \(x \geq m\). There are no other local maxima.

The function \(f(x)\) strictly decreases for \(x \leq m\), reaches a global minimum at \(x = m\), and then strictly increases for \(x \geq m\). There are no other local minima.
In this document, we will implement the search for the first type of unimodal function; the second type can be solved using the same logic.
\(m_1\) and \(m_2\) can be selected as \(m_1 = l + \frac{r-l}{3}\) and \(m_2 = r - \frac{r-l}{3}\) to avoid increasing the time complexity.
\[
\begin{align*}
T(N) &= T\left(2 \cdot \tfrac{N}{3}\right) + \mathcal{O}(1) \\
T(N) &= \mathcal{O}(\log N)
\end{align*}
\]
Sorting algorithms are used to put the elements of an array in a certain order according to a comparison operator. Numerical and lexicographical orders are the most common. There are a large number of sorting algorithms, but we discuss four of them:
For a better understanding, you are strongly recommended to visit this visualization site after reading the topics.
Imagine that you are playing a card game and want to sort your cards before the game. Your strategy is simple: part of your hand is already sorted, and every time you pick up the next card from the unsorted part, you insert it into its correct place in the sorted part. After you apply this process to all cards, the whole hand is sorted.

This is the basic idea for sorting an array. We assume that the first element of the array forms the sorted part, and the remaining elements form the unsorted part. At each step, we take the leftmost element of the unsorted part and insert it into the sorted part. In this way, the left part of the array remains sorted after every iteration, and when no element is left in the unsorted part, the whole array is sorted.
Merge Sort is one of the fastest sorting algorithms and uses the Divide and Conquer paradigm. The algorithm divides the array into two halves, sorts each half recursively with the same function, and combines them in linear time by repeatedly selecting the smaller of the two front values.
\[
\begin{align*}
T(N) &= 2 \cdot T\left(\tfrac{N}{2}\right) + \mathcal{O}(N) \\
T(N) &= \mathcal{O}(N \cdot \log N)
\end{align*}
\]
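A sketch of the divide, recurse, and merge steps in C++ (using half-open ranges \([l, r)\); names are illustrative):

```cpp
#include <vector>

// Merge sort: split in half, sort each half recursively, then merge
// the two sorted halves in linear time. Sorts a[l..r). O(N log N).
void merge_sort(std::vector<int>& a, int l, int r) {
    if (r - l <= 1) return;                 // base case: 0 or 1 element
    int m = (l + r) / 2;
    merge_sort(a, l, m);                    // solve the left half
    merge_sort(a, m, r);                    // solve the right half
    std::vector<int> tmp;
    tmp.reserve(r - l);
    int i = l, j = m;
    while (i < m && j < r)                  // pick the smaller front value
        tmp.push_back(a[i] <= a[j] ? a[i++] : a[j++]);
    while (i < m) tmp.push_back(a[i++]);
    while (j < r) tmp.push_back(a[j++]);
    for (int k = 0; k < (int)tmp.size(); ++k)
        a[l + k] = tmp[k];                  // copy merged result back
}
```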
Quick Sort is also a Divide and Conquer algorithm. It chooses an element of the array as a pivot and partitions the array around it. Partitioning rearranges the array so that the pivot ends up in its correct place, all smaller values are placed before it, and all greater values are placed after it. Partitioning can be done in linear time, and afterwards the same sorting function is applied recursively to the parts on the left and on the right of the pivot.

If the selected pivot does not split the array evenly after partitioning, the time complexity can degrade to \(\mathcal{O}(N^2)\), like insertion sort. To avoid this, the pivot is generally picked randomly.
\[
\begin{align*}
T(N) &= T\left(\tfrac{N}{10}\right) + T\left(9 \cdot \tfrac{N}{10}\right) + \mathcal{O}(N) \\
T(N) &= \mathcal{O}(N \cdot \log N)
\end{align*}
\]
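A sketch with a random pivot and Lomuto-style partitioning in C++ (half-open range \([l, r)\); the helper name is my own):

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// Quick sort with a randomly chosen pivot. Sorts a[l..r).
// Expected O(N log N); the random pivot makes the O(N^2) worst
// case very unlikely on any fixed input.
void quick_sort(std::vector<int>& a, int l, int r) {
    if (r - l <= 1) return;
    std::swap(a[l + std::rand() % (r - l)], a[r - 1]); // random pivot to back
    int pivot = a[r - 1], p = l;
    for (int i = l; i < r - 1; ++i)
        if (a[i] < pivot)
            std::swap(a[i], a[p++]);       // smaller values go left
    std::swap(a[p], a[r - 1]);             // pivot lands in its final place
    quick_sort(a, l, p);                   // left of the pivot
    quick_sort(a, p + 1, r);               // right of the pivot
}
```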
Quick Sort and Merge Sort are comparison-based sorting algorithms and cannot do better than \(\mathcal{O}(N \log N)\) comparisons in the worst case. Radix Sort, however, is not comparison-based and runs in near-linear time, \(\mathcal{O}(N \cdot K)\), where \(K \approx \log(\max(ar))\) is the number of digits of the largest element.
\[
\begin{align*}
T(N) &= \mathcal{O}(N)
\end{align*}
\]
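A sketch of least-significant-digit radix sort in C++, assuming non-negative integers and decimal digits (both assumptions are mine, for illustration):

```cpp
#include <algorithm>
#include <vector>

// LSD radix sort for non-negative integers: a stable counting sort
// on each decimal digit, from least to most significant.
void radix_sort(std::vector<int>& a) {
    if (a.empty()) return;
    int mx = *std::max_element(a.begin(), a.end());
    std::vector<int> out(a.size());
    for (long long exp = 1; mx / exp > 0; exp *= 10) {
        int cnt[10] = {0};
        for (int x : a) ++cnt[(x / exp) % 10];         // count digit values
        for (int d = 1; d < 10; ++d) cnt[d] += cnt[d - 1]; // prefix sums
        for (int i = (int)a.size() - 1; i >= 0; --i)   // backwards = stable
            out[--cnt[(a[i] / exp) % 10]] = a[i];
        a = out;
    }
}
```

Stability of each pass is essential: it preserves the order established by the previous, less significant digits.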
Quickselect is a selection algorithm that finds the \(k^{th}\) smallest element in an unordered list. It is closely related to Quick Sort in its partitioning stage; however, instead of recursing into both sides, it recurses only into the part that contains the \(k^{th}\) smallest element.
Note that this algorithm is fast in practice but, like quicksort, has poor worst-case performance. However, it still performs better on average than algorithms that find the \(k^{th}\) smallest element in \(\mathcal{O}(n)\) worst-case time.
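A sketch of quickselect in C++, reusing the same random-pivot partition as quicksort (half-open range \([l, r)\); here \(k\) is 0-based, an assumption for illustration):

```cpp
#include <algorithm>
#include <cstdlib>
#include <vector>

// Quickselect: partition like quicksort, but recurse only into the
// side containing the k-th smallest element (k is 0-based).
// Expected O(N), worst case O(N^2).
int quickselect(std::vector<int>& a, int l, int r, int k) {
    while (true) {
        if (r - l == 1) return a[l];
        std::swap(a[l + std::rand() % (r - l)], a[r - 1]); // random pivot
        int pivot = a[r - 1], p = l;
        for (int i = l; i < r - 1; ++i)
            if (a[i] < pivot)
                std::swap(a[i], a[p++]);
        std::swap(a[p], a[r - 1]);       // pivot now sits at index p
        if (k == p)     return a[p];
        else if (k < p) r = p;           // answer is left of the pivot
        else            l = p + 1;       // answer is right of the pivot
    }
}
```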
Divide and Conquer is a well-known paradigm that breaks a problem into several parts, solves each part independently, and finally combines the solutions of the subproblems into the overall solution. Because each subproblem is solved recursively, it should be a smaller instance of the original problem, and the problem must have a base case to end the recursion.
Some example algorithms that use the divide and conquer technique include Binary Search, Merge Sort, Quick Sort, and Quickselect, all discussed above.