## How does the Chinese Remainder Theorem work?

How does the Chinese Remainder Theorem work? by Sujith Vijay

Let us start with a totally different problem. Suppose you want to find the polynomial of least degree satisfying $p(1)=8, p(2) = 13$ and $p(3) = 34$.

We write $p(x)=p_1(x) + p_2(x) + p_3(x)$ where

$p_1(1)=8 \quad p_1(2) = 0 \; \; \quad p_1(3) = 0$
$p_2(1)=0 \quad p_2(2) = 13 \quad p_2(3) = 0$
$p_3(1)=0 \quad p_3(2) = 0 \; \; \quad p_3(3) = 34$

These subproblems are easier. Clearly,

$p_1(x)=c_1(x-2)(x-3)$
$p_2(x)=c_2(x-1)(x-3)$
$p_3(x)=c_3(x-1)(x-2)$

Using the data given, we get $c_1=4, c_2=-13, c_3=17$ and $p(x)=8x^2-19x+19$. The technique we used is called Lagrange interpolation.
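The same decomposition can be sketched in Python. This is my own minimal implementation (the name `interpolate` is not from the answer): each data point gets one basis polynomial that hits its value and vanishes at the other points, exactly as with $p_1, p_2, p_3$ above.

```python
def interpolate(points):
    """Lagrange interpolation: return a function summing one basis
    polynomial per data point (y_i at x_i, zero at every other x_j)."""
    def p(x):
        total = 0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if j != i:
                    # Factor is 1 at x = xi and 0 at x = xj.
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

p = interpolate([(1, 8), (2, 13), (3, 34)])
print(p(4))  # 71.0, matching 8*4**2 - 19*4 + 19
```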

The Chinese Remainder Theorem works in a very similar way. Consider the problem you have posted.

$x \equiv 3 \; \rm { mod } \; 5$
$x \equiv 5 \; \rm { mod } \; 7$
$x \equiv 1 \; \rm { mod } \; 9$

We write $x = y_1 + y_2 + y_3$ where

$y_1 \equiv 3 \; {\rm { mod }} \; 5 \quad y_1 \equiv 0 \; {\rm { mod }} \; 7 \quad y_1 \equiv 0 \; {\rm { mod }} \; 9$

$y_2 \equiv 0 \; {\rm { mod }} \; 5 \quad y_2 \equiv 5 \; {\rm { mod }} \; 7 \quad y_2 \equiv 0 \; {\rm { mod }} \; 9$

$y_3 \equiv 0 \; {\rm { mod }} \; 5 \quad y_3 \equiv 0 \; {\rm { mod }} \; 7 \quad y_3 \equiv 1 \; {\rm { mod }} \; 9$

Once again, the subproblems are easier. We have,

$y_1 = 63z_1 \equiv 3 \; \rm { mod } \; 5$
$y_2 = 45z_2 \equiv 5 \; \rm { mod } \; 7$
$y_3 = 35z_3 \equiv 1 \; \rm { mod } \; 9$

Solving these gives $z_1 = 1, z_2 = 4, z_3 = 8$, so that $y_1 = 63, y_2 = 180, y_3 = 280$, and

$x \equiv 63 + 180 + 280 \equiv 208 \; \rm { mod } \; 315$

Thus the smallest positive integer satisfying the given set of congruences is $208$.
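The construction above can be sketched in a few lines of Python (my own sketch; the name `crt` is not from the answer). It assumes pairwise-coprime moduli and uses Python 3.8+'s three-argument `pow` with exponent `-1` for modular inverses.

```python
from math import prod

def crt(residues, moduli):
    """Chinese Remainder Theorem: build x as a sum of pieces y_i,
    each a multiple of every modulus except m_i, hitting r_i mod m_i."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        N = M // m                 # e.g. 63, 45, 35 for moduli 5, 7, 9
        z = r * pow(N, -1, m) % m  # solve N*z ≡ r (mod m)
        x += N * z                 # the piece y_i
    return x % M

print(crt([3, 5, 1], [5, 7, 9]))  # 208
```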


Filed under Life

## What is a permutation?

What is a permutation? by Mike Kayser

Informally: A reordering.

If I have three objects arranged in a line on the table, there are 3!=6 ways to order them in that line. (Three ways to pick which one is on the left, times 2 ways to pick which one is in the middle, times exactly one way to pick the one on the right).

Formally: a bijection from a set to itself.

• A bijection in general is just a map from $X$ to $Y$ where:
(1) every element of $Y$ got mapped to by some element of $X$ (surjective), and
(2) each element of $Y$ doesn't get mapped to by more than one element of $X$ (injective).
• That's the same as saying that every element of $Y$ got mapped to by one and only one element of $X$.
• In a bijection $f$ from $X$ to $X$, you can think of $f(x)$ as meaning roughly "where do I put $x$?" and $f(x)=y$ as meaning "I put $x$ in $y$'s old position."
• The bijective property we mentioned above is equivalent to ensuring that every "old position" is the "new home" for exactly one object in the set.
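The bullet points above can be made concrete with a small sketch (my own, not from the answer): a permutation of $\{0, 1, 2\}$ as a dict, where $f(x)$ says where object $x$ goes.

```python
# A permutation of {0, 1, 2} as a bijection: f[x] = "where x goes".
f = {0: 2, 1: 0, 2: 1}

# Bijectivity: every old position is the new home of exactly one object.
assert sorted(f.values()) == sorted(f.keys())

# Applying it to a line of objects: object at position x moves to f[x].
line = ['a', 'b', 'c']
new_line = [None] * len(line)
for x, y in f.items():
    new_line[y] = line[x]
print(new_line)  # ['b', 'c', 'a']
```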



## What function is Arctic Monkeys’ album cover?

What function is Arctic Monkeys' album cover? by Amey Chaware

This is most certainly an amplitude modulated sine wave. In amplitude modulation, you change the amplitude of a predetermined carrier wave according to the message signal. In this case, both appear to be sine curves.
For the exact equation, you can refer to the wiki link:
https://en.m.wikipedia.org/wiki/Amplitude_modulation
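As a hedged sketch, a generic AM signal can be written $y(t) = (1 + m\cos(2\pi f_m t))\sin(2\pi f_c t)$: a carrier at $f_c$ whose amplitude follows a message at $f_m$ with modulation index $m$. The frequencies below are illustrative, not values measured from the album cover.

```python
import math

def am(t, fc=10.0, fm=1.0, m=0.5):
    """Amplitude modulation: carrier sin(2*pi*fc*t) scaled by a
    slowly varying envelope 1 + m*cos(2*pi*fm*t)."""
    envelope = 1.0 + m * math.cos(2 * math.pi * fm * t)
    return envelope * math.sin(2 * math.pi * fc * t)

# One period of the message, sampled at 1 kHz.
samples = [am(i / 1000.0) for i in range(1000)]
```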


## What algorithms every software engineer should know?

Answer on @Quora by Gabriel Cojocaru to What algorithms every software engineer should know?

http://www.codingeek.org

1. Divide and Conquer – A divide and conquer algorithm works by recursively breaking down a problem into two or more sub-problems of the same (or related) type (divide), until these become simple enough to be solved directly (conquer). The solutions to the sub-problems are then combined to give a solution to the original problem.

This divide and conquer technique is the basis of efficient algorithms for all kinds of problems, such as sorting (e.g., QuickSort, Merge Sort), multiplying large numbers (e.g., Karatsuba), syntactic analysis (e.g., top-down parsers), and computing the discrete Fourier transform (FFTs).
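Merge Sort is the textbook instance of the pattern; here is a minimal Python sketch (illustrative, not from the answer): split the list (divide), sort each half recursively (conquer), then merge the sorted halves (combine).

```python
def merge_sort(a):
    """Divide and conquer sort: recurse on halves, then merge."""
    if len(a) <= 1:
        return a                      # trivially sorted base case
    mid = len(a) // 2
    left = merge_sort(a[:mid])        # divide + conquer
    right = merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # combine: merge step
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```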

2. Dynamic Programming – A method for solving a complex problem by breaking it down into a collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions – ideally, using a memory-based data structure. The next time the same subproblem occurs, instead of recomputing its solution, one simply looks up the previously computed solution, thereby saving computation time at the expense of a (hopefully) modest expenditure in storage space. (Each of the subproblem solutions is indexed in some way, typically based on the values of its input parameters, so as to facilitate its lookup.) The technique of storing solutions to subproblems instead of recomputing them is called "memoization".

Dynamic programming algorithms are used for optimization (for example, finding the shortest path between two points, or the fastest way to multiply many matrices). A dynamic programming algorithm will examine the previously solved subproblems and combine their solutions to give the best solution for the given problem. The alternatives are many, such as using a Greedy Algorithm, which picks the locally optimal choice at each branch in the road. The locally optimal choice may be a poor choice for the overall solution. While a greedy algorithm does not guarantee an optimal solution, it is often faster to compute. Fortunately, some greedy algorithms (such as those for Minimum Spanning Tree) are proven to lead to the optimal solution.
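Memoization is the easiest part of this to demonstrate. A hedged sketch using the classic Fibonacci example (my choice, not the answer's): the naive recursion recomputes the same subproblems exponentially often, while caching each result once makes it linear.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Each subproblem fib(k) is computed once, then looked up from
    the cache on every later occurrence (memoization)."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025 -- instant; the uncached version needs ~2^50 calls
```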

3. Graph Algorithms – Graphs can be used to model many types of relations and processes in physical, biological, social and information systems. Many practical problems can be represented by graphs.

In computer science, graphs are used to represent networks of communication, data organization, computational devices, the flow of computation, etc. For instance, the link structure of a website can be represented by a directed graph, in which the vertices represent web pages and directed edges represent links from one page to another. A similar approach can be taken to problems in travel, biology, computer chip design, and many other fields. The development of algorithms to handle graphs is therefore of major interest in computer science. The transformation of graphs is often formalized and represented by graph rewrite systems. Complementary to graph transformation systems focusing on rule-based in-memory manipulation of graphs are graph databases geared towards transaction-safe, persistent storing and querying of graph-structured data.
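The website example above can be made concrete (a sketch of my own; the page names are invented): a directed graph as an adjacency list, with breadth-first search finding every page reachable by following links.

```python
from collections import deque

# A tiny "website": vertices are pages, directed edges are links.
links = {'home': ['about', 'blog'], 'about': ['home'],
         'blog': ['post1'], 'post1': [], 'orphan': ['home']}

def reachable(graph, start):
    """Breadth-first search: all vertices reachable from start."""
    seen, queue = {start}, deque([start])
    while queue:
        page = queue.popleft()
        for nxt in graph[page]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(reachable(links, 'home')))  # ['about', 'blog', 'home', 'post1']
```

Note that 'orphan' links *to* the home page but is never linked *from* anywhere, so it is unreachable; direction matters in a directed graph.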

4. Greedy Algorithm – An algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage with the hope of finding a global optimum. In many problems a greedy strategy does not produce an optimal solution in general, but a greedy heuristic may nonetheless yield locally optimal solutions that approximate a global optimum in a reasonable time.

For example, a greedy strategy for the Travelling Salesman Problem (which is of high computational complexity) is the following heuristic: "At each stage visit an unvisited city nearest to the current city". This heuristic need not find the best solution, but it terminates in a reasonable number of steps; finding an optimal solution typically requires unreasonably many steps. In mathematical optimization, greedy algorithms solve combinatorial problems having the properties of matroids.
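The nearest-city heuristic quoted above fits in a few lines (my own sketch, with a made-up distance matrix):

```python
def nearest_neighbour_tour(dist, start=0):
    """Greedy TSP heuristic: from the current city, always visit the
    nearest unvisited city. dist is a matrix of pairwise distances."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        here = tour[-1]
        tour.append(min(unvisited, key=lambda c: dist[here][c]))
        unvisited.remove(tour[-1])
    return tour

dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
print(nearest_neighbour_tour(dist))  # [0, 1, 3, 2]
```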

5. Backtracking – A general algorithm for finding all (or some) solutions to some computational problems, notably constraint satisfaction problems, that incrementally builds candidates to the solutions, and abandons each partial candidate c ("backtracks") as soon as it determines that c cannot possibly be completed to a valid solution.

The classic textbook example of the use of backtracking is the eight queens puzzle, which asks for all arrangements of eight chess queens on a standard chessboard so that no queen attacks any other. In the common backtracking approach, the partial candidates are arrangements of k queens in the first k rows of the board, all in different rows and columns. Any partial solution that contains two mutually attacking queens can be abandoned.

Backtracking can be applied only for problems which admit the concept of a "partial candidate solution" and a relatively quick test of whether it can possibly be completed to a valid solution. It is useless, for example, for locating a given value in an unordered table. When it is applicable, however, backtracking is often much faster than brute force enumeration of all complete candidates, since it can eliminate a large number of candidates with a single test.
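The eight queens example described above can be sketched directly (my own implementation): place one queen per row, and abandon any column choice that shares a column or diagonal with an earlier queen.

```python
def n_queens(n, row=0, cols=(), diag1=(), diag2=()):
    """Count queen arrangements by extending one row at a time;
    any partial candidate with an attack is abandoned (backtrack)."""
    if row == n:
        return 1                      # a complete, valid arrangement
    count = 0
    for col in range(n):
        # Prune: same column, or same / or \ diagonal as a placed queen.
        if col not in cols and row - col not in diag1 and row + col not in diag2:
            count += n_queens(n, row + 1, cols + (col,),
                              diag1 + (row - col,), diag2 + (row + col,))
    return count

print(n_queens(8))  # 92 solutions to the eight queens puzzle
```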


## How can I learn to think like a functional programmer?

Answer on @Quora by @m00nlight223 to How can I learn to think like a functional programmer?

When I first learned functional programming, it seemed weird to program without variables and operations to update them. But in fact, 90% of the time you can program in a functional style without changing state, and most of the time the functional style is much better.

But as you program in a functional language more and more, you will find it more natural to program in this style (with higher-order functions like map, filter, foldl, foldr, etc.).

To me, one key step in moving from the imperative style to the functional style is to abstract the operations that handle a data structure, and to use higher-order functions like map or fold/reduce on that data structure, as in the following example:

(defn add-to-trie [trie x]
  (assoc-in trie x (merge (get-in trie x) {:val x :terminal true})))

(defn build-trie [coll trie]
  "Build a trie over the values in the specified seq coll."
  (reduce add-to-trie trie coll))

The above Clojure code uses a collection to store some words and uses reduce to build a trie. This is perfectly functional style, with no mutation of variables.
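For comparison, here is a rough Python translation of the same idea (my own sketch, not from the answer), using functools.reduce in place of Clojure's reduce:

```python
from functools import reduce

def add_to_trie(trie, word):
    """Walk/create nested dicts for each character, marking word ends."""
    node = trie
    for ch in word:
        node = node.setdefault(ch, {})
    node['terminal'] = True
    return trie

def build_trie(words):
    """Fold add_to_trie over the collection, starting from an empty trie."""
    return reduce(add_to_trie, words, {})

print(build_trie(['cat', 'car']))
# {'c': {'a': {'t': {'terminal': True}, 'r': {'terminal': True}}}}
```

Unlike the Clojure version, which builds persistent maps, this sketch mutates the dict in place; the fold structure, however, is identical.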

You may also read some good books on functional programming (I highly recommend SICP, which has a free edition online and teaching videos). Or you can do some challenge programming in a functional language; 4clojure – Welcome! is a perfect place for Clojure programmers. It asks you to implement some programming tasks (such as group-by from the core language of Clojure), and as you finish more and more of the exercises, you will get better and better at thinking in a functional style.


## How many people actually use JavaScript as a functional programming language?

Answer on @Quora by Vladislav Zorov to How many people actually use JavaScript as a functional programming language?

The big exception to the above are people writing reactive JavaScript, using libraries like RxJS or Bacon. They are writing functional JavaScript, and no amount of nitpicking can deny that (except if you bust out the "static type system" argument, which IMHO isn't a very good argument).

P.S. "Imperative" is not the opposite of "functional"; "imperative" is the opposite of "declarative", and "functional" is the opposite of "procedural". JS can be, with the above in mind, in the "imperative-functional" quadrant, like Lisp, and unlike Haskell (which is "declarative-functional"). But it's normally "imperative-procedural".