## What is a permutation?


Answer by Mike Kayser:

Informally: A reordering.

If I have three objects arranged in a line on the table, there are 3! = 6 ways to order them in that line. (Three ways to pick which one is on the left, times two ways to pick which one is in the middle, times exactly one way to pick the one on the right.)

Formally: a bijection from a set to itself.

• A bijection in general is just a map from *X* to *Y* where:
(1) every element of *Y* gets mapped to by some element of *X* (surjective), and
(2) no element of *Y* gets mapped to by more than one element of *X* (injective).
• That's the same as saying that every element of *Y* gets mapped to by one and only one element of *X*.
• In a bijection *f* from *X* to *X*, you can think of *f(x)* as meaning roughly "where do I put *x*?" and *f(x) = y* as meaning "I put *x* in *y*'s old position."
• The bijective property we mentioned above is equivalent to ensuring that every "old position" is the "new home" for exactly one object in the set.
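That "every old position is the new home of exactly one object" check can be made concrete in a few lines of Python (the `is_permutation` helper is illustrative, not from the answer; over a finite set, injective + surjective is equivalent to "the images are exactly the original elements"):

```
from itertools import permutations

def is_permutation(f, xs):
    """Check that the mapping f is a bijection from the finite set xs to itself."""
    images = [f[x] for x in xs]
    # Surjective and injective together: each element of xs appears
    # exactly once among the images.
    return sorted(images) == sorted(xs)

# "I put a in c's old position, b in a's, c in b's."
f = {'a': 'c', 'b': 'a', 'c': 'b'}
print(is_permutation(f, ['a', 'b', 'c']))  # True

g = {'a': 'c', 'b': 'c', 'c': 'b'}  # two objects claim c's old position
print(is_permutation(g, ['a', 'b', 'c']))  # False

# And the informal count from above: 3! = 6 orderings of three objects.
print(len(list(permutations('abc'))))  # 6
```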


Filed under Life

## What function is Arctic Monkeys’ album cover?


Answer by Amey Chaware:

This is most certainly an amplitude-modulated sine wave. In amplitude modulation, you change the amplitude of a predetermined carrier wave according to the message signal. In this case, both appear to be sine curves.
For the exact equation, you can refer to the Wikipedia article:
https://en.m.wikipedia.org/wiki/Amplitude_modulation
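A minimal numeric sketch of that idea (Python; the carrier and message frequencies here are made-up values, not measured from the album cover): the slow message sine scales the amplitude of the fast carrier sine.

```
import math

def am_sample(t, fc=50.0, fm=2.0, m=0.5):
    """Amplitude-modulated sine: carrier at fc Hz, message at fm Hz,
    modulation index m < 1 so the envelope never collapses to zero."""
    envelope = 1.0 + m * math.sin(2 * math.pi * fm * t)
    return envelope * math.sin(2 * math.pi * fc * t)

# One second of samples at 1 kHz.
samples = [am_sample(i / 1000.0) for i in range(1000)]

# The waveform stays inside the slowly varying envelope |1 + m*sin(...)| <= 1 + m.
print(max(abs(s) for s in samples) <= 1.5)  # True
```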


Filed under Life

## What algorithms every software engineer should know?


Answer by Gabriel Cojocaru:

http://www.codingeek.org

1. Divide and Conquer – A divide and conquer algorithm works by recursively breaking down a problem into two or more sub-problems of the same (or related) type (divide), until these become simple enough to be solved directly (conquer). The solutions to the sub-problems are then combined to give a solution to the original problem.

This divide-and-conquer technique is the basis of efficient algorithms for all kinds of problems, such as sorting (e.g., QuickSort, Merge Sort), multiplying large numbers (e.g., Karatsuba), syntactic analysis (e.g., top-down parsers), and computing the discrete Fourier transform (FFT).
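The divide/conquer/combine pattern can be sketched with the Merge Sort mentioned above (Python):

```
def merge_sort(xs):
    if len(xs) <= 1:          # simple enough: solve directly (conquer)
        return xs
    mid = len(xs) // 2        # divide into two sub-problems
    left = merge_sort(xs[:mid])
    right = merge_sort(xs[mid:])
    merged, i, j = [], 0, 0   # combine the two sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```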

2. Dynamic Programming – a method for solving a complex problem by breaking it down into a collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions – ideally using a memory-based data structure. The next time the same subproblem occurs, instead of recomputing its solution, one simply looks up the previously computed solution, thereby saving computation time at the expense of a (hopefully) modest expenditure in storage space. (Each of the subproblem solutions is indexed in some way, typically based on the values of its input parameters, so as to facilitate lookup.) The technique of storing solutions to subproblems instead of recomputing them is called "memoization".

Dynamic programming algorithms are used for optimization (for example, finding the shortest path between two points, or the fastest way to multiply many matrices). A dynamic programming algorithm will examine the previously solved subproblems and will combine their solutions to give the best solution for the given problem. The alternatives are many, such as using a Greedy Algorithm, which picks the locally optimal choice at each branch in the road. The locally optimal choice may be a poor choice for the overall solution. While a greedy algorithm does not guarantee an optimal solution, it is often faster to calculate. Fortunately, some greedy algorithms (such as Minimum Spanning Tree) are proven to lead to the optimal solution.
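Memoization as described above can be illustrated with the classic Fibonacci example (Python; this example is a standard illustration, not from the answer): each subproblem's solution is stored, indexed by its input parameter, and looked up instead of recomputed.

```
def fib(n, memo=None):
    """Fibonacci with memoization: naive recursion is exponential,
    but caching each subproblem's solution makes it linear."""
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]   # look up instead of recomputing
    memo[n] = n if n < 2 else fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(50))  # 12586269025 (instant; naive recursion would take ages)
```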

3. Graph Algorithms – Graphs can be used to model many types of relations and processes in physical, biological, social and information systems. Many practical problems can be represented by graphs.

In computer science, graphs are used to represent networks of communication, data organization, computational devices, the flow of computation, etc. For instance, the link structure of a website can be represented by a directed graph, in which the vertices represent web pages and directed edges represent links from one page to another. A similar approach can be taken to problems in travel, biology, computer chip design, and many other fields. The development of algorithms to handle graphs is therefore of major interest in computer science. The transformation of graphs is often formalized and represented by graph rewrite systems. Complementary to graph transformation systems focusing on rule-based in-memory manipulation of graphs are graph databases geared towards transaction-safe, persistent storing and querying of graph-structured data.
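The website-link example above can be modeled as a directed graph with an adjacency list, and a question like "which pages can I reach from here?" answered by a standard traversal (Python; the page names are made up for illustration):

```
from collections import deque

# Directed graph: each page maps to the pages it links to.
links = {
    'home':  ['about', 'blog'],
    'about': ['home'],
    'blog':  ['post1', 'post2'],
    'post1': ['home'],
    'post2': [],
}

def reachable(graph, start):
    """Breadth-first search: every page reachable by following links."""
    seen, queue = {start}, deque([start])
    while queue:
        page = queue.popleft()
        for target in graph[page]:
            if target not in seen:
                seen.add(target)
                queue.append(target)
    return seen

print(sorted(reachable(links, 'blog')))  # ['about', 'blog', 'home', 'post1', 'post2']
```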

4. Greedy Algorithm – an algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage with the hope of finding a global optimum. In many problems a greedy strategy does not in general produce an optimal solution, but a greedy heuristic may nonetheless yield locally optimal solutions that approximate a global optimum in a reasonable time.

For example, a greedy strategy for the Travelling Salesman Problem (which has high computational complexity) is the following heuristic: "At each stage visit an unvisited city nearest to the current city." This heuristic need not find the best solution, but it terminates in a reasonable number of steps; finding an optimal solution typically requires unreasonably many steps. In mathematical optimization, greedy algorithms solve combinatorial problems having the properties of matroids.
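The nearest-neighbor heuristic quoted above fits in a few lines (Python; the city coordinates are made up for illustration):

```
import math

def nearest_neighbor_tour(cities, start):
    """Greedy TSP heuristic: at each stage visit the unvisited city
    nearest to the current city. Fast, but not guaranteed optimal."""
    unvisited = set(cities) - {start}
    tour, current = [start], start
    while unvisited:
        # Locally optimal choice: the closest remaining city.
        nxt = min(unvisited,
                  key=lambda c: math.dist(cities[current], cities[c]))
        unvisited.remove(nxt)
        tour.append(nxt)
        current = nxt
    return tour

cities = {'A': (0, 0), 'B': (1, 0), 'C': (2, 0), 'D': (0, 5)}
print(nearest_neighbor_tour(cities, 'A'))  # ['A', 'B', 'C', 'D']
```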

5. Backtracking – a general algorithm for finding all (or some) solutions to some computational problems, notably constraint satisfaction problems, that incrementally builds candidates to the solutions and abandons each partial candidate c ("backtracks") as soon as it determines that c cannot possibly be completed to a valid solution.

The classic textbook example of the use of backtracking is the eight queens puzzle, which asks for all arrangements of eight chess queens on a standard chessboard so that no queen attacks any other. In the common backtracking approach, the partial candidates are arrangements of k queens in the first k rows of the board, all in different rows and columns. Any partial solution that contains two mutually attacking queens can be abandoned.

Backtracking can be applied only for problems which admit the concept of a "partial candidate solution" and a relatively quick test of whether it can possibly be completed to a valid solution. It is useless, for example, for locating a given value in an unordered table. When it is applicable, however, backtracking is often much faster than brute force enumeration of all complete candidates, since it can eliminate a large number of candidates with a single test.
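The eight-queens approach described above makes a compact sketch (Python): partial candidates are queens placed row by row, and any column choice that attacks an earlier queen is abandoned immediately, pruning the search tree.

```
def queens(n, partial=()):
    """Count arrangements of n non-attacking queens, one per row.
    partial[r] is the column of the queen already placed in row r."""
    row = len(partial)
    if row == n:                       # complete candidate: a valid solution
        return 1
    count = 0
    for col in range(n):
        # Abandon any column that shares a column or a diagonal
        # with an earlier queen ("backtrack").
        if all(col != c and abs(col - c) != row - r
               for r, c in enumerate(partial)):
            count += queens(n, partial + (col,))
    return count

print(queens(8))  # 92, the well-known count of eight-queens solutions
```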


Filed under Life

## How can I learn to think like a functional programmer?


Answer by Yushi Wang:

When I first learned functional programming, it seemed weird to program without variables and operations that update them. But in fact, 90% of the time you can program in a functional style without any change of state, and most of the time the functional style is much better.

And as you program in a functional language more and more, it becomes more natural to program in this style (with higher-order functions like map, filter, foldl, foldr, etc.).

To me, one key step in moving from imperative style to functional style is to abstract the operations over a data structure and use higher-order functions like map or fold/reduce to handle it, as in the following example:

```
(defn add-to-trie [trie x]
  (assoc-in trie x (merge (get-in trie x) {:val x :terminal true})))

(defn build-trie
  "Build a trie over the values in the specified seq coll."
  [coll trie]
  (reduce add-to-trie trie coll))
```

The above Clojure code uses a collection of words and `reduce` to build a trie. This is perfectly functional style, with no mutation of variables.
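For comparison, the same build-by-reduce idea reads almost identically in Python (a simplified nested-dict trie written for this post, not the Clojure answer's code; unlike Clojure's persistent maps, the Python dict is mutated internally):

```
from functools import reduce

def add_to_trie(trie, word):
    """Insert a word: walk/create one nested dict per character,
    then mark the final node as terminal."""
    node = trie
    for ch in word:
        node = node.setdefault(ch, {})
    node['terminal'] = True
    return trie

words = ['cat', 'car']
trie = reduce(add_to_trie, words, {})
print(trie)  # {'c': {'a': {'t': {'terminal': True}, 'r': {'terminal': True}}}}
```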

You may also read some good books on functional programming (I highly recommend SICP, which also has a free edition online and lecture videos). Or do some challenge programming in a functional language; for this, 4clojure – Welcome! is a perfect place for Clojure programmers: it asks you to implement programming tasks (such as `group-by` from Clojure's core language), and as you finish more and more of the exercises, you will get better and better at thinking in a functional style.


Filed under Life

## How many people actually use JavaScript as a functional programming language?


As a functional programming language? I suppose almost nobody. Sure, a lot of people pass functions around, but this alone doesn't make it functional programming. If you want FP, you have to at least get a library like Underscore, seeing how JavaScript doesn't actually support FP out-of-the-box (it only has things like map, filter, etc., but these are present in any modern language, even Java).

The big exception to the above are people writing reactive JavaScript, using libraries like RxJS or Bacon. They are writing functional JavaScript, and no amount of nitpicking can deny that (except if you bust out the "static type system" argument, which IMHO isn't a very good argument).

P.S. "Imperative" is not the opposite of "functional"; "imperative" is the opposite of "declarative", and "functional" is the opposite of "procedural". JS can be, with the above in mind, in the "imperative-functional" quadrant, like Lisp, and unlike Haskell (which is "declarative-functional"). But it's normally "imperative-procedural".


Filed under Life

## How do I burn a motherboard by writing a C program?


Answer by Manishi Barosee:

Easy –

1. Write a program that makes the motherboard think that you are updating its BIOS. This might require some reverse engineering of the official BIOS update software from the manufacturer.
2. When the program runs, don't update the BIOS. Erase the BIOS chip instead.
3. You will now have an unbootable motherboard.
4. Enjoy l33t haxxor status, make Defcon talk.


Filed under Life

## What is the most interesting line of code you have written?


I don't normally write interesting lines; if a line is interesting, it should be refactored 😀

But, specially for this answer:

```
(define (flatten n)
  (cons (node-value n) (apply append (map flatten (node-children n)))))
```

Given:

```
#lang racket

(struct node (value children))

(define tree
  (node
   'root
   (list
    (node 'child_1 (list (node 'child_1_1 '())))
    (node 'child_2 '())
    (node 'child_3 (list (node 'child_3_1 '()))))))
```

…it produces:

```> (flatten tree)
'(root child_1 child_1_1 child_2 child_3 child_3_1)
```

A list version of the tree, from a pre-order traversal 🙂

But, even here, this `(apply append (map ... ))` is an instance of something more general; namely `flatmap`:

```
(define (flatmap proc lst)
  (apply append (map proc lst)))

(define (flatten n)
  (cons (node-value n) (flatmap flatten (node-children n))))
```

Some background: Lisp uses singly-linked lists – `cons` constructs a pair from two values; for a list, the "head" is the value you want to store, while the "tail" is the rest of the list. To make the tail, the function just calls itself recursively (but makes sure all children end up as a single list, not a list-of-lists; `apply` works in the same way as in JavaScript, i.e. it allows you to call a function that expects multiple arguments with a list instead).

Just for fun, here's an iterator-based version in Python 3:

```
def flatten(node):
    yield node.value
    for child in node.children:
        yield from flatten(child)
```

You can take elements one by one:

```>>> g = flatten(tree)
>>> next(g)
'root'
>>> next(g)
'child_1'
>>> next(g)
'child_1_1'
>>> next(g)
'child_2'
```

…or build a list (the `list` constructor will eat anything iterable):

```>>> list(flatten(tree))
['root', 'child_1', 'child_1_1', 'child_2', 'child_3', 'child_3_1']
```

If you want this streaming behavior in Racket, you can either change the `cons` to `stream-cons` and `append` to `stream-append`, or change `#lang racket` to `#lang lazy`, and everything starts streaming by default (because in languages with lazy evaluation, nothing is evaluated until you need it).
