Monthly Archives: March 2016

What algorithms should every software engineer know?



Answer by Gabriel Cojocaru:

http://www.codingeek.org

1. Divide and Conquer – A divide and conquer algorithm works by recursively breaking down a problem into two or more sub-problems of the same (or related) type (divide), until these become simple enough to be solved directly (conquer). The solutions to the sub-problems are then combined to give a solution to the original problem.

This divide and conquer technique is the basis of efficient algorithms for all kinds of problems, such as sorting (e.g., QuickSort, Merge Sort), multiplying large numbers (e.g., Karatsuba), syntactic analysis (e.g., top-down parsers), and computing the discrete Fourier transform (FFT).
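
For instance, Merge Sort in Python makes the divide/conquer/combine steps explicit (a minimal sketch for illustration, not a production sort):

def merge_sort(xs):
    # Divide: split the list in half until the pieces are trivially sorted.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]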

2. Dynamic Programming – Dynamic programming is a method for solving a complex problem by breaking it down into a collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions – ideally in a memory-based data structure. The next time the same subproblem occurs, instead of recomputing its solution, one simply looks up the previously computed solution, thereby saving computation time at the expense of a (hopefully) modest expenditure in storage space. (Each of the subproblem solutions is indexed in some way, typically based on the values of its input parameters, so as to facilitate its lookup.) The technique of storing solutions to subproblems instead of recomputing them is called "memoization".
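
A minimal sketch of memoization in Python, using the standard library's functools.lru_cache to store subproblem solutions (the classic Fibonacci example):

from functools import lru_cache

@lru_cache(maxsize=None)  # memoization: each subproblem is computed once, then cached
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)  # overlapping subproblems hit the cache

print(fib(90))  # instant; the uncached version would take astronomically long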

Dynamic programming algorithms are used for optimization (for example, finding the shortest path between two points, or the fastest way to multiply many matrices). A dynamic programming algorithm will examine the previously solved subproblems and combine their solutions to give the best solution for the given problem. There are many alternatives, such as using a greedy algorithm, which picks the locally optimal choice at each fork in the road. The locally optimal choice may be a poor choice for the overall solution. While a greedy algorithm does not guarantee an optimal solution, it is often faster to compute. Fortunately, some greedy algorithms (such as those for finding a Minimum Spanning Tree) are proven to lead to the optimal solution.
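
To see why the locally optimal choice can be globally poor, here is a small Python sketch with made-up coin denominations {1, 3, 4}: for amount 6, the greedy strategy takes 4 + 1 + 1 (three coins), while dynamic programming finds the optimum 3 + 3 (two coins):

def greedy_coins(amount, coins=(4, 3, 1)):
    # Always grab the largest coin that still fits: locally optimal only.
    used = []
    for c in coins:
        while amount >= c:
            amount -= c
            used.append(c)
    return used

def dp_coins(amount, coins=(1, 3, 4)):
    # best[a] = fewest coins that make amount a, built from solved subproblems.
    best = [0] + [None] * amount
    for a in range(1, amount + 1):
        options = [best[a - c] for c in coins if c <= a and best[a - c] is not None]
        best[a] = min(options) + 1 if options else None
    return best[amount]

print(greedy_coins(6))  # [4, 1, 1] -> 3 coins
print(dp_coins(6))      # 2 (using 3 + 3)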

3. Graph Algorithms – Graphs can be used to model many types of relations and processes in physical, biological, social, and information systems. Many practical problems can be represented by graphs.

In computer science, graphs are used to represent networks of communication, data organization, computational devices, the flow of computation, etc. For instance, the link structure of a website can be represented by a directed graph, in which the vertices represent web pages and directed edges represent links from one page to another. A similar approach can be taken to problems in travel, biology, computer chip design, and many other fields. The development of algorithms to handle graphs is therefore of major interest in computer science. The transformation of graphs is often formalized and represented by graph rewrite systems. Complementary to graph transformation systems focusing on rule-based in-memory manipulation of graphs are graph databases geared towards transaction-safe, persistent storing and querying of graph-structured data.
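
For instance, the website link structure above can be stored as an adjacency list and explored with breadth-first search (a minimal sketch; the page names are made up):

from collections import deque

# Adjacency list: each page maps to the pages it links to (hypothetical site).
links = {
    "home":  ["about", "blog"],
    "about": ["home"],
    "blog":  ["home", "post1"],
    "post1": ["blog"],
}

def reachable(graph, start):
    # Breadth-first search: visit pages level by level, starting from `start`.
    seen, queue = {start}, deque([start])
    while queue:
        page = queue.popleft()
        for nxt in graph.get(page, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(reachable(links, "about"))  # {'about', 'home', 'blog', 'post1'} (set order varies)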

4. Greedy Algorithm – A greedy algorithm follows the problem-solving heuristic of making the locally optimal choice at each stage with the hope of finding a global optimum. In many problems a greedy strategy does not produce an optimal solution, but a greedy heuristic may nonetheless yield locally optimal solutions that approximate a global optimum in a reasonable time.

For example, a greedy strategy for the Travelling Salesman Problem (which has high computational complexity) is the following heuristic: "At each stage, visit the nearest unvisited city." This heuristic need not find the best solution, but it terminates in a reasonable number of steps; finding an optimal solution typically requires unreasonably many steps. In mathematical optimization, greedy algorithms solve combinatorial problems having the properties of matroids.
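
Here is a sketch of that nearest-city heuristic in Python, on made-up coordinates (illustrative only; it finds a tour, not necessarily the best one):

from math import dist  # Python 3.8+

# Hypothetical city coordinates.
cities = {"A": (0, 0), "B": (1, 5), "C": (5, 2), "D": (6, 6)}

def nearest_neighbor_tour(cities, start="A"):
    # Greedy TSP heuristic: always hop to the closest unvisited city.
    tour = [start]
    unvisited = set(cities) - {start}
    while unvisited:
        here = cities[tour[-1]]
        nxt = min(unvisited, key=lambda c: dist(here, cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

print(nearest_neighbor_tour(cities))  # ['A', 'B', 'C', 'D']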

5. Backtracking – Backtracking is a general algorithm for finding all (or some) solutions to certain computational problems, notably constraint satisfaction problems. It incrementally builds candidates for the solutions, and abandons each partial candidate c ("backtracks") as soon as it determines that c cannot possibly be completed to a valid solution.

The classic textbook example of the use of backtracking is the eight queens puzzle, which asks for all arrangements of eight chess queens on a standard chessboard so that no queen attacks any other. In the common backtracking approach, the partial candidates are arrangements of k queens in the first k rows of the board, all in different rows and columns. Any partial solution that contains two mutually attacking queens can be abandoned.

Backtracking can be applied only to problems that admit the concept of a "partial candidate solution" and a relatively quick test of whether the candidate can possibly be completed to a valid solution. It is useless, for example, for locating a given value in an unordered table. When it is applicable, however, backtracking is often much faster than brute-force enumeration of all complete candidates, since it can eliminate a large number of candidates with a single test.
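
A compact Python sketch of backtracking for the eight queens puzzle described above, where a partial candidate is the tuple of queen columns placed so far, one row at a time:

def queens(n, partial=()):
    # partial[i] is the column of the queen in row i; rows are filled in order.
    if len(partial) == n:
        return [partial]
    solutions = []
    for col in range(n):
        # Prune: abandon any candidate where the new queen attacks an old one.
        if all(col != c and abs(col - c) != len(partial) - r
               for r, c in enumerate(partial)):
            solutions.extend(queens(n, partial + (col,)))
    return solutions

print(len(queens(8)))  # 92 solutions to the eight queens puzzle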


How can I learn to think like a functional programmer?



Answer by Yushi Wang:

When I first learned functional programming, it seemed weird to program without variables and operations that update them. But in fact, 90% of the time you can program in a functional style without any change of state, and most of the time the functional style is much better.

And as you program in a functional language more and more, you will find it more natural to write in this style (with higher-order functions like map, filter, foldl, foldr, etc.).

To me, one key experience in moving from the imperative style to the functional style is learning to abstract the operations on a data structure and to use higher-order functions like map or fold/reduce to process it, as in the following example:

;; Insert the word x into the trie, marking the node where it ends as terminal.
(defn add-to-trie [trie x]
  (assoc-in trie x (merge (get-in trie x) {:val x :terminal true})))

(defn build-trie
  "Build a trie over the values in the specified seq coll."
  [coll trie]
  (reduce add-to-trie trie coll))

The above Clojure code uses a collection of words and reduce to build a trie. This is perfectly functional style, with no change of variable state.
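
For comparison, here is a rough Python 3 analogue of the same idea (a sketch that uses nested dicts with copy-on-write in place of Clojure's persistent maps; the "terminal" key is made up):

from functools import reduce

def add_to_trie(trie, word):
    # Return a new trie with `word` inserted; the input trie is never mutated.
    if not word:
        return {**trie, "terminal": True}
    head, rest = word[0], word[1:]
    return {**trie, head: add_to_trie(trie.get(head, {}), rest)}

def build_trie(words):
    # Fold the words into an empty trie, just as the Clojure reduce does.
    return reduce(add_to_trie, words, {})

print(build_trie(["cat", "car"]))
# {'c': {'a': {'t': {'terminal': True}, 'r': {'terminal': True}}}}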

You may also read some good books on functional programming (I highly recommend SICP, which also has a free edition online and teaching videos). Or you can do some challenge programming in a functional language; for this, 4clojure – Welcome! is a perfect place for Clojure programmers. It asks you to implement some programming tasks (such as group-by from the core language of Clojure), and after you finish more and more of the exercises, you will get better and better at thinking in a functional style.


How many people actually use JavaScript as a functional programming language?



Answer by Vladislav Zorov:

As a functional programming language? I suppose almost nobody. Sure, a lot of people pass functions around, but this alone doesn't make it functional programming. If you want FP, you have to at least get a library like Underscore, seeing how JavaScript doesn't actually support FP out-of-the-box (it only has things like map, filter, etc., but these are present in any modern language, even Java).

The big exception to the above are people writing reactive JavaScript, using libraries like RxJS or Bacon. They are writing functional JavaScript, and no amount of nitpicking can deny that (except if you bust out the "static type system" argument, which IMHO isn't a very good argument).

P.S. "Imperative" is not the opposite of "functional"; "imperative" is the opposite of "declarative", and "functional" is the opposite of "procedural". JS can be, with the above in mind, in the "imperative-functional" quadrant, like Lisp, and unlike Haskell (which is "declarative-functional"). But it's normally "imperative-procedural".


How do I burn a motherboard by writing a C program?



Answer by Manishi Barosee:

Easy –

  1. Write a program that makes the motherboard think that you are updating its BIOS. This might require some reverse engineering of the official BIOS update software from the manufacturer.
  2. When the program runs, don't update the BIOS. Erase the BIOS chip instead.
  3. You will now have an unbootable motherboard.
  4. Enjoy l33t haxxor status, make Defcon talk.


What is the most interesting line of code you have written?



Answer by Vladislav Zorov:

I don't normally write interesting lines; if a line is interesting, it should be refactored 😀

But, specially for this answer:

(define (flatten n)
  (cons (node-value n) (apply append (map flatten (node-children n)))))

Given:

#lang racket

(struct node (value children))

(define tree
  (node
   'root
   (list
    (node 'child_1 (list (node 'child_1_1 '())))
    (node 'child_2 '())
    (node 'child_3 (list (node 'child_3_1 '()))))))

…it produces:

> (flatten tree)
'(root child_1 child_1_1 child_2 child_3 child_3_1)

A list version of the tree, from a pre-order traversal 🙂

But, even here, this (apply append (map ... )) is an instance of something more general; namely flatmap:

(define (flatmap proc lst)
  (apply append (map proc lst)))

(define (flatten n)
  (cons (node-value n) (flatmap flatten (node-children n))))

Some background: Lisp uses singly-linked lists – cons constructs a pair from two values; for a list, the "head" is the value you want to store, while the "tail" is the rest of the list. To make the tail, the function just calls itself recursively (but makes sure all children end up as a single list, not a list-of-lists; apply works in the same way as in JavaScript, i.e. it allows you to call a function that expects multiple arguments with a list instead).

Just for fun, here's an iterator-based version in Python 3:

def flatten(node):
    yield node.value
    for child in node.children:
        yield from flatten(child)
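
To run it, you need a node type and a tree; here is a minimal sketch mirroring the Racket struct above, using a namedtuple:

from collections import namedtuple

# Hypothetical counterpart of (struct node (value children)).
node = namedtuple("node", ["value", "children"])

tree = node("root", [
    node("child_1", [node("child_1_1", [])]),
    node("child_2", []),
    node("child_3", [node("child_3_1", [])]),
])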

You can take elements one by one:

>>> g = flatten(tree)
>>> next(g)
'root'
>>> next(g)
'child_1'
>>> next(g)
'child_1_1'
>>> next(g)
'child_2'

…or build a list (the list constructor will eat anything iterable):

>>> list(flatten(tree))
['root', 'child_1', 'child_1_1', 'child_2', 'child_3', 'child_3_1']

If you want this streaming behavior in Racket, you can either change the cons to stream-cons and append to stream-append, or change #lang racket to #lang lazy, and everything starts streaming by default (because in languages with lazy evaluation, nothing is evaluated until you need it).


How does a car engine work?



Answer by Geoffrey Widdison:

Contained explosions.
 
The engine block on a car is just a big chunk of solid metal with big cylindrical holes drilled into it. Each hole (or 'cylinder') has a piston in it, which is just a metal plug that can slide smoothly up and down the cylinder. The inside of the cylinder and the outside of the piston are smooth and oiled so the piston can move with very little friction.
 
First, the piston is drawn back all the way, drawing in air and making the space in the cylinder as big as possible; then a valve closes, trapping the air inside. Then the fuel injector sprays a little bit of gasoline into the cylinder, which quickly vaporizes. The piston pushes back in, compressing the mixture of fuel and air, then the spark plug creates a spark in the cylinder.
 
The spark ignites the mixture, causing it to basically explode, pushing the piston back with great force. This is called the "power stroke" and it's where all the power driving the engine comes from. The combustion of the gasoline inside the cylinder is what provides the energy that moves the car (hence 'internal combustion engine'). After that, another valve opens, and the piston moves back in, pushing the hot gases out to the exhaust pipe. Then the valve that lets air in opens, and the whole process starts again.
 
Each of the pistons is connected to a shaft, called the "crankshaft". When each one is pushed out, the force turns the crankshaft, causing it to rotate. That rotational motion turns gears and shafts which spin the wheels. Since a car engine has multiple cylinders (four is most common, but engines can have more or fewer), they're timed so that, while one is firing, the others are in different parts of the cycle. This whole cycle happens many times every second, and so can turn the wheels at high speeds.
 
The practical design of a working vehicle is, of course, more complex, but that's the essence of how every gas-powered vehicle operates.


Does the use of regular expressions go beyond simple text matching?



Answer by Phil Jones:

They're called "regular expressions" because they're based on regular grammars, which are one of several types of grammar-as-mathematical-object discovered by linguists and computer scientists over the last 50 years.
Regular grammars are useful in that you can very easily describe matches for a whole range of patterns in any kind of sequence of tokens. Normally we use them to match text, but I've seen them used, for example, to match sequences of simple strokes (up, down, left, right) into handwritten characters, etc.
They're very powerful for such a concise (and fairly easy to understand) pattern-matching language.
However they aren't powerful enough to match everything we're interested in. In particular, they aren't powerful enough to pattern match things that have a recursive structure. For example, you can't use them to parse code written in a programming language that has nested blocks. For this, you need a different kind of parser based on a different type of grammar.
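
A small Python sketch of both sides of that boundary: a regular expression comfortably matches a flat token sequence (here, the stroke list from earlier), but recognizing balanced parentheses to arbitrary depth needs a counter or a recursive parser, not a regex:

import re

# A toy regular grammar: a comma-separated list of lowercase words.
flat = re.compile(r"^[a-z]+(?:,\s*[a-z]+)*$")
print(bool(flat.match("up, down, left, right")))  # True

def balanced(s):
    # Balanced parentheses form a non-regular language: a regex cannot
    # count nesting depth, but this simple counter (or a recursive parser) can.
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

print(balanced("((a)(b))"))  # True
print(balanced("((a)"))      # False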
