Monthly Archives: January 2016

Why is i^2 = -1?


Answer by Adam Catto:

I like to think geometrically:

Suppose we have two vectors in the complex plane, A and B.   The horizontal axis is the real line and the vertical axis is the imaginary line. The product AB of two vectors in the complex plane has two major properties: length and orientation (angle).  The length of AB is the product of the lengths of A and B, and the angle between the new vector and the positive horizontal axis is the sum of the angles of A and B. Note that i is a unit vector on the imaginary line.

Let A = i and B = i. Thus, AB = [math]i^2[/math]. Since i is of unit length, its norm is just 1. [math]1\times 1 = 1[/math], so [math]i^2[/math] is of unit length.

Now, what's the angle between [math]i^2[/math] and the positive real axis? Recall that [math]\theta(AB) = \theta(A) + \theta(B)[/math], where [math]\theta[/math] denotes a vector's angle with the positive real axis.  Since i lies on the imaginary line, and the imaginary line is orthogonal to the real axis, the angle between i and the positive real axis is [math]\frac{\pi}{2}[/math].

Since A = B = i, [math]\theta(AB) = \theta(A) + \theta(B) = \theta(i) + \theta(i) = 2\theta(i) = 2\cdot\frac{\pi}{2} = \pi[/math], so [math]i^2[/math] is oriented along the negative real axis. Since it is of unit length and points in the negative real direction, [math]i^2 = -1[/math]. I'll post a picture with a plot that makes this visualization easier when I get the chance.
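The two rules used above, lengths multiply and angles add, can also be checked numerically. Here is a small sketch (helper names invented here, not from the answer) that multiplies complex numbers in polar form:

```javascript
// Multiply two complex numbers (given as [re, im]) via polar form:
// lengths multiply, angles add. Helper names are illustrative only.
function toPolar([re, im]) {
  return { r: Math.hypot(re, im), theta: Math.atan2(im, re) };
}

function polarMultiply(a, b) {
  const pa = toPolar(a), pb = toPolar(b);
  const r = pa.r * pb.r;             // |AB| = |A| * |B|
  const theta = pa.theta + pb.theta; // angle(AB) = angle(A) + angle(B)
  return [r * Math.cos(theta), r * Math.sin(theta)];
}

const i = [0, 1]; // the unit vector on the imaginary axis
console.log(polarMultiply(i, i)); // ≈ [-1, 0], i.e. i^2 = -1
```

Up to floating-point rounding, the result lands on the negative real axis at unit distance, exactly as the geometric argument predicts.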

EDIT: here's the plot (not preserved here: i at angle π/2 and i² at angle π on the unit circle):

Why is i^2 = -1?

Leave a comment

Filed under Life

What is an intuitive explanation for rank of a matrix?


Answer by John Salvatore:

A matrix is a collection of vectors. You can intuitively imagine a vector as a point in space. If you want, you can draw a line from the origin to that point, and put a little arrow on the tip.

Normally, we imagine vectors in 2 or 3 dimensions:

Above, you see the vector [7, 3, 5]. Intuitively, you can imagine an arrow that goes from the origin to the point that's 7 units in the x direction, then 3 units in the y direction, and 5 units in the z direction.

Now imagine just two of these 3D vectors. You can imagine adding these vectors by taking the tip of one, and using it as the origin of the other. That would give you some new vector.

Above, we added the original vector [7, 3, 5] (now purple) and [-2, -1, 5] (blue), and got [5, 2, 10] (red).

You may notice that the new vector lies in the plane of the original two. That's no coincidence. If we scale the original vectors by some constant, we can shorten or lengthen them. We can even scale them by a negative amount and point them in the opposite direction. Either way, they still lie along the same two directions, so when you add them, the sum still lands on some point of that same plane.

Above, I scaled the original vector [7, 3, 5] by 1/2, resulting in [3.5, 1.5, 2.5]. Though the new vector after addition is a different point [1.5, 0.5, 7.5], it still lies in the same plane.

Now, imagine you took all possible combinations of the two vectors. The set of all possible points you could reach by scaling and adding these two vectors (including zero of each) is the entire plane that passes through the two vectors and the origin.

When we do this, we are imagining the space that the vectors "span". In this case, it's a plane. Imagine the two vectors we started with were actually pointed in the same direction.

[7, 3, 5] and [-3.5, -1.5, -2.5] are two distinct vectors, but they point in the same direction. No amount of combining these two would ever escape the line they both lie on. Thus, the space "spanned" by the two vectors is a single line, rather than a plane.

In each case, we had two vectors, but the number of independent directions made the difference between spanning a plane vs a line. The "rank" of a matrix is the dimension of that space spanned by the vectors it contains.

If we put the two vectors [7, 3, 5]  and [-2, -1, 5] into a matrix:
[math]\begin{bmatrix}
7 & -2\\
3 & -1\\
5 & 5
\end{bmatrix}[/math]

the rank of the matrix is the dimension of the space that you get by taking all combinations of the vectors. We've already done that, and saw that the space spanned by [7, 3, 5] and [-2, -1, 5] was a plane. In this case, the rank is 2 (because a plane is 2 dimensional).

Let's put the two vectors [7, 3, 5] and [-3.5, -1.5, -2.5] into a matrix:

[math]\begin{bmatrix}
7 & -3.5\\
3 & -1.5\\
5 & -2.5
\end{bmatrix}[/math]

As we already saw, these two vectors span a line. The rank of this matrix would be the same idea: it's the dimension of the space you get by taking all combinations of the vectors. In this case, that space is just a line, so the rank is 1 (because a line is 1 dimensional).

The same idea applies in higher dimensions. It just gets harder to visualize intuitively. However, even in arbitrarily high dimensions, the rank of the matrix is the dimension of the space spanned by the vectors that make up the matrix.
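That definition can be turned into a sketch of a rank computation (not from the answer): Gaussian elimination counts the independent directions, i.e. the pivots, with a small tolerance so floating-point noise doesn't count as a pivot.

```javascript
// Compute the rank of a matrix (array of rows) by Gaussian elimination.
function rank(matrix, eps = 1e-10) {
  const m = matrix.map(row => row.slice()); // work on a copy
  const rows = m.length, cols = rows ? m[0].length : 0;
  let r = 0; // number of pivots found so far
  for (let c = 0; c < cols && r < rows; c++) {
    // find the largest pivot candidate in column c, at or below row r
    let pivot = r;
    for (let i = r + 1; i < rows; i++) {
      if (Math.abs(m[i][c]) > Math.abs(m[pivot][c])) pivot = i;
    }
    if (Math.abs(m[pivot][c]) < eps) continue; // no pivot in this column
    [m[r], m[pivot]] = [m[pivot], m[r]];
    // eliminate everything below the pivot
    for (let i = r + 1; i < rows; i++) {
      const f = m[i][c] / m[r][c];
      for (let j = c; j < cols; j++) m[i][j] -= f * m[r][j];
    }
    r++;
  }
  return r;
}

// The two matrices from the answer: a plane (rank 2) and a line (rank 1).
console.log(rank([[7, -2], [3, -1], [5, 5]]));        // 2
console.log(rank([[7, -3.5], [3, -1.5], [5, -2.5]])); // 1
```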

What is an intuitive explanation for rank of a matrix?


What is the importance of determinants in linear algebra?


Answer by Sam Lichtenstein:

Short version: Yes, determinants are useful and important. No, they are not necessary for defining the basic notions of linear algebra, such as linear independence, basis, and eigenvector, or the concept of an invertible linear transformation (or matrix). They are also not necessary for proving most properties of these notions. But yes, I think a good course on linear algebra should still introduce them early on.

Long version: Determinants are a misunderstood beast. It's only natural: they are computed via an extremely ugly (to my eye) formula, or a recursive algorithm (expansion by minors), both of which involve annoying signs that can be difficult to remember. But as Disney taught us, a beast can have a heart of gold and talking cutlery.

First, though, I emphasize that the determinant is not strictly necessary to get started in linear algebra. For a thorough explanation of this, see Axler's article Down with Determinants (http://www.axler.net/DwD.pdf), and his textbook Linear algebra done right. This explains the pedagogical decision by some authors to postpone treating determinants until later chapters of their texts: the complicated formula and the mechanics of working with determinants are simply a distraction from one's initial goals in linear algebra (learning about vectors, linear transformations, bases, etc).

Yes, the later chapters are still crucial.

Fundamentally, determinants are about volume. That is, they generalize and improve the notion of the volume of the parallelepiped (the higher-dimensional version of a parallelogram) swept out by a collection of vectors in space.  This is not the place to give a treatise on exterior algebra, the modern language via which mathematicians explain this property of determinants, so I refer you to the eponymous Wikipedia article. The subtle point is that while we are used to thinking of vector spaces as n-dimensional Euclidean space (R^n), with volume defined in terms of the usual notion of distance (the standard inner product on R^n), in fact vector spaces are a more abstract and general notion. They can be endowed with alternate notions of distance (other inner products), and can even be defined over fields other than the real numbers (such as the rational numbers, the complex numbers, or a finite field Z/p). In such contexts, volume can still be defined, but not "canonically": you have to make a choice (of an element in the 1-dimensional top exterior power of your vector space). You can think of this as fixing a scale. The useful property of determinants is that while the scale you fix is arbitrary, the volume-changing effect of a linear transformation of your vector space is independent of that choice: it is exactly the determinant of said linear transformation. This is why the answer to your question, "Are there any real-life applications of determinants?" is undoubtedly yes. They arise all the time as normalization factors, because it is often a good idea to preserve the scale as you perform operations on vectors (such as data points in R^n). [This can be important, for example, to preserve the efficacy or improve the efficiency of numerical algorithms.]

Now what about the applications you mention, such as testing linear independence of a set of n vectors in an n-dimensional vector space (check if the determinant is nonzero), or inverting a matrix (via Cramer's rule, which involves a determinant), or — to add another — finding eigenvalues of a matrix (roots of a characteristic polynomial, computed as a determinant)?  These are all reasonable things to do, but in practice I believe they are not very efficient methods for accomplishing the stated goals. They become slow and unwieldy for large matrices, for example, and other algorithms are preferred. Nonetheless, I firmly believe that everyone should know how to perform these tasks, and get comfortable doing them by hand for 2 by 2 and 3 by 3 matrices, if only to better understand the concepts involved. If you cannot work out the eigenvalues of a 2 by 2 matrix by hand, then you probably don't understand the concept, and for a "general" 2 by 2 matrix a good way to do it quickly is to compute the characteristic polynomial using the "ad-bc" formula for a 2 by 2 determinant.
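The 2-by-2 hand computation described above can be sketched in a few lines (function name invented here): form the characteristic polynomial from the trace and the "ad − bc" determinant, then apply the quadratic formula.

```javascript
// Eigenvalues of a 2x2 matrix [[a, b], [c, d]]:
// characteristic polynomial is x^2 - (a + d) x + (ad - bc).
// This sketch handles only the real-eigenvalue case.
function eigenvalues2x2([[a, b], [c, d]]) {
  const trace = a + d;
  const det = a * d - b * c;            // the "ad - bc" determinant
  const disc = trace * trace - 4 * det; // discriminant of the char. poly
  if (disc < 0) throw new Error('complex eigenvalues: not handled here');
  const root = Math.sqrt(disc);
  return [(trace - root) / 2, (trace + root) / 2];
}

console.log(eigenvalues2x2([[2, 1], [1, 2]])); // [1, 3]
```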

You ask whether determinants have other uses in linear algebra. Of course they do. I would say, in fact, that they are ubiquitous in linear algebra. This ubiquity makes it hard for me to pin down specific examples, or to point to nice motivating examples for your students. But here is a high-brow application in abstract mathematics. Given a polynomial [math]a_n x^n + \cdots + a_1 x + a_0[/math], how can we tell if it has repeated roots without actually factoring it or otherwise finding the roots? In fact, there is an invariant called the discriminant which gives the answer. A certain polynomial function [math]\Delta(a_0,\ldots, a_n)[/math] can be computed, and this vanishes if and only if the original polynomial has a repeated root.  Where does the discriminant come from? It is essentially the determinant of a rather complicated matrix cooked up from the numbers [math]a_0,\ldots, a_n[/math].
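In the simplest case, a quadratic [math]ax^2 + bx + c[/math], the discriminant is the familiar [math]b^2 - 4ac[/math]; a tiny sketch of the repeated-root test:

```javascript
// For a quadratic a x^2 + b x + c, the discriminant b^2 - 4ac
// vanishes exactly when the polynomial has a repeated root.
function quadraticDiscriminant(a, b, c) {
  return b * b - 4 * a * c;
}

console.log(quadraticDiscriminant(1, -2, 1)); // 0: x^2 - 2x + 1 = (x - 1)^2
console.log(quadraticDiscriminant(1, -3, 2)); // 1: x^2 - 3x + 2 has roots 1, 2
```

For higher degrees the formula gets far messier (it is the determinant of the matrix mentioned above), but the principle is the same: vanishing discriminant means a repeated root.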

A more down-to-earth application that might be motivating for some students is the Jacobian determinant that enters, for example, into change-of-variables formulas when studying integrals in multivariable calculus. If you ever find yourself needing to work with spherical coordinates and wonder why an integral with respect to [math]dx\,dy\,dz[/math] becomes an integral with respect to [math]\rho^2 \sin\phi \,d\rho\, d\phi\, d\theta[/math], the answer is: a certain determinant is equal to [math]\rho^2 \sin \phi[/math]. Of course, depending on the university, a course in linear algebra might precede multivariable calculus, which would make this "motivating example" less useful.
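For the curious, that determinant can be checked directly from the coordinate change [math]x = \rho\sin\phi\cos\theta,\ y = \rho\sin\phi\sin\theta,\ z = \rho\cos\phi[/math]:

```latex
\det\frac{\partial(x,y,z)}{\partial(\rho,\phi,\theta)} =
\det\begin{pmatrix}
\sin\phi\cos\theta & \rho\cos\phi\cos\theta & -\rho\sin\phi\sin\theta\\
\sin\phi\sin\theta & \rho\cos\phi\sin\theta & \rho\sin\phi\cos\theta\\
\cos\phi           & -\rho\sin\phi          & 0
\end{pmatrix}
= \rho^2\sin\phi\,(\cos^2\phi + \sin^2\phi) = \rho^2\sin\phi
```

(Expand along the bottom row; each 2-by-2 minor collapses via [math]\cos^2\theta + \sin^2\theta = 1[/math].)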

Another remark to make is that for many theoretical purposes, it suffices simply to know that there is a "nice" formula for the determinant of a matrix (namely, a certain complicated polynomial function of the matrix entries), but the precise formula itself is irrelevant. For example, many mathematicians use constantly the fact that the set of polynomials of a given degree which have repeated roots, is "cut out" from the set of all polynomials of that degree, by a condition on the coefficients which is itself polynomial; indeed, this is the discriminant I mentioned above. But they rarely care about the formula for the discriminant, merely using the fact that one exists. E.g., simply knowing there is such a formula tells you that polynomials with repeated roots are very rare, and in some sense pathological, but that if you are unlucky enough to get such a guy, you can get a nice polynomial with distinct roots simply by wiggling all the coefficients a bit (adding 0.00001 to any coefficient will usually work). [I can expand on this if you post another question on the subject.]

What is the importance of determinants in linear algebra?


What are some of the best travel locations that most people have never heard of?


Answer by Mimi Copi:

The Lofoten Islands in Norway are an incredible place for people who love nature.

I went there 8 years ago while visiting one of my friends, who was studying in Scandinavia. We organized a road trip to the Lofotens. The area is little known, and we were almost the only tourists there.

You can rent small and really cosy fishermen's houses called rorbu on the fjords, like the one below, and enjoy the calm and the amazing view.
More info about how to rent a rorbu at the link below:
Fishermen's cabin in Trøndelag, Nordland, Finnmark, Troms and Møre og Romsdal – Official Travel Guide to Norway – visitnorway.com

And if you're lucky like we were, you will have the opportunity to see the well-known polar lights, which will make your night magical.

Enjoy!

Mariam

What are some of the best travel locations that most people have never heard of?


What is the Pi value and how is it derived?


Answer by David Joyce:

You ask if π can be expressed simply as a rational number or in surd form.

By surd form, you mean an expression in terms of roots along with the arithmetic operations of addition, subtraction, multiplication, and division, starting with whole numbers.  An example of a surd form is

                                           [math]\displaystyle \frac{10-\sqrt3}{\sqrt[3]7+\sqrt{5+\sqrt {11}}}.[/math] 

Numbers that can be so expressed are all algebraic numbers, that is, roots of polynomial equations with integer coefficients. Numbers that are not algebraic are called transcendental numbers.  Rational numbers are included among the algebraic numbers.

Lindemann (1852–1939) proved in 1882 that π is a transcendental number.  Therefore, π cannot be expressed as a surd.

This is an important result since it answers the ancient question of squaring the circle.  That question asks: given a circle, can you construct a square with the same area as that circle using the Euclidean tools of straightedge and compass.  The ancient Greek geometers conjectured it couldn't be done, but they had no proof.  Any construction would show that π could be expressed in surd form.  Lindemann's 1882 theorem, therefore, finally proved this ancient conjecture that a circle can't be squared using Euclidean tools.

[Photo: Ferdinand von Lindemann]

What is the Pi value and how is it derived?


What’s your opinion about WebSockets and using it in production over XHR and more folklore strategies?


Answer by Yad Faeq:

To answer your question and avoid any bias or wrong suggestions, I will walk through each of the technologies related to the topic, including XHR, and explain what I think of them.

First, let's start with what we have come through since the early days of the Web; otherwise my comments on WebSockets are going to sound like I'm grilling it:

  • The Web started out static; around 1995 the majority of the public got online. CGI was perhaps used to generate some content.
  • Then around 2005, Ajax came along. Remember those little cool things on Yahoo! (company)'s front page? The point was to change the content without the user refreshing the page: content getting exchanged in the background.
  • Then somewhere around 2006 came more improvement of Ajax + HTTP:
    AJAX / XMLHttpRequest: a request runs in the background while the user keeps interacting with the page.

  • There were constant improvements to these technologies from the different communities working on them (at that time, Oracle (company)'s folks were on top of it). Sometime around 2006, Long Polling / COMET was introduced. (The original diagrams illustrating it, including the cluster mess from when PHP was in the arena, are not preserved here.)

  • It's an illusion of the actual feature rather than the feature itself, the feature being instant updates (real-time): technically, the connection is just held open a little longer to wait for the next event.
  • HTTP Streaming came along afterward, though it was not so common at the time. The response body was never closed after sending an event, so the server could keep pushing data down the same connection. This was cool, since it brought the application level into the game. (The original diagrams contrasting the approaches are not preserved here.) As the Ajax Patterns entry on HTTP Streaming puts it:

    • The state of many web applications is inherently volatile. Changes can come from numerous sources, such as other users, external news and data, results of complex calculations, triggers based on the current time and date.
    • HTTP requests can only emerge from the client. When a state change occurs, there's no way for a server to open connections to interested clients.
    src: HTTP Streaming – Ajax Patterns
  • Server-Sent Events was introduced to start over clean and bring the netizens of the world together. It is an open technology in its own right, rather than a specific framework you adopt to make things happen. (The original illustrations of SSE at work are not preserved here.)

Now, most of the above approaches had issues, such as timeouts due to heavier requests, routers or networks that wouldn't let partial responses be sent, or data buffering over the network bandwidth. There were (and are) hundreds of hacks to fix and patch these issues and achieve live updates, yet none of them is ideal.
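The long-polling / COMET pattern described above, hold the request open until there is something to say, then immediately re-poll, can be simulated in a few lines. This is a toy in-memory "server" (names invented here), not a real HTTP stack:

```javascript
// Toy long-polling simulation: the "server" holds each poll open until an
// event arrives, then responds; the client immediately polls again.
function makeServer() {
  let waiting = null;
  return {
    longPoll() {
      return new Promise(resolve => { waiting = resolve; });
    },
    emit(event) {
      if (waiting) { const w = waiting; waiting = null; w(event); }
    }
  };
}

async function demo() {
  const server = makeServer();
  const received = [];
  const client = (async () => {
    for (let i = 0; i < 3; i++) {
      // the request is "held open" until the server has something to say
      received.push(await server.longPoll());
    }
  })();
  for (const event of ['a', 'b', 'c']) {
    await new Promise(r => setTimeout(r, 10)); // events arrive later
    server.emit(event);
  }
  await client;
  return received;
}

demo().then(r => console.log(r)); // ['a', 'b', 'c']
```

The client sees events "instantly", but each one still costs a full request/response cycle, which is exactly the overhead WebSockets were designed to avoid.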

  • Around 2010 and over the following years, WebSockets emerged from different projects and communities (the protocol was standardized as RFC 6455 in December 2011).
     WebSockets:

    RFC 6455 – The WebSocket Protocol
    Abstract
       The WebSocket Protocol enables two-way communication between a client running untrusted code in a controlled environment to a remote host that has opted-in to communications from that code. The security model used for this is the origin-based security model commonly used by web browsers. The protocol consists of an opening handshake followed by basic message framing, layered over TCP. The goal of this technology is to provide a mechanism for browser-based applications that need two-way communication with servers that does not rely on opening multiple HTTP connections (e.g., using XMLHttpRequest or <iframe>s and long polling).

    WebSockets were better than COMET and earlier methods with regard to upstream traffic, but still can't do full streaming in the HTTP sense. With WebSockets, TCP support was all in, fully controlled. This was possible earlier with applets, though those weren't widely adopted, so WebSocket became the standard every client asks to speak when calling web services.

    A TCP connection is opened, an HTTP handshake upgrades it, and from there both sides can talk freely; that was the jam at this point.

    Now for WebSockets versus their earlier cousin, HTTP streaming. Here is an awesome comparison from Alex Recarey on Stack Overflow:

    • WebSockets and SSE (Server-Sent Events) are both capable of pushing data to browsers; however, they are not competing technologies.
    • WebSocket connections can both send data to the browser and receive data from the browser. A good example of an application that could use WebSockets is a chat application.
    • SSE connections can only push data to the browser. Online stock quotes, or Twitter's updating timeline or feed, are good examples of applications that could benefit from SSE.
    • In practice, since everything that can be done with SSE can also be done with WebSockets, WebSockets are getting a lot more attention and love, and many more browsers support WebSockets than SSE.

    However, WebSockets can be overkill for some types of application, and the backend could be easier to implement with a protocol such as SSE.

    I don't want to copy the rest, but read the answer here:

    Source: WebSockets vs. Server-Sent events/EventSource

I can say for sure that the different attempts at Ajax push approaches laid the foundation for WebSockets with HTML5. Now, I don't want to go deeper into what has been better and what hasn't, since there is a lot of tension (bias), at least to my understanding, between supporting one technology or another. One major bottleneck is that a few key players among startups and companies might bash the others; sometimes the joke becomes real, and the technology actually runs out of support, dying slowly and gracefully between the server and the client.

Here is what has been good practice for me:

  • With REST-based web apps there is already too much to deal with as the codebase grows. Bringing in another layer of complexity can mess things up if the team is not that big, the budget is low, or the project is going to have a major reshape.
    1. In one scenario we kept some of our in-house long-polling solutions alive.
    2. We slowly moved to WebSocket, knowing it was not going to be the most heavily loaded side of the service.
    3. We used alternative solutions built on things like "The Bayeux Protocol".
      • One famous implementation of it is called Faye, authored by James Coglan.

  • Perhaps looking into new solutions at the application layer would also be a good idea (improving web apps to overcome the connection issues).

In case this is too much to deal with, there are hosted services that offer quite a large number of requests for free to test with. (The original list of startup offerings is not preserved here.)

For bigger scale and control over the lower layers of the app (and note, this is a big "or"):
Google Cloud: Architecture: Real Time Stream Processing – Internet of Things

More: Real-time Gaming with Node.js + WebSocket on Google Cloud Platform

Just don't get stuck in the middle of building the product only to realize the technology you chose doesn't fit at all. The best way to look at it, as I mentioned before, is to ask questions like:

  • $$: what is the budget around the project?
  • Human capacity: how much can the team get done?
  • Smooth transitions: how do we build web apps that are loosely coupled and can be changed over time?
  • What are the community and support around the technology like?
  • What do the folks at Google Products and Services, Facebook, or nowadays Amazon Web Services think about it?


I skipped a lot in the timeline to reach the WebSockets topic; during that time there were other things on offer (still are), such as WSDL / SOAP (Simple Object Access Protocol).

On Quora I answered another question on a similar topic, which might be relevant here to explain the other pieces not mentioned in detail:
Yad's answer to What are the technical challenges of building a realtime messaging platform?

Harold Carr (the architect behind the legendary SOAP) gave an awesome talk back in 2013 around all of this; I would highly recommend it if you are interested in learning more:

https://www.youtube.com/watch?v=B-ElrhYxPQU


Feel free to suggest any corrections, because I've most probably missed a few things, if not a bunch, here.

What's your opinion about WebSockets and using it in production over XHR and more folklore strategies?


How do you loop through a complex JSON tree of objects and arrays in JavaScript?


Answer by Steve Schafer:

What you want is a tree traversal routine. Because of the quirks of JavaScript, you have to treat the three main kinds of JavaScript "things" (objects, arrays, primitive values) differently. Here's a simple traversal function that accepts any JavaScript object tree and traverses it, printing out an indented view of the tree:

function traverse(x, level) {
  if (isArray(x)) {
    traverseArray(x, level);
  } else if ((typeof x === 'object') && (x !== null)) {
    traverseObject(x, level);
  } else {
    console.log(level + x);
  }
}

function isArray(o) {
  return Object.prototype.toString.call(o) === '[object Array]';
}

function traverseArray(arr, level) {
  console.log(level + "<array>");
  arr.forEach(function(x) {
    traverse(x, level + "  ");
  });
}

function traverseObject(obj, level) {
  console.log(level + "<object>");
  for (var key in obj) {
    if (obj.hasOwnProperty(key)) {
      console.log(level + "  " + key + ":");
      traverse(obj[key], level + "    ");
    }
  }
}

This is an example of depth-first traversal (there are other kinds). To use, just replace the

console.log()

calls with "real" code.
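For a sense of the output, here is a self-contained variant of the same routine (names invented here; `Array.isArray` replaces the `toString` check, which any ES5+ runtime supports) that returns the lines instead of printing them, making it easy to test or post-process:

```javascript
// Self-contained variant of the traversal above: same depth-first logic,
// but it returns the indented lines as an array instead of printing them.
function treeLines(x, level = '') {
  if (Array.isArray(x)) {
    return [level + '<array>'].concat(
      ...x.map(item => treeLines(item, level + '  ')));
  }
  if (typeof x === 'object' && x !== null) {
    const lines = [level + '<object>'];
    for (const key of Object.keys(x)) {
      lines.push(level + '  ' + key + ':');
      lines.push(...treeLines(x[key], level + '    '));
    }
    return lines;
  }
  return [level + x]; // primitive value
}

const sample = { name: 'demo', tags: ['a', 'b'], nested: { n: 1 } };
console.log(treeLines(sample).join('\n'));
```

Each level of nesting adds indentation, so the printed shape mirrors the structure of the tree.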

How do you loop through a complex JSON tree of objects and arrays in JavaScript?
