Answer by Sam Lichtenstein:
Short version: Yes, determinants are useful and important. No, they are not necessary for defining the basic notions of linear algebra, such as linear independence, basis, and eigenvector, or the concept of an invertible linear transformation (or matrix). They are also not necessary for proving most properties of these notions. But yes, I think a good course on linear algebra should still introduce them early on.
Long version: Determinants are a misunderstood beast. It's only natural: they are computed via an extremely ugly (to my eye) formula, or a recursive algorithm (expansion by minors), both of which involve annoying signs that can be difficult to remember. But as Disney taught us, a beast can have a heart of gold and talking cutlery.
First, though, I emphasize that the determinant is not strictly necessary to get started in linear algebra. For a thorough explanation of this, see Axler's article Down with Determinants (http://www.axler.net/DwD.pdf), and his textbook Linear algebra done right. This explains the pedagogical decision by some authors to postpone treating determinants until later chapters of their texts: the complicated formula and the mechanics of working with determinants are simply a distraction from one's initial goals in linear algebra (learning about vectors, linear transformations, bases, etc).
Yes, those later chapters are still crucial.
Fundamentally, determinants are about volume. That is, they generalize and improve the notion of the volume of the parallelepiped (the higher-dimensional version of a parallelogram) swept out by a collection of vectors in space. This is not the place to give a treatise on exterior algebra, the modern language via which mathematicians explain this property of determinants, so I refer you to the eponymous Wikipedia article.

The subtle point is that while we are used to thinking of vector spaces as n-dimensional Euclidean space (R^n), with volume defined in terms of the usual notion of distance (the standard inner product on R^n), vector spaces are in fact a more abstract and general notion. They can be endowed with other notions of distance (other inner products), and can even be defined over fields other than the real numbers (such as the rational numbers, the complex numbers, or a finite field Z/p). In such contexts, volume can still be defined, but not "canonically": you have to make a choice (of an element in the 1-dimensional top exterior power of your vector space). You can think of this as fixing a scale. The useful property of determinants is that while the scale you fix is arbitrary, the volume-changing effect of a linear transformation of your vector space is independent of that choice: it is exactly the determinant of said linear transformation.

This is why the answer to your question, "Are there any real-life applications of determinants?" is undoubtedly yes. They arise all the time as normalization factors, because it is often a good idea to preserve the scale as you perform operations on vectors (such as data points in R^n). [This can be important, for example, to preserve the efficacy or improve the efficiency of numerical algorithms.]
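To make the volume interpretation concrete, here is a minimal numerical sketch of my own (using NumPy; nothing in it is specific to the discussion above). The determinant of the matrix whose rows are three vectors in R^3 equals the signed volume of the parallelepiped they span, and applying a linear transformation rescales that volume by the transformation's determinant:

```python
import numpy as np

# Three vectors in R^3 spanning a parallelepiped.
a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 2.0, 0.0])
c = np.array([0.0, 1.0, 3.0])

# Classical scalar-triple-product formula: volume = |a . (b x c)|.
vol_triple = abs(np.dot(a, np.cross(b, c)))

# The determinant of the matrix with rows a, b, c gives the same (signed) volume.
M = np.array([a, b, c])
vol_det = abs(np.linalg.det(M))

# A linear transformation T rescales every volume by |det T|,
# regardless of which parallelepiped you measure.
T = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
vol_after = abs(np.linalg.det(M @ T.T))  # rows are T @ a, T @ b, T @ c

print(vol_triple, vol_det, vol_after)  # 6, 6, and 12 (up to rounding)
```

Here T doubles lengths along the first axis (det T = 2), so the volume doubles from 6 to 12, independently of the particular vectors chosen.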
Now what about the applications you mention, such as testing linear independence of a set of n vectors in an n-dimensional vector space (check if the determinant is nonzero), or inverting a matrix (via Cramer's rule, which involves a determinant), or — to add another — finding eigenvalues of a matrix (roots of a characteristic polynomial, computed as a determinant)? These are all reasonable things to do, but in practice I believe they are not very efficient methods for accomplishing the stated goals. They become slow and unwieldy for large matrices, for example, and other algorithms are preferred. Nonetheless, I firmly believe that everyone should know how to perform these tasks, and get comfortable doing them by hand for 2 by 2 and 3 by 3 matrices, if only to better understand the concepts involved. If you cannot work out the eigenvalues of a 2 by 2 matrix by hand, then you probably don't understand the concept, and for a "general" 2 by 2 matrix a good way to do it quickly is to compute the characteristic polynomial using the "ad-bc" formula for a 2 by 2 determinant.
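For instance, here is what the "ad-bc" recipe for a 2 by 2 matrix looks like in code (a small sketch of my own; the helper name eigenvalues_2x2 is just for illustration, and it assumes the eigenvalues are real):

```python
import math

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the characteristic polynomial
    lambda^2 - (a + d)*lambda + (a*d - b*c), solved by the quadratic formula.
    Assumes a non-negative discriminant (real eigenvalues)."""
    tr = a + d            # trace
    det = a * d - b * c   # the "ad - bc" determinant
    disc = tr * tr - 4 * det
    root = math.sqrt(disc)
    return (tr - root) / 2, (tr + root) / 2

print(eigenvalues_2x2(2, 1, 1, 2))  # (1.0, 3.0)
```

This is exactly the by-hand computation: write down the characteristic polynomial using the trace and the determinant, then solve the quadratic.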
You ask whether determinants have other uses in linear algebra. Of course they do. I would say, in fact, that they are ubiquitous in linear algebra. This ubiquity makes it hard for me to pin down specific examples, or to point to nice motivating examples for your students. But here is a high-brow application in abstract mathematics. Given a polynomial [math]a_n x^n + \cdots + a_1 x + a_0[/math], how can we tell if it has repeated roots without actually factoring it or otherwise finding the roots? In fact, there is an invariant called the discriminant which gives the answer. A certain polynomial function [math]\Delta(a_0,\ldots, a_n)[/math] can be computed, and this vanishes if and only if the original polynomial has a repeated root. Where does the discriminant come from? It is essentially the determinant of a rather complicated matrix cooked up from the numbers [math]a_0,\ldots, a_n[/math].
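In the quadratic case the "rather complicated matrix" is small enough to write down explicitly: it is the Sylvester matrix of [math]p[/math] and its derivative [math]p'[/math], and its determinant recovers the familiar [math]b^2 - 4ac[/math]. A sketch of my own in plain Python (the helper names are just for illustration):

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def discriminant_quadratic(a, b, c):
    """Discriminant of a*x^2 + b*x + c, computed (up to normalization) as the
    determinant of the Sylvester matrix of p = a*x^2 + b*x + c and p' = 2*a*x + b.
    For integer inputs the result equals b^2 - 4*a*c exactly."""
    sylvester = [
        [a,     b, c],  # coefficients of p
        [2 * a, b, 0],  # coefficients of p', shifted
        [0, 2 * a, b],
    ]
    # The determinant works out to -a * (b^2 - 4ac); divide out the factor -a.
    return -det3(sylvester) // a

print(discriminant_quadratic(1, 2, 1))   # 0 -> repeated root, (x + 1)^2
print(discriminant_quadratic(1, 0, -1))  # 4 -> distinct roots x = 1, -1
```

The discriminant vanishes precisely when the polynomial shares a root with its derivative, i.e. has a repeated root, which is what the Sylvester determinant (the resultant) detects.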
A more down-to-earth application that might be motivating for some students is the Jacobian determinant that enters, for example, into change-of-variables formulas when studying integrals in multivariable calculus. If you ever find yourself needing to work with spherical coordinates and wonder why an integral with respect to [math]d x d y d z[/math] becomes an integral with respect to [math]\rho^2 \sin \phi d\rho d\phi d\theta[/math], the answer is: a certain determinant is equal to [math]\rho^2 \sin \phi[/math]. Of course, depending on the university, a course in linear algebra might precede multivariable calculus, which would make this "motivating example" less useful.
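If SymPy is available, one can verify that determinant symbolically (a sketch of my own, using the physics convention in which [math]\phi[/math] is the polar angle measured from the z-axis):

```python
import sympy as sp

rho, phi, theta = sp.symbols('rho phi theta', positive=True)

# Spherical coordinates: x, y, z as functions of (rho, phi, theta).
x = rho * sp.sin(phi) * sp.cos(theta)
y = rho * sp.sin(phi) * sp.sin(theta)
z = rho * sp.cos(phi)

# Jacobian matrix of the map (rho, phi, theta) -> (x, y, z), then its determinant.
J = sp.Matrix([x, y, z]).jacobian([rho, phi, theta])
detJ = sp.simplify(J.det())  # equals rho**2 * sin(phi)
print(detJ)
```

The 3 by 3 determinant collapses, after trigonometric simplification, to exactly the [math]\rho^2 \sin \phi[/math] factor in the integral.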
Another remark to make is that for many theoretical purposes, it suffices simply to know that there is a "nice" formula for the determinant of a matrix (namely, a certain complicated polynomial function of the matrix entries), but the precise formula itself is irrelevant. For example, many mathematicians constantly use the fact that the set of polynomials of a given degree which have repeated roots is "cut out" from the set of all polynomials of that degree by a condition on the coefficients which is itself polynomial; indeed, this condition is the vanishing of the discriminant I mentioned above. But they rarely care about the formula for the discriminant, merely using the fact that one exists. E.g., simply knowing there is such a formula tells you that polynomials with repeated roots are very rare, and in some sense pathological, but that if you are unlucky enough to get such a guy, you can get a nice polynomial with distinct roots simply by wiggling all the coefficients a bit (adding 0.00001 to any coefficient will usually work). [I can expand on this if you post another question on the subject.]
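The "wiggling" claim is easy to see numerically. A sketch of my own using NumPy (the specific perturbation 0.00001 is just for illustration):

```python
import numpy as np

# (x + 1)^2 = x^2 + 2x + 1 has the repeated root -1.
r0 = np.roots([1, 2, 1])

# Wiggle the constant coefficient a tiny bit: the discriminant
# 2^2 - 4 * 1.00001 is now nonzero, so the roots separate
# (here into a complex-conjugate pair).
r1 = np.roots([1, 2, 1.00001])

print(abs(r0[0] - r0[1]))  # essentially 0: still a double root
print(abs(r1[0] - r1[1]))  # about 0.0063: the roots are now distinct
```

Note how disproportionately large the effect is: a perturbation of size 1e-5 separates the roots by about 6e-3, on the order of the square root of the perturbation, which is part of why repeated roots are numerically delicate.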