Wikipedia:Reference desk/Archives/Mathematics/2011 April 16
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.
April 16
Reason behind definitions of cross product and dot product
Why is the cross product of two vectors A and B defined in the particular idiosyncratic way it is? I would like to know what reasoning led Heaviside and Gibbs to give the particular definitions of cross product and dot product. Ichgab (talk) 12:53, 16 April 2011 (UTC)
- To answer your question we need to know what aspect of the definitions you regard as idiosyncratic. Dolphin (t) 12:59, 16 April 2011 (UTC)
What I find very difficult to understand about the cross product is the arbitrary assumption that the cross product of vectors A and B is a vector perpendicular to the plane of A and B; the sin θ factor in the magnitude of the cross product is also intriguing. Similarly, the dot product of vectors A and B is arbitrarily defined to be a scalar quantity with a cos θ factor. There must be some reason for introducing such complicated definitions. I want to know the reason. Ichgab (talk) 13:27, 16 April 2011 (UTC)
- The definitions are justified by the algebraic properties that these expressions satisfy in an orthonormal coordinate system: the dot product is the sum of the products of the components of the vectors, and the cross product is expressible as a determinant. The reason the dot and cross products aren't defined this way is that, a priori, such a definition would depend on the coordinate system. So it's better to derive these properties from the coordinate-independent definitions given by Gibbs and Wilson. Sławomir Biały (talk) 13:56, 16 April 2011 (UTC)
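To make the coordinate formulas in the reply above concrete, here is a short sketch in plain Python (the example vectors are hypothetical, chosen only for illustration) checking that the component-wise dot product and the determinant expansion of the cross product reproduce the |A||B| cos θ and |A||B| sin θ definitions:

```python
import math

def dot(u, v):
    # Coordinate formula: sum of products of matching components.
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    # Cofactor expansion of the formal determinant with rows (i, j, k), u, v.
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

A, B = (1.0, 2.0, 2.0), (3.0, 0.0, 4.0)
nA, nB = math.sqrt(dot(A, A)), math.sqrt(dot(B, B))
theta = math.acos(dot(A, B) / (nA * nB))

# The coordinate formulas reproduce the geometric definitions:
assert math.isclose(dot(A, B), nA * nB * math.cos(theta))
C = cross(A, B)
assert math.isclose(math.sqrt(dot(C, C)), nA * nB * math.sin(theta))

# The perpendicularity the original poster found arbitrary falls out
# of the determinant formula:
assert math.isclose(dot(C, A), 0.0, abs_tol=1e-12)
assert math.isclose(dot(C, B), 0.0, abs_tol=1e-12)
```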
- The dot product of a vector with a unit vector of an orthogonal coordinate system gives the coordinate of the vector along the direction of the unit vector. The cross product gives the area of the parallelogram formed by two vectors and areas have an orientation - one counts the other side as minus. In addition A.(BxC) gives the volume of the parallelepiped formed by vectors A,B and C, and it can also be negative depending on the orientation of the vectors. Dmcq (talk) 15:17, 16 April 2011 (UTC)
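A minimal numeric illustration of the signed-volume remark above (the edge vectors are hypothetical, chosen so the box is easy to check by hand):

```python
def triple(a, b, c):
    # Scalar triple product a . (b x c): the 3x3 determinant with
    # rows a, b, c, i.e. the signed volume of the parallelepiped.
    bxc = (b[1]*c[2] - b[2]*c[1],
           b[2]*c[0] - b[0]*c[2],
           b[0]*c[1] - b[1]*c[0])
    return a[0]*bxc[0] + a[1]*bxc[1] + a[2]*bxc[2]

A, B, C = (2, 0, 0), (0, 3, 0), (0, 0, 4)
vol = triple(A, B, C)   # an ordinary 2 x 3 x 4 box
assert vol == 24

# Swapping two edges reverses the orientation, hence the sign:
assert triple(B, A, C) == -24
```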
- By the way I see in the history of cross product that Joseph Louis Lagrange introduced both of them. Dmcq (talk) 15:24, 16 April 2011 (UTC)
To give a succinct answer: the dot product is defined as it is because it is mathematically natural; the cross product because it is useful for stating certain physical laws, most importantly Maxwell's equations. (The cross product allows the relationship between electric and magnetic fields to be written in a simple way.) Looie496 (talk) 16:09, 16 April 2011 (UTC)
The dot product of two vectors yields a scalar (a real number that does not change if you calculate the dot product for the same vectors, but now written in another coordinate system). The cross product of two vectors yields another vector (and remember that a vector is not just a list of real numbers; it transforms under rotations). Then, as Looie explains, these expressions occur in physics; the reason is that the laws of physics don't have a preference for any coordinate system (an experiment done in deep space inside a completely isolated box will yield the same result irrespective of the orientation of the box).
Similarly, the determinant and the trace of a matrix are invariant under coordinate transformations, so I'm afraid you have to learn these things too :) . Count Iblis (talk) 22:02, 16 April 2011 (UTC)
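The invariance claim above can be spot-checked numerically; this sketch (plain Python, with an arbitrary hypothetical rotation angle and vectors) verifies that the dot product is unchanged by a rotation of the coordinate frame:

```python
import math

def rot_z(t):
    # Rotation matrix about the z-axis by angle t.
    c, s = math.cos(t), math.sin(t)
    return ((c, -s, 0.0), (s, c, 0.0), (0.0, 0.0, 1.0))

def apply(M, v):
    # Matrix-vector product: the vector's components in the rotated frame.
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

A, B = (1.0, 2.0, 2.0), (3.0, 0.0, 4.0)
R = rot_z(0.7)

# The scalar A . B is the same number in the rotated coordinates:
assert math.isclose(dot(apply(R, A), apply(R, B)), dot(A, B))
```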
Great explanations! So it seems the only unexplained arbitrariness is the direction of the cross product. Is that because most people use their right hand, and thus agreed on a right-hand rule? — Sebastian 14:30, 17 April 2011 (UTC)
- That's right. Note that the cross product of two vectors actually yields a so-called "pseudo-vector". It transforms under rotations just like a vector does, but not under reflections. Consider a mirror, a vector A that is parallel to the mirror, one that points toward the mirror (B), and the cross product of these two (C). Then what you see in the mirror is that the mirror image of A points parallel to A, the mirror image of B points opposite to B, but the mirror image of C also points parallel to C. Now, if you change the sign of one vector the cross product changes sign, so the cross product of the mirror image of A with the mirror image of B does not yield the mirror image of C. So, you see that in the "mirror world" the cross product is computed using the left-hand rule :) Count Iblis (talk) 15:32, 17 April 2011 (UTC)
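Count Iblis's mirror argument can be replayed numerically; in this sketch the mirror is taken to be the xy-plane (a hypothetical choice, made only for concreteness):

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def mirror(v):
    # Reflection in the xy-plane (the "mirror"): flip the z component.
    return (v[0], v[1], -v[2])

A = (1, 0, 0)   # parallel to the mirror
B = (0, 0, 1)   # points toward the mirror
C = cross(A, B)

# The cross product of the mirror images is NOT the mirror image of
# the cross product -- it is its negative, the pseudovector behaviour:
assert cross(mirror(A), mirror(B)) != mirror(C)
assert cross(mirror(A), mirror(B)) == tuple(-x for x in mirror(C))
```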
- A few pointers: a lot more detail is found at pseudovector, which relates it to the cross product. A more historical perspective can be found at quaternion#History, describing how the cross product arose from the quaternion product, developed forty years earlier. The quaternion product incorporates the cross product and the dot product, and combines them into an associative, invertible product. Yet another approach is to replace the cross product with the exterior product, giving a bivector result, a product which can be generalised to higher dimensions.--JohnBlackburnewordsdeeds 15:50, 17 April 2011 (UTC)
Maximal Ideals
Let A be a set and B be a ring. Let Ω(A,B) be the ring of functions A → B, where the ring operations are given pointwise: (ƒ + g)(x) = ƒ(x) + g(x) and (ƒ·g)(x) = ƒ(x)·g(x).
For each a in A, define an ideal in Ω(A,B) by Ia = { ƒ ∈ Ω(A,B) : ƒ(a) = 0 }.
Clearly it's an additive subgroup of Ω(A,B). For all ƒ in Ia and g in Ω(A,B), ƒ·g is in Ia since (ƒ·g)(a) = ƒ(a)·g(a) = 0·g(a) = 0.
What I would like to know is this: when is Ia a maximal ideal of Ω(A,B)? I know that if we replace Ω(A,B) by the ring of smooth functions from a manifold M to the real numbers R, then Ix is a maximal ideal for all x in M. But how do I prove that? In general, what are necessary and sufficient conditions for Ia to be a maximal ideal of Ω(A,B)? — Fly by Night (talk) 21:46, 16 April 2011 (UTC)
- In each case, the ideal is the kernel of the homomorphism that evaluates a function at a (or x). Thus in the first case, Ia is a maximal ideal if and only if B is a field (since B = Ω(A,B)/Ia). In the second case, since the evaluation homomorphism maps onto the real field, Ix is always maximal. Sławomir Biały (talk) 21:56, 16 April 2011 (UTC)
- But why? How do I prove this? I would like to understand why. Could you please justify the following:
- Ia is a maximal ideal if and only if B is a field.
- I understand why B ≅ Ω(A,B)/Ia, but I don't see why Ia is a maximal ideal if and only if B is a field. — Fly by Night (talk) 22:08, 16 April 2011 (UTC)
- (edit conflict) P.S. I believe that if Ω(A,B) is a commutative ring and Ia is a maximal ideal, then B is a field (the residue field of Ia). Although that requires a commutativity condition. — Fly by Night (talk) 22:18, 16 April 2011 (UTC)
- As I said, consider the homomorphism that evaluates at a. This has kernel Ia and maps surjectively onto B, so Ω(A,B)/Ia ≅ B. Hence Ia is maximal if and only if B is a field. Sławomir Biały (talk) 22:15, 16 April 2011 (UTC)
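A sketch of the step being asked for, written out for the commutative case (this fills in the quotient argument; it is a standard consequence of the isomorphism and correspondence theorems, not part of the thread):

```latex
\[
  \operatorname{ev}_a \colon \Omega(A,B) \to B, \qquad f \mapsto f(a),
  \qquad \ker(\operatorname{ev}_a) = I_a .
\]
Since $\operatorname{ev}_a$ is surjective, the first isomorphism theorem gives
\[
  \Omega(A,B)/I_a \;\cong\; B .
\]
By the correspondence theorem, ideals of $\Omega(A,B)$ containing $I_a$
biject with ideals of the quotient, so
\[
  I_a \text{ is maximal}
  \iff \Omega(A,B)/I_a \text{ has no proper nonzero ideals}
  \iff B \text{ is a field (when $B$ is commutative).}
\]
```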
- As I said, but why? :-) I can prove it in one direction, but it seems to require that Ω(A,B) is a commutative ring. — Fly by Night (talk) 22:20, 16 April 2011 (UTC)
- Grr... that obviously isn't true as I stated it in the non-commutative case. Then a necessary and sufficient condition is that B be a simple ring. (In the commutative case, a ring is simple if and only if it is a field.) Sławomir Biały (talk) 22:28, 16 April 2011 (UTC)
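To see the failure concretely when B is not simple, here is a small sketch in plain Python (the two-point domain and the choice B = Z/6Z are hypothetical, not from the thread): the evaluation ideal I_a sits strictly inside a larger proper ideal, so it is not maximal.

```python
from itertools import product

A = (0, 1)       # a two-element domain
B = range(6)     # Z/6Z: a commutative ring that is not a field
a = 0

# Represent a function A -> B as the tuple of its values.
functions = list(product(B, repeat=len(A)))

# I_a: functions vanishing at a (kernel of evaluation at a).
I_a = [f for f in functions if f[a] == 0]

# J: functions whose value at a lies in the proper ideal {0, 2, 4}
# of Z/6Z; J is itself an ideal of the function ring.
J = [f for f in functions if f[a] % 2 == 0]

# I_a is strictly contained in J, and J is still proper, so I_a is
# not maximal here -- matching the "B must be simple" criterion:
assert set(I_a) < set(J)
assert len(J) < len(functions)
```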
- If it's so "obvious" then why did you have to say one thing, before correcting yourself? Thanks for trying Sławomir, but this has left a bad taste in my mouth. — Fly by Night (talk) 01:40, 17 April 2011 (UTC)
- My initial mistake was in assuming that you were only interested in the commutative case. My second mistake was unthinkingly saying "division ring" instead of "simple ring". I didn't mean for this to "leave a bad taste" in your mouth, or indeed any taste at all: I was just trying to answer your question to the best of my abilities. I meant for you to see that the trick is to look at the kernel of the evaluation homomorphism, whatever the precise result you happen to be after. It's the same idea in both cases of your original post. I had hoped that you had the mathematical maturity to understand the trick. I don't see how either my reply or my self-correction warrants this hostile reply of yours. You can take my advice or leave it. And if you prefer that I not reply to your posts in the future, I'd honor that. Sławomir Biały (talk) 01:55, 17 April 2011 (UTC)