
Wikipedia:Reference desk/Archives/Mathematics/2009 November 23

From Wikipedia, the free encyclopedia
Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


November 23

An interesting trig problem

Dear Wikipedians:

I'm trying to solve for the general and exact solution of the following equation:

sin 3x = cos 2x

where x is in the domain of (-π,π).

The solution I have so far is as follows:

Writing cos 2x = sin(π/2 - 2x), the equation becomes sin 3x = sin(π/2 - 2x), which gives

3x = π/2 - 2x + 2kπ (Equation 1)

AND

3x = π - (π/2 - 2x) + 2kπ (Equation 2)

Equation 1 yields:

5x = π/2 + 2kπ

and the general solution from equation 1 is:

x = π/10 + 2kπ/5

AND

Equation 2 yields:

3x = π/2 + 2x + 2kπ

and the general solution from equation 2 is:

x = π/2 + 2kπ

So altogether I was able to get all the positive roots π/10, π/2, 9π/10 from the general solution that fits in the domain. However, my graphing calculator clearly shows that there are two more negative roots in the domain: -3π/10, -7π/10. I am at a loss to get these roots from the elegant approach I am using so far. Is there some way of getting the negative roots from the method I have used above, or do I have to resort to solving nasty high-degree equations?

Thanks for all the help.

70.31.153.35 (talk) 02:09, 23 November 2009 (UTC)[reply]

x = π/10 + 2kπ/5 for any integer k, or x = π/2 + 2kπ, right? (Igny (talk) 02:26, 23 November 2009 (UTC))[reply]
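A quick numerical check of the two families of solutions above, restricted to (-π, π) (a Python sketch; the loop bounds and tolerance are arbitrary choices of mine):

    import math

    # Candidate roots in (-pi, pi) from x = pi/10 + 2k*pi/5 and x = pi/2 + 2k*pi
    roots = set()
    for k in range(-5, 6):
        for x in (math.pi/10 + 2*k*math.pi/5, math.pi/2 + 2*k*math.pi):
            if -math.pi < x < math.pi:
                roots.add(round(x, 10))

    # every candidate satisfies sin(3x) = cos(2x)
    assert all(abs(math.sin(3*x) - math.cos(2*x)) < 1e-9 for x in roots)
    print(sorted(roots))   # -7pi/10, -3pi/10, pi/10, pi/2, 9pi/10

Negative values of k are what produce the two negative roots.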
Resolved

Means with logarithmic data

How is the following possible: for group 1, the arithmetic mean of a variable is around 10000, while for group 2 it is around 12000. However, when I transform the variable into logs with base 10, the arithmetic mean for group 1 is greater than for group 2 (and if I take 10 to these powers, which should give the geometric means, the mean for group 1 is still bigger than for group 2). So, looking at the data before transformation I conclude that group 2 is richer (the variable is GDP per capita), but after the transformation I draw the opposite conclusion. —Preceding unsigned comment added by 96.241.9.31 (talk) 05:12, 23 November 2009 (UTC)[reply]

In mathematical terms, this is not at all surprising. The arithmetic and geometric means are different things, so obviously you can find a pair of sets of data where the AM of the first is larger while the GM of the second is larger. This usually happens when the first has a higher variance. A simple example is (1, 7) versus (3, 4).
As for the conceptual interpretation - congratulations, you now know How to Lie with Statistics (saying "average" without specifying which kind, while secretly choosing the one that best fits the result you are trying to prove). If you have no interest in lying, the choice of average depends on what you are trying to do - I think AM has more to do with economic prowess, while GM or median are better indicators of overall welfare. -- Meni Rosenfeld (talk) 07:27, 23 November 2009 (UTC)[reply]
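As a quick check of the (1, 7) versus (3, 4) example above (a Python sketch; statistics.geometric_mean needs Python 3.8 or later):

    from statistics import mean, geometric_mean

    a, b = (1, 7), (3, 4)
    print(mean(a), mean(b))                      # 4 vs 3.5: the AM of a is larger
    print(geometric_mean(a), geometric_mean(b))  # ~2.65 vs ~3.46: the GM of b is larger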

Thank you, that was exceedingly helpful. —Preceding unsigned comment added by 96.241.9.31 (talk) 13:56, 23 November 2009 (UTC)[reply]

dipole moment

This has been used to reference a physics article on electric dipole moment, but it doesn't seem to follow to me.

http://books.google.com/books?id=LIwBcIwrwv4C&pg=PA81#v=onepage&q=&f=false

the step where he goes

Surely the introduction of an r term on top needs to be cancelled by making the r term on the bottom cubic? —Preceding unsigned comment added by 129.67.39.44 (talk) 12:51, 23 November 2009 (UTC)[reply]

Looking closely at the link you gave, I see there is a "hat" over the r, which means it is a unit vector, r̂ = r/|r| (the position vector divided by its length),
so we have
r̂/r² = r/r³,
which is correct. Gandalf61 (talk) 13:22, 23 November 2009 (UTC)[reply]
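A quick numerical illustration of the point that r̂/r² and r/r³ are the same vector (a NumPy sketch; the sample position vector is arbitrary):

    import numpy as np

    r = np.array([1.0, -2.0, 3.0])   # an arbitrary position vector
    rmag = np.linalg.norm(r)         # its length r
    rhat = r / rmag                  # the unit vector (the "hat" over r)

    print(np.allclose(rhat / rmag**2, r / rmag**3))   # True: rhat/r^2 equals r/r^3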

Transpose and tensors

I posed a question on Talk:Transpose that didn't get any responses there. Perhaps this is a better audience, since it's a bit of an esoteric question for such an elementary topic. Here's the question again:

I'm confused. It seems like much of linear algebra glosses over the meaning of transposition and simply uses it as a mechanism for manipulating numbers, for example, defining the norm of v as ||v|| = √(v^T v).
In some linear-algebra topics, however, it appears that column and row vectors have different meanings (that appear to have something to do with covariance and contravariance of vectors). In that context, the transpose of a column vector, c, gives you a row vector -- a vector in the dual space of c. I think the idea is that, in tensor notation, column vectors would be indexed with raised indices and row vectors with lowered indices.
Here's my confusion: if row vectors and column vectors are in distinct spaces (and they certainly are, even in elementary linear algebra, in that you can't just add a column vector to a row vector because they have different shapes), then taking the transpose of a vector isn't just some notational convenience; it is an application of a nontrivial function, (·)^T : V → V*. To do something like this in general, we can use any bilinear form, but that involves more structure than just the vector space itself.
So:
  1. Is it correct that there are two things going on here: (1) using transpose for numerical convenience and (2) using rows versus columns for indicating co- versus contravariance?
  2. Isn't the conventional Euclidean metric defined with a metric tensor acting on contravariant vectors, ||v||² = g_{ij} v^i v^j? Doesn't that avoid any transposition, in that both v's have raised indices?
Thanks. —Ben FrantzDale (talk) 14:16, 23 November 2009 (UTC)[reply]
I guess it depends on how we define vectors. If we consider a vector as just being an n by m matrix with either n=1 or m=1, then transposition is just what it is with any other matrix - a map from the space of n by m matrices to the space of m by n matrices. --Tango (talk) 14:38, 23 November 2009 (UTC)[reply]
Sure. I'm asking because I get the sense that there are some unwritten rules going on. At one extreme is the purely mechanical notion of transpose that you describe, which I'm happy with. In that context, transpose is just used along with matrix operations to simplify the expression of some operations. At the other extreme, rows and columns correspond to co- and contra-variant vectors, in which case transpose is completely non-trivial.
My hunch is that the co- and contravariance convention is useful for some limited cases in which all transformations are mixed tensors of type (1,1) and all (co-)vectors are either of type (0,1) or (1,0). But that usage doesn't extend to problems involving things like type-(0,2) or type-(2,0) tensors, since usual linear algebra doesn't allow for a row vector of row vectors. My hunch is that in this case, transpose is used as a kludge to allow expressions like a_{ij} u^i v^j to be represented with matrices as u^T A v. Does that sound right, or am I jumping to conclusions? If this is right, it could do with some explanation somewhere. —Ben FrantzDale (talk) 15:13, 23 November 2009 (UTC)[reply]

Using an orthonormal basis, , and That "usual linear algebra doesn't allow for a row vector of row vectors" is the reason why tensor notation is used when a row vector of row vectors, such as , is needed. Bo Jacoby (talk) 16:53, 23 November 2009 (UTC).[reply]

Also note that there is no canonical isomorphism between V and V* if V is a plain real vector space of finite dimension > 1, with no additional structure. What is of course canonical is the pairing V × V* → R. Fixing a basis on V is the same as fixing an isomorphism with R^n, hence produces a special isomorphism V → V*, because R^n does possess a preferred isomorphism with its dual, namely the transpose, if we represent n-vectors as columns and forms as rows. Fixing an isomorphism V → V* is the same as giving V a scalar product (check the equivalence), which is a tensor of type (0,2) that eats pairs of vectors and defecates scalars. --pma (talk) 18:46, 23 November 2009 (UTC)[reply]
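To make that last point concrete in coordinates: choosing a scalar product (here a symmetric positive-definite matrix G) sends each vector v to the covector v^T G, and evaluating that covector on w recovers g(v, w). A NumPy sketch; G, v and w are made-up examples:

    import numpy as np

    G = np.array([[2.0, 1.0],
                  [1.0, 3.0]])   # a symmetric positive-definite "metric"
    v = np.array([1.0, 2.0])
    w = np.array([3.0, -1.0])

    v_dual = G @ v                 # components of the covector v^T G associated to v
    print(v_dual @ w)              # evaluating that covector on w
    print(v @ G @ w)               # the scalar product g(v, w): the same number, 5.0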
Those are great answers! That clarifies some things that have been nagging me for a long time! I feel like it is particularly helpful to think that conventional matrix notation doesn't provide notation for a row of row vectors or the like. I will probably copy the above discussion to Talk:Transpose for posterity and will probably add explanation along these lines to appropriate articles.
I haven't worked much with complex tensors, but your use of conjugate transpose reminds me that I've also long been suspicious of its "meaning" (and simply that of complex conjugate) for the same reasons. Could you comment on that? In some sense, multiplying a complex number by its conjugate is the same operation as forming v^H v for a vector, using conjugate transpose as a mechanism to compute the squared magnitude. For a complex number, I'm not sure what would generalize to "row vector" or "column vector"... I'm not sure what I'm asking, but I feel like there's a little more that could be said connecting the above great explanations to conjugate transpose. :-) —Ben FrantzDale (talk) 19:19, 23 November 2009 (UTC)[reply]
A complex number (just as a real number) is a 1-D vector, so rows and columns are the same thing. The modulus on C can be thought of as a special case of the norm on C^n (i.e. for n=1). From an algebraic point of view, complex conjugation is the unique (non-trivial) automorphism on C that keeps R fixed. Such automorphisms are central to Galois theory. I'm not really sure what the importance and meaning is from a geometrical or analytic point of view... --Tango (talk) 19:41, 23 November 2009 (UTC)[reply]
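A small numerical illustration of the modulus-as-norm remark and of conjugation respecting the field operations while fixing the reals (a Python/NumPy sketch; the sample values are arbitrary):

    import numpy as np

    z, w = 3 + 4j, 1 - 2j

    # |z|^2 computed as conjugate times z -- the n = 1 case of v^H v
    print(abs(z)**2, (z.conjugate() * z).real)                       # 25.0 25.0

    v = np.array([3 + 4j, 1 - 2j])
    print(np.isclose(np.linalg.norm(v)**2, np.vdot(v, v).real))      # True

    # conjugation preserves + and * and fixes real numbers
    print((z + w).conjugate() == z.conjugate() + w.conjugate())      # True
    print((z * w).conjugate() == z.conjugate() * w.conjugate())      # True
    print((5 + 0j).conjugate() == 5)                                 # True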

Let V and W be two vector spaces, and ƒ : V → W be a linear map. Let F be the matrix representation of ƒ with respect to some bases {v_i} and {w_j}. I seem to recall, please do correct me if I'm wrong, that F : V → W and F^T : W* → V* where V* and W* are the dual spaces of V and W respectively. In this setting v^T is dual to v. So the quantity v^T v is the evaluation of the vector v by the covector v^T. ~~ Dr Dec (Talk) ~~ 23:26, 23 November 2009 (UTC)[reply]
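In coordinates this is easy to verify: a covector w* on W pulls back to the covector w*F on V (equivalently, F^T applied to w* as a column), and (F^T w*)·v equals w*·(Fv). A NumPy sketch with made-up F, v and w*:

    import numpy as np

    F = np.array([[1.0, 2.0, 0.0],
                  [0.0, 1.0, -1.0]])   # a linear map from V = R^3 to W = R^2
    v = np.array([1.0, 2.0, 3.0])      # a vector in V
    wstar = np.array([4.0, -1.0])      # a covector on W

    lhs = (F.T @ wstar) @ v            # (F^T w*) evaluated on v
    rhs = wstar @ (F @ v)              # w* evaluated on F v
    print(lhs, rhs)                    # the same number, 21.0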

Goldbach's conjecture

Several weeks ago I saw a Wikipedia web page where one could submit a "proof" of Goldbach's conjecture. However, not having any proof to submit, I didn't pay attention to the page's URL, and have forgotten it. Does anyone know how to reach this page, or has it since been deleted? Bh12 (talk) 15:19, 23 November 2009 (UTC)[reply]

Wikipedia is definitely not the place to submit proofs (or "proofs") of new results. If such a page existed on Wikipedia, it indeed would (or should) have been deleted, though chances are that you actually saw it on a different site (maybe using the MediaWiki software). — Emil J. 17:27, 23 November 2009 (UTC)[reply]

Square of Opposition and "The subcontrary of the converse of the altern of a FE is T"

I have a question that emerged from a basic logic class I am taking. Forgive me if this isn't the right desk section, but I'm honestly confused about whether to stick this under math, humanities (philosophy), or miscellaneous. It seems to me that this section is most appropriate.

We have been taught the traditional Square of Opposition. We were given statements like the following, and asked to use the square to label them "true" or "false":

(1) The converse of the altern of a TA proposition is UN.

(2) The contradictory of the converse of a FA proposition is T.

(3) The contrary of the converse of a TE is F.

A, E, I, and O stand for types of propositions (see here), T stands for true, F stands for false, and UN for unknown truth value. T or F before a letter means a true or false proposition of that type (FO means a false O claim). For these exercises, we are obviously going by traditional Aristotelian logic, in which e.g. A claims are taken to imply I claims.

Using the Square, we know that (1) is false, (2) is considered false because the truth value is UN, not T, and (3) is true. That's not the problem.

The problem crops up when we deal with something like the following, a problem that I devised:

(4) The subcontrary of the converse of the altern of a FE is T.

I believe that (4) is true. My reasoning: Whenever we have a FE, we have a TI (contradictories). An "I" claim has an interchangeable converse ("some S is P" also means "some P is S"). The altern of an FE is an O of unknown truth value. The converse of that O ("some P is not S") will be unknown as well, of course. But the subcontrary of the converse of the O is the same as the converse of the I. We know that "some P is S" (the converse of the I) is T. So, we know that the subcontrary of the converse of the altern of a FE is T. Therefore, I believe (4) should be marked "true."

However, I presented my reasoning for (4) above, and my logic professor disagrees with me. She points out that the altern of a FE proposition is UN, and the converse of an O is also UN. As far as I can understand, she says that whenever one runs into an UN, one cannot move any further. So she teaches that (4) is false, for the truth value is UN, not T. I'm afraid I can't quite see her reasoning. I am curious to find a third opinion.

So, do you agree or disagree with my result? Please explain your reasoning. I can go into more detail about my own, but this question is already a little long. 99.174.180.96 (talk) 18:40, 23 November 2009 (UTC)[reply]

Oh.. I do not have an answer; actually I'm not even sure I understand, but this brings me back to my childhood, when they made us learn by heart Barbara, Celarent, Darii, Ferio,... and if you weren't ready, it was whips! I never managed to understand that stuff :-( Elementary school was a hard life for a kid, those times. Anyway, hope somebody has a more factual hint. --pma (talk) 23:18, 23 November 2009 (UTC)[reply]
Ok, I'm trying to understand this. By "The subcontrary of the converse of the altern of a FE is T" do you mean that "No S is P" is false implies that "Some P is S" is true? If you know how to express this in terms of set theory (if that's even possible) it would make more sense to me. Jkasd 08:11, 24 November 2009 (UTC)[reply]

Try to translate into a more modern notation. Consider the probability that a random P is S, x=|S|/|P|, assuming |P|>0. Then the A-claim "all S are P" means x=1. The E-claim "No S is P" means x=0. The I-claim "Some S is P" means x>0 and the O-claim "some S is not P" means x<1. Bo Jacoby (talk) 08:51, 24 November 2009 (UTC).[reply]

99.174.180.96 - I agree with your reasoning. If "No S is P" is false then some S is P, and therefore some P is S. If "No S is P" is false then you cannot conclude anything about the truth value of "Some S is not P", but that does not mean you cannot conclude anything about the truth value of "Some S is P" or "Some P is S". Your teacher may be interested in the new-fangled ideas of Mr. Boole and Mr. Venn, who claim to have reduced logic to simple algebra. Gandalf61 (talk) 11:36, 24 November 2009 (UTC)[reply]
This is the OP speaking--Thanks for your replies. I'm sorry that my post was hard to understand.
pma: thanks for the story. :)
Jkasd: Yes, in response to your question, I believe that
The subcontrary of the converse of the altern of a FE is T
means that
"No S is P" is false implies that "Some P is S" is true
However, I cannot be 200% sure of that inference as a PhD-level professor of philosophy believes that I have gone wrong somewhere, and if she's right, something must be wrong with my logical insight into this problem. :)
I strongly suspect that someone could express my problem in terms of set theory. I'm afraid that I don't know how, though. Part of the difficulty is that there are puzzles regarding the "existential import" of the propositions of Aristotelian logic. To the extent that I know what that means, I know that I can't be confident in trying to shift between the two. But thanks for your time.
Bo Jacoby: I'm sorry, but I'm confused. You say "Consider the probability that a random P is S" (remember, P before S). But if all S is P (S before P), then that probability is not necessarily 1. There may still be plenty of P that is not S, so there is no certainty that a random P is S. It's "all P is S" that gives x = 1. I think either you got something backwards or I don't understand your probability equations correctly.
Sorry for spreading confusion. I probably meant to write "Consider the probability that a random S is P" (S before P). Does that make sense? Bo Jacoby (talk) 23:12, 25 November 2009 (UTC).[reply]
Gandalf61: Thank you for your input. I should be fair to my teacher: we're not learning only pure traditional Aristotelian logic. We have done stuff with sentential variables and propositional logic, and are familiar with Venn diagrams. It's only a first year course. 99.174.180.96 (talk) 14:48, 24 November 2009 (UTC)[reply]
The set theory representation here is straightforward. "No S is P" can be expressed as S ∩ P = ∅ ("S intersection P is empty"). So ""No S is P" is false" becomes S ∩ P ≠ ∅ ("S intersection P is not empty"). But this is exactly the same as the representation of "Some S is P", and as "Some P is S". So all three statements are equivalent. With respect to all concerned, I think part of the difficulty may be that your professor is a professor of philosophy, not a professor of mathematics. Gandalf61 (talk) 15:45, 24 November 2009 (UTC)[reply]
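The equivalence is easy to check mechanically over a small finite universe (a Python sketch; the universe size is arbitrary):

    from itertools import combinations

    universe = [0, 1, 2]
    subsets = [set(c) for r in range(4) for c in combinations(universe, r)]

    for S in subsets:
        for P in subsets:
            no_S_is_P_is_false = bool(S & P)   # '"No S is P" is false'
            some_S_is_P = bool(S & P)          # '"Some S is P"'
            some_P_is_S = bool(P & S)          # '"Some P is S"'
            assert no_S_is_P_is_false == some_S_is_P == some_P_is_S
    print("all three statements agree on every pair of subsets")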
Oh yeah. For some reason I was trying to think in terms of subsets and set membership. Jkasd 08:32, 25 November 2009 (UTC)[reply]

£1,250 million

How many digits should this number have? I interpret it as 1,250,000,000. A version of the assertion is found here[1] (search for "million"). Long and short scales doesn't seem to cover this, which is more about how to write down numbers than their actual definition.   Will Beback  talk  23:55, 23 November 2009 (UTC)[reply]

Unless I'm misunderstanding your question, it most certainly is covered in that page. The long and short scales both agree that 1 million is 1,000,000. Where they start to disagree is at 1 billion, which is 1,000,000,000 in the short scale and 1,000,000,000,000 in the long scale. --COVIZAPIBETEFOKY (talk) 01:56, 24 November 2009 (UTC)[reply]
Alternatively, they both agree that 6 zeros is a million, 7 zeros is ten million, and 8 zeros is a hundred million. Where they start to disagree is nine zeros... 92.230.68.236 (talk) 15:59, 24 November 2009 (UTC)[reply]

But either way—long or short scale—1,250 million is 1,250,000,000. So the long-versus-short-scale issue doesn't seem to bear on that question. Michael Hardy (talk) 02:47, 24 November 2009 (UTC)[reply]

It is only an issue if you choose to interpret the comma as a European decimal point. -- SGBailey (talk) 16:57, 24 November 2009 (UTC)[reply]
very creative, but no one would write a trailing zero in that case. 92.230.68.236 (talk) 17:49, 24 November 2009 (UTC)[reply]
Of course one could. A trailing zero in this case would be a perfectly usual way of specifying that the number has four significant digits. — Emil J. 18:04, 24 November 2009 (UTC)[reply]
SGBailey: Us Brits are "Europeans", but we don't use a comma as a decimal point. We write ½ as 0.5 and 10⁵ as 100,000. ~~ Dr Dec (Talk) ~~ 18:53, 24 November 2009 (UTC)[reply]
In case anyone is interested in which notation different countries use, wikipedia has the article Decimal separator. Aenar (talk) 19:19, 24 November 2009 (UTC)[reply]
Thanks for the replies. I got hold of the original paper (written by Americans, but concerning the UK and published in the Netherlands). The calculation is £5,000 * 255,000 = £1,275,000,000.   Will Beback  talk  21:14, 24 November 2009 (UTC)[reply]
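For completeness, the arithmetic in this thread as a one-off check (Python; the underscores are just digit grouping):

    print(1_250 * 10**6)     # 1250000000 -- 1,250 million written out
    print(5_000 * 255_000)   # 1275000000 -- the paper's 5,000 * 255,000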