Wikipedia:Reference desk/Archives/Computing/2013 July 4

From Wikipedia, the free encyclopedia
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


July 4

Memory not entirely recognized by laptop

I have an eMachines E627 laptop, and I have two 2GB memory modules installed (which, according to the technical specifications of the system, is the maximum amount of memory permissible for this model). However, it seems that only the module installed in the inner slot gets recognized by the system, whereas the module installed in the outer slot does not. I experimented with the system and determined that each individual module works fine when installed in the inner slot. Is this problem common? How can I get the system to recognize all 4GB of installed memory? 24.47.141.254 (talk) 02:43, 4 July 2013 (UTC)[reply]

What operating system are you using, and how are you determining that it can't see all the memory? RudolfRed (talk) 03:04, 4 July 2013 (UTC)[reply]
Also, have you checked if all 4GB is visible to the BIOS? WegianWarrior (talk) 03:49, 4 July 2013 (UTC)[reply]
I am using Windows 7. When I run setup at boot, the BIOS shows 2048 MB of memory (instead of the expected 4096 MB), and the "System" page in my Control Panel shows 2.00 GB of memory (not 4.00 GB). During my experiments, I tried inserting one memory module into the outer slot and leaving the inner slot empty, and sure enough, the screen did not display anything when I powered up. 24.47.141.254 (talk) 05:08, 4 July 2013 (UTC)[reply]
If your BIOS does not recognize more than 2GB, there is no way the OS will be able to see it either. Not sure how to fix though, sorry. WegianWarrior (talk) 06:49, 4 July 2013 (UTC)[reply]
Try booting with RAM in the outer slot only. If it doesn't boot, the slot is probably broken. Also, try putting smaller modules in each slot or just the inner slot if you have any available. -- BenRG 23:31, 4 July 2013 (UTC)
I would go to www.crucial.com on that computer and have it scan your system; it will tell you how much RAM it sees and how much RAM the machine can take. I looked for your model there but didn't find it, but I've found Crucial.com to be reliable about memory. Bubba73 You talkin' to me? 03:29, 5 July 2013 (UTC)[reply]
Or get a program like Speccy: http://www.piriform.com/speccy - it tells you that information. Bubba73 You talkin' to me? 03:32, 5 July 2013 (UTC)[reply]
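If you'd rather not install anything, a few lines of Python can also report how much physical memory Windows itself sees (a minimal sketch using the Win32 GlobalMemoryStatusEx call; since the BIOS already hides the second module, expect it to agree with the Control Panel's 2.00 GB):

import ctypes

class MEMORYSTATUSEX(ctypes.Structure):
    # Field layout of the Win32 MEMORYSTATUSEX structure.
    _fields_ = [("dwLength", ctypes.c_ulong),
                ("dwMemoryLoad", ctypes.c_ulong),
                ("ullTotalPhys", ctypes.c_ulonglong),
                ("ullAvailPhys", ctypes.c_ulonglong),
                ("ullTotalPageFile", ctypes.c_ulonglong),
                ("ullAvailPageFile", ctypes.c_ulonglong),
                ("ullTotalVirtual", ctypes.c_ulonglong),
                ("ullAvailVirtual", ctypes.c_ulonglong),
                ("ullAvailExtendedVirtual", ctypes.c_ulonglong)]

status = MEMORYSTATUSEX()
status.dwLength = ctypes.sizeof(MEMORYSTATUSEX)
ctypes.windll.kernel32.GlobalMemoryStatusEx(ctypes.byref(status))
print("Physical memory visible to Windows: %.2f GB" % (status.ullTotalPhys / 2.0**30))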
Pearson correlation function in Python

def pearson_correlation(a,b): # Python 3
    if len(a)!=len(b):
        raise ValueError('samples are of unequal size')
    mean_a=mean_b=0
    for i,j in zip(a,b):
        mean_a+=i
        mean_b+=j
    mean_a,mean_b=mean_a/len(a),mean_b/len(b)
    x=y=z=0
    for i,j in zip(a,b):
        x+=(i-mean_a)*(j-mean_b)
    for i in a:
        y+=(i-mean_a)**2
    for j in b:
        z+=(j-mean_b)**2
    return x/(y*z)**(1/2)

This function takes two samples as two lists and returns their Pearson correlation coefficient. Is there anything wrong with it? Czech is Cyrillized (talk) 02:56, 4 July 2013 (UTC)[reply]

I see some possible problems:
1. The usual way to make multiple assignments on one line in Python is
mean_a, mean_b = 0,0
I think an expression like
mean_a = mean_b = 0
is assigning mean_a to the result of the expression "mean_b = 0" - although in this case that could be 0 anyway, so this might make no difference.
2. Be very careful with integer division in Python 2. One of the oddities of Python 2 is that any arithmetic expression involving only integers returns an integer value - so 1/2 returns 0, whereas 1./2 returns 0.5, because 1. is a float, not an int. So if this code is ever run under Python 2, your final line
return x/(y*z)**(1/2)
will always return 1, because the value of 1/2 is 0. You could try
return float(x)/(y*z)**(0.5)
or you could use the sqrt function, but you have to import that from the math module. And if your lists a and b only contain integer values, there could be a similar integer division problem when you are calculating mean_a and mean_b. A good rule of thumb is that wherever you have a division expression that you expect to return a non-integer value, convert the numerator or denominator to a float to force the result to be a float.
3. Not a problem as such, but the loop where you sum the values in lists a and b is not necessary because Python has a built in sum function that returns the sum of values in a list.
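For example, here is a quick Python 2 interpreter session illustrating points 2 and 3 (illustrative only):
>>> 1/2             # Python 2: both operands are ints, so this truncates
0
>>> 1./2            # a float operand forces float division
0.5
>>> 64**(1/2)       # the exponent evaluates to 0, so this is always 1
1
>>> sum([1, 2, 3])  # built-in sum replaces the explicit summing loop
6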
Gandalf61 (talk) 11:40, 4 July 2013 (UTC)[reply]
If I understand your mean_a, mean_b calculations properly, and taking into account Gandalf61's wizardly advice about floats, the calculation of those two can be simplified to:
  mean_a = float(sum(a))/len(a)
  mean_b = float(sum(b))/len(b)
turning five lines into two (and, to my mind, being much clearer). Edit: yeah, I didn't read Gandalf's last point before posting. -- Finlay McWalterTalk 12:48, 4 July 2013 (UTC)[reply]
You can also change those calculations for x, y, z into sums on list comprehensions, like
x= sum([(i-mean_a)*(j-mean_b) for i,j in zip(a,b)])
- although that's not such an obviously clearer piece of code. -- Finlay McWalterTalk 13:01, 4 July 2013 (UTC)[reply]
  • Assignment isn't an expression in Python; x = y = 0 works by special dispensation, not because of expression rules. So it's evidently an officially endorsed way of doing multiple assignment (and it's pretty common in Python code).
  • There's a near universal convention of putting spaces after commas in Python code. You should follow it for readability's sake. It's also good practice to put spaces around infix operators.
  • In Python 3.x, the / operator returns a floating-point result even if the arguments are integers, and // does integer division (as described in PEP 238). Since Python 2.2 (released in 2001), you can write from __future__ import division at the top and then just say sum(a) / len(a), etc. This is a good idea for forward compatibility reasons.
  • Finlay McWalter's sum([...]) can be shortened to sum(...) as of Python 2.4 (released in 2004).
-- BenRG 23:27, 4 July 2013 (UTC)
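Putting this thread's suggestions together, a Python 3 version of the function might look like this (a sketch, not tested against the original data):

from math import sqrt

def pearson_correlation(a, b):  # Python 3
    # Pearson correlation coefficient of two equal-length samples.
    if len(a) != len(b):
        raise ValueError('samples are of unequal size')
    mean_a = sum(a) / len(a)  # in Python 3, / on ints yields a float
    mean_b = sum(b) / len(b)
    x = sum((i - mean_a) * (j - mean_b) for i, j in zip(a, b))
    y = sum((i - mean_a) ** 2 for i in a)
    z = sum((j - mean_b) ** 2 for j in b)
    return x / sqrt(y * z)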

Why can't graphics processing units keep up with central processing units?

"Top" GPUs like Tahiti XT in AMD "Radeon 7970" and "7990" series Videocards or as GPU from Nvidia (GK110) in "GeForce GTX 780" or "Titan" labled Videocards work on a frequencies of 900-1000 MHz. Same time "TOP" CPUs from Intel like "core i7 3770K" or "core i7 4770K" run at 3500 MHz and additionally are known to overclock to near 5000 MHz as are CPU from AMD like "AMD FX-4350" starting even from 4200 MHz. Same with "big professional" brands like SPARC T5 (3600 MHz). Why is it that customers willing to pay beyond $ 1000 for a Nvidia "Titan" Videocard only get a 900 MHz GPU and one buying a 3500-4500 MHz CPU like "core i7 4770K" only has to pay $ 300. Also why exactly are GPUs not also clock atleast near with maybe 3000 MHz? --Kharon (talk) 07:18, 4 July 2013 (UTC)[reply]

Roughly, because "clock speed" is a really bad measure of performance. See Megahertz myth. Graphics cards do different things per clock, and they can use massive parallelism. In fact, GPUs are far ahead of CPUs in pure processing power, and there is an increasing trend to tap into that performance for general-purpose computing. See General-purpose computing on graphics processing units. The high-end Radeon Sky 900 is rated at nearly 6 teraFLOPS, while the best multi-core i7 CPUs are stuck more than an order of magnitude behind that. --Stephan Schulz (talk) 08:29, 4 July 2013 (UTC)[reply]
Nonetheless it's an interesting question why GPUs tend to be clocked slower than CPUs. (I don't know the answer.) -- BenRG 23:29, 4 July 2013 (UTC)
Because clock speed is not the limiting factor for most GPU workloads. Unlike a typical CPU workload, there is much lower temporal locality in a GPU's operations. The little temporal locality that does exist already gets exploited by using a specialized pipeline. So, with little data reuse, the data transfer rate dominates performance. A very small parallel workload, say four 32-bit integers wide, requires 128 bits per transaction. A fully-utilized GPU will therefore input and output 128 bits per clock - or, at 1 GHz, will require 256 Gbit/s of available bus bandwidth. No system bus in the world can sustain that data rate, at least not in 2013. So clocking the GPU faster just means that the GPU is under-utilized most of the time. This is unlike the CPU, where a small amount of data is reused over many instruction cycles, taking full advantage of the processor cache.
If you look at the structure of a modern GPGPU, you will see that the best-effort solution to this problem is to let the kernel programmer manage the GPU cache memory in a very customized way. When I worked on the Tesla S1070, there were some 16 kilobytes of what you might handwavingly call the L1 cache - and using the CUDA extension to the C programming language, the programmer allocated that memory to each processing kernel (i.e. to each processing unit). Even with this kind of fine-grained control, it was uncommon to get as high as, say, 16 instructions per word for a typical massively parallel calculation. Inevitably, the cache was never very useful, and memory transfer time between main GPU RAM and system RAM overwhelmingly dominated the total process execution time. My program, calculating the wave equation, would burst for a couple of microseconds, cranking along at an instantaneous rate of a few gigaflops... and then stall for hundreds of milliseconds waiting to copy the result back to system memory and copy in new work. Getting "lots of flops" isn't too hard if you have a thousand cores running at a mere 1 GHz: one billion instructions require just one millisecond of work! And our 960-core S1070 is now four-year-old technology - today's GPUs have more cores. Keeping the GPU busy for more than one millisecond out of each second - now that is still a challenge!
In other words, even at a mere 1 GHz, the GPU is still too fast for the memory and system to keep up. Clocking it faster burns power and accomplishes no extra work - the GPU will simply be spending more time waiting in a pipeline bubble executing no-op instructions. You can read about even more recent GPGPU architecture details at Nvidia's website: http://nvidia.com/cuda . Nimur (talk) 01:54, 5 July 2013 (UTC)[reply]
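For concreteness, the bandwidth arithmetic above works out like this (a back-of-the-envelope sketch in Python, using the illustrative numbers from the post, not measurements of any real card):

clock_hz = 1e9                 # 1 GHz GPU clock
bits_per_transaction = 4 * 32  # four 32-bit integers per work item
# A fully-utilized GPU both reads and writes one transaction per clock:
required_bits_per_second = 2 * bits_per_transaction * clock_hz
print('Required bus bandwidth: %.0f Gbit/s' % (required_bits_per_second / 1e9))
# prints: Required bus bandwidth: 256 Gbit/s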
Thank you Stephan Schulz and Nimur for your very informative answers. But I still don't get why they need 2000 "stream cores" running at 900 MHz instead of 500 cores running at 3600 MHz. That wouldn't demand faster memory controllers, and on-die cache already runs at 3.6 GHz in CPUs. That would clearly make GPUs much cheaper. --Kharon (talk) 10:01, 6 July 2013 (UTC)[reply]
Not every modern GPU is a massively-parallel behemoth; some are smaller, cheaper, and perform differently - better on some benchmarks, worse on others. For example, Intel HD Graphics are smaller and cheaper than the flagship Nvidia GPUs, yet they perform well enough for many common applications. GPUs for mobile platforms are designed to handle typical mobile workloads, so they have fewer cores and emphasize energy efficiency. Or consider the Tegra 4, specifically its capability to shuttle data between a GPU, a CPU, and an ISP; it intentionally blurs the boundary between CPU and GPU logic. So, there is a lot of variety in the available platforms; today's technology defines a price, power, and performance design envelope; and market forces dictate that the products available to end-users cluster around the extreme edges of that design envelope. Nimur (talk) 21:38, 6 July 2013 (UTC)[reply]

How do solar models work

Do all scientists use computer simulations and solar models to calculate the future of the Sun, or do only certain groups of astrophysicists do that? Because I have trouble thinking in concrete details, I thought [1] that when scientists make a solar model, all the variables are well defined. I cannot tell when they are guessing at variables they don't know. When astrophysicists create a solar model, do they have to fill in all the variables as data entries? How can they guess at certain variables if they have to fill in every variable on the solar model? If they don't know one, can they just leave that entry blank? Do solar models require all variables to be filled in in order to run the simulation? Could these two documents I linked above have a lot of errors? I can't tell, because I am not a concrete thinker. --69.233.254.115 (talk) 20:20, 4 July 2013 (UTC)[reply]

Neither article is based on the very latest data; one is from 1993 and the other is from 1997. The intended audiences are also very different: the first is published in ApJ, a very well-respected scientific journal in which astrophysicists publish the results of their research, and it uses language and assumes knowledge that other astrophysicists would be familiar with. The second has a more popular style, suitable for a lecture to university undergraduates and the wider public at an observatory.
As I found out at university, there is no one standard computer model of stellar evolution, and each model is (or at least our model was) wildly sensitive to the input conditions. For most models some data is well known, some has pretty good estimates (within a small range of values), and other data can vary widely. However, most computer models will treat a blank value as zero, rather than as "I don't know", unless they have specifically been written to account for this. While zero pressure in space isn't too bad an estimate, zero kelvin at the star's surface is a really bad estimate. Astronaut (talk) 18:29, 5 July 2013 (UTC)[reply]
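To illustrate that last point in code: a model that reads its inputs without explicit checks will happily run with a zero wherever a measurement is missing. A toy sketch in Python (hypothetical parameter names, not any real stellar-evolution code):

def run_solar_model(surface_temp_k=None, core_pressure_pa=None):
    # Refuse to run with missing inputs rather than silently treating them as zero.
    if surface_temp_k is None:
        raise ValueError('surface temperature not supplied - refusing to assume 0 K')
    if core_pressure_pa is None:
        raise ValueError('core pressure not supplied - refusing to assume 0 Pa')
    # ... the actual simulation would go here ...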
Does the age of the articles really matter for how accurate the information is? How would newer data make older data more accurate? How can people study things more accurately over just a 16-20 year timeframe? Is it that short phases of only 10-50 million years need deeper study? So some models can estimate well, and others can range widely. What happens if you write "I don't know" in the input entries? On an astronomical timescale, 100 million years really is only a small fraction, right? --69.233.254.115 (talk) 20:12, 5 July 2013 (UTC)[reply]