# GPU-Z Shared Memory Layout



## W1zzard (Jul 9, 2008)

```
#define SHMEM_NAME _T("GPUZShMem")
#define MAX_RECORDS 128

#pragma pack(push, 1)
struct GPUZ_RECORD 
{
	WCHAR key[256];
	WCHAR value[256];
};

struct GPUZ_SENSOR_RECORD
{
	WCHAR name[256];
	WCHAR unit[8];
	UINT32 digits;
	double value;
};

struct GPUZ_SH_MEM
{
	UINT32 version; 	 // Version number, 1 for the struct here
	volatile LONG busy;	 // Is data being accessed?
	UINT32 lastUpdate; // GetTickCount() of last update
	GPUZ_RECORD data[MAX_RECORDS];
	GPUZ_SENSOR_RECORD sensors[MAX_RECORDS];
};
#pragma pack(pop)
```

If you use this shared memory in your application, please leave a short comment.


----------



## OlafSt (Jul 9, 2008)

STLCD fully supports your shared memory interface 

Greets,

OlafSt


----------



## ps3divx.com (Jul 13, 2008)

hi!

thx for the shared memory support! it works really well (i use it in C# via a C++ wrapper class), but unfortunately i found no way to access the temperatures and load of both GPUs at the same time (i have a HD3870 X2). 

hmm, is there any way? it's really disappointing that NO tool has this feature..


----------



## GSG-9 (Jul 13, 2008)

I really don't mean to sound stupid here, but is this code I can integrate into a C++ program to utilise memory locations on the video card instead of standard memory locations?


----------



## brianhama (Jul 13, 2008)

PS3Divx,

Would you be willing to post that wrapper class you created?  If not, would it be feasible to access this shared memory from managed code?  I am primarily a .NET developer and this isn't something I have done before.

Thanks,

Brian P. Hamachek


----------



## ps3divx.com (Jul 13, 2008)

hmmm i don't know if i got you right, but i think the answer is "no". this "code" is for guys who want to access the data read by GPUZ in their own programs. so for example i want to use the temperature and core load of both of my GPUs in my own C# application. to get these values i access GPUZ's shared memory, where the data is stored. hope i could help you a bit. 

E: @brianhama: hi, yeah i'm also quite a C++ newbie, but i managed to create something that works for me.. if you like i can give you the files, it isn't many lines. however i can't guarantee that it's really good and clean code, if you know what i mean..^^


----------



## brianhama (Jul 13, 2008)

I understand that.  I am asking if you could post some code on how to access shared memory.


----------



## ps3divx.com (Jul 13, 2008)

alright, i will post the documented code, files and visual studio project soon (in the next 1-3 days), for anyone who needs it..


----------



## W1zzard (Jul 13, 2008)

at this time it is only possible to get the readings for the currently selected video card (in gpuz)


----------



## ps3divx.com (Jul 13, 2008)

hmm i see.. is it possible to extend it to all cores in the next version? that would really help me out. however, i think the whole application is designed to show only one core at a time, so it might be a bit complicated to get all data at the same time.. so if that's absolutely not possible, could you at least add some kind of command line argument that defines which of the cores is selected at startup?

because then i could start gpuz twice (one instance per core) and the data of both cores is written into (the same) shared memory. i could read the data every 500 ms in my app so i get both values.. a bit complicated and dirty, but it would serve my needs if shared memory for all cores is too complex. 

PS: at the moment i'm pimping my neat-o wrapper class for you guys.. just love it.


----------



## brianhama (Jul 14, 2008)

Thanks PS3Divx; I look forward to your post.

Brian P. Hamachek


----------



## OlafSt (Jul 14, 2008)

Maybe it is a nice idea to have a "GPU Count" value in the shared memory block. Then, for any further GPU, just create another shared-mem block called "GPUZShMem_2", "GPUZShMem_3" and so on. I think this is the easiest way to accomplish this, as nobody knows how many GPUs we will someday have on a graphics card...

BTW, for next release a "minimize to tray"-Feature would be nice 

Greets,

Olaf


----------



## ps3divx.com (Jul 14, 2008)

OlafSt said:


> BTW, for next release a "minimize to tray"-Feature would be nice



*sign* 

I think i will finish the wrapper class today.. if all works well, i will post it in the later afternoon/night.


----------



## JohnnyUT (Jul 15, 2008)

Alright, here we go with the C# wrapper class for GPUZ!  (as i said:  it's the "later" night.. )

With it you can access the shared memory of GPUZ from your C# applications quite conveniently.
For example, the following lines will print the temperature and the core load to the console:


```
Gpuz.Wrapper.Open();
Console.WriteLine(Gpuz.Wrapper.SensorValue(2));  //temp
Console.WriteLine(Gpuz.Wrapper.SensorValue(4));  //load
Gpuz.Wrapper.Close();
```

Here is the link to the RAR file (unfortunately it's too big to upload directly to the board :/):
http://www.file-upload.net/download-980228/GpuzWrapper.rar.html

It includes: 
- 2 DLLs to include and use in your C# application
- the visual studio (2005) project to modify and compile the DLLs on your own
- a visual studio (2005) project which shows the proper usage of the wrapper class
- a small readme file which lists the steps to take if you want to use the wrapper class in your C# application.

Well, i think it's pretty handy and some guys might have a use for this! 
Oh, and of course the whole code is absolutely open source and free to modify and use. 

Enjoy! 

PS: Feel free to ask if something is unclear.


----------



## puma99dk| (Jul 19, 2008)

i just tested out GPU-Z 0.2.6 on my server. it has an NVIDIA GeForce 6150SE, and the "Graphics Card" tab can't show the memory clock but the "Sensors" tab can
Validator Link: http://www.techpowerup.com/gpuz/kn7yh/

Motherboard is an Asus M2N-XE: http://uk.asus.com/products.aspx?l1=3&l2=149&l3=595&l4=0&model=1976&modelmenu=1


----------



## boblemagnifique (Sep 11, 2008)

I have the same problem with the G45 (Intel DG45FG)!!

The memory isn't detected with GPU-Z 0.27


----------



## Mr.John (Oct 9, 2008)

W1zzard, is it possible to get the GPU count and whether SLI/CrossFire is enabled? What properties should I look at? Sorry, I have no SLI/CrossFire setup to test with.

BTW, I'll use GPU-Z in Framebuffer Crysis Warhead Benchmark Tool.


----------



## caldran (Dec 14, 2008)

i need the vb code for gpuz... where can i get it?


----------



## Toolmaster (Feb 16, 2009)

@JohnnyUT

Do you also have an x64 GpuzShMem.dll?

Greets, Toolmaster


----------



## taloche (Feb 16, 2009)

W1zzard said:


> If you use this shared memory in your application please leave a short comment.




Oops! I never saw this. I am sorry. 

I use this shared memory in a plugin for Samurize, here: http://www.samurize.com/modules/mydownloads/viewcat.php?cid=6 "A plugin for GPU Z"

This shared memory is a very good feature, a great idea, and very useful.

Thanks.


----------



## ascl (Sep 24, 2009)

Thanks for publishing this. 

If anyone is interested, I have sample C# code for accessing this posted on my blog here.


----------



## tomoyo (Mar 15, 2010)

Thanks for this interface. Now I have a monitoring plugin for RivaTuner, making use of the GPU load and other information reported by GPU-Z. It works very well.

I have described some details on my blog, but I'm sorry, it is in Chinese.


----------



## thor2002ro (May 2, 2010)

can someone please convert the shared memory structure to delphi?


----------



## SirReal (Jul 5, 2010)

LCDSirReal, or SirReal's multipurpose G15 plugin, is a plugin for the Logitech G13/G15 gaming keyboards. It also works on the G19, in black and white. It provides more features than all of the Logitech bundled plugins together, while using much less CPU and memory. Written entirely in C++, it's very stable and efficient. Among the more notable features are support for applications (WinAMP, iTunes, SpeedFan, FRAPS, TeamSpeak...) and real-time system monitoring of networking, CPU and memory usage and more. And of course it shows the basics such as time, date and waiting mail as well.






Changes in 2.8.3:

- Adds support for the GPU-Z application
- You can now prevent the test window from appearing even if the keyboard is unresponsive; see lcdsirreal.txt for more info

http://www.linkdata.se/


----------



## lastOne (Jul 4, 2011)

- What happens if 2 or more GPUs are present? Does the structure posted in the OP provide all the info?
- The OP is rather old; is the structure still up to date?


----------



## SirReal (Jul 4, 2011)

lastOne said:


> What happens if 2 or more GPUs are present?
> 
> - Does the structure posted in the OP provide all the info?
> - Is the OP structure still up to date?



The structure just contains name-value pairs, and so does not limit itself to one GPU.
For more details, I suggest you contact the author of GPU-Z.


----------



## pengtianlei (Sep 25, 2012)

*how to get free video card memory*



W1zzard said:


> ```
> #define SHMEM_NAME _T("GPUZShMem")
> #define MAX_RECORDS 128
> 
> ...



Hello! I want to know, how can I get how much video card I have used? My application uses the C++ language. I want to get the Free physical video card, like Gpu-Z. Do you have a suggestion?


----------



## Peter1986C (Sep 26, 2012)

pengtianlei said:


> I want to get the Free physical video card, like Gpu-Z



Sorry but what is that supposed to mean?


----------



## pengtianlei (Sep 26, 2012)

*Video Memory*



Chevalr1c said:


> Sorry but what is that supposed to mean?



Thanks for your reply.
I mean: how can I get how much video memory I have used? 

Like the GPU-Z software (Memory Usage (Dedicated))


----------



## Brusfantomet (Jan 5, 2013)

I am guessing that it's not possible to get someone to explain how GPU-Z gets the temps, but is it possible for it to start in minimized mode?

I am guessing that what SirReal says still holds true: one instance of GPU-Z now gives you the temps of all your GPUs


----------



## W1zzard (Jan 6, 2013)

Brusfantomet said:


> I am guessing that it's not possible to get someone to explain how GPU-Z gets the temps, but is it possible for it to start in minimized mode?
> 
> I am guessing that what SirReal says still holds true: one instance of GPU-Z now gives you the temps of all your GPUs



There is not a single way; it depends on the card, OS, driver, etc.

GPU-Z has a -minimized command line parameter you can use.


----------



## Brusfantomet (Jan 6, 2013)

ok, thank you


----------



## Ganesh_AT (Dec 18, 2013)

*Edit: W1zzard has already answered this in the other thread here:* http://www.techpowerup.com/forums/t...-gpu-z-shared-memory-update-frequency.195840/

I am using the shared memory [currently with a DLL built from the code here: https://github.com/JohnnyUT/GpuzShMem , but I have a newer version coming up with that dependency removed, using some code modified from here: http://www.techpowerup.com/forums/threads/gpu-z-shared-memory-class-in-c.164244/#post-2605403 ] in my freeware project 'Remote Sensor Monitor'.

Details about the project can be found here:

http://www.hwinfo.com/forum/Thread-Introducing-Remote-Sensor-Monitor-A-RESTful-Web-Server

I am posting in this thread with reference to some 'discrepancies' I observed in the shared memory update frequency. I was assuming it could be polled every second. I have a Perl script running on a client machine accessing the GPU-Z shared memory values over HTTP based on this polling interval. A sample debug output of that script is below.

The values in the CSV file lines are: 'lastUpdate' from GPU-Z shared memory converted from milliseconds to seconds, GPU load, and GPU power consumption. The line next to it has parameters from the client machine: Elapsed refers to the amount of time in seconds between the request to the shared memory being sent over the network and the data coming back (this includes network delays etc.). Usually it is between 10 ms and 250 ms, skewed towards the lower delays. The SleepInterval number corresponds to the time after which the next request to the shared memory is placed on the network.








```
Enqueueing to CSV File : , 331202, 0.002799, 0.046480
Elapsed: 0.022736, SleepInterval: 0.977264
Enqueueing to CSV File : , 331202, 0.002799, 0.046480
Elapsed: 0.015493, SleepInterval: 0.984507
Enqueueing to CSV File : , 331204, 0.001460, 0.047216
Elapsed: 0.025459, SleepInterval: 0.974541
Enqueueing to CSV File : , 331204, 0.001460, 0.047216
Elapsed: 0.010567, SleepInterval: 0.989433
Enqueueing to CSV File : , 331207, 0.000000, 0.000000
Elapsed: 0.01458, SleepInterval: 0.98542
Enqueueing to CSV File : , 331207, 0.000000, 0.000000
Elapsed: 0.056981, SleepInterval: 0.943019
Enqueueing to CSV File : , 331207, 0.000000, 0.000000
Elapsed: 0.097846, SleepInterval: 0.902154
Enqueueing to CSV File : , 331209, 0.001490, 0.046605
Elapsed: 0.104709, SleepInterval: 0.895291
Enqueueing to CSV File : , 331209, 0.001490, 0.046605
Elapsed: 0.019662, SleepInterval: 0.980338
Enqueueing to CSV File : , 331212, 0.000000, 0.000000
Elapsed: 0.016406, SleepInterval: 0.983594
Enqueueing to CSV File : , 331212, 0.000000, 0.000000
Elapsed: 0.018795, SleepInterval: 0.981205
Enqueueing to CSV File : , 331212, 0.000000, 0.000000
Elapsed: 0.018195, SleepInterval: 0.981805
Enqueueing to CSV File : , 331214, 0.000000, 0.000000
Elapsed: 0.019669, SleepInterval: 0.980331
Enqueueing to CSV File : , 331214, 0.000000, 0.000000
Elapsed: 0.019537, SleepInterval: 0.980463
Enqueueing to CSV File : , 331217, 0.000000, 0.000000
Elapsed: 0.027269, SleepInterval: 0.972731
Enqueueing to CSV File : , 331217, 0.000000, 0.000000
Elapsed: 0.020597, SleepInterval: 0.979403
Enqueueing to CSV File : , 331217, 0.000000, 0.000000
Elapsed: 0.019204, SleepInterval: 0.980796
Enqueueing to CSV File : , 331219, 0.000000, 0.000000
Elapsed: 0.224437, SleepInterval: 0.775563
Enqueueing to CSV File : , 331219, 0.000000, 0.000000
Elapsed: 0.022575, SleepInterval: 0.977425
Enqueueing to CSV File : , 331222, 0.000000, 0.000000
Elapsed: 0.022621, SleepInterval: 0.977379
Enqueueing to CSV File : , 331222, 0.000000, 0.000000
```




I would expect the update time provided by GPU-Z in the shared memory to closely follow 1-second steps, but I see the parameters being repeated multiple times and the update time skipping by 2 or 3 seconds. I am wondering why I am unable to access updated values more frequently. I did face a similar issue (though not with this much skew) with HWiNFO, which I solved by setting the scan interval to 900 ms in the HWiNFO software. Any similar feature (or any way to ensure I can read updated GPU-Z shared memory values every second) would be awesome to have.


----------



## eFMer (Dec 29, 2013)

Is the shared memory layout on the first page still valid?

I've added GPUZ to TThrottle http://www.efmer.eu/boinc/

GPUZ_RECORD data[MAX_RECORDS]; seems to align properly.

The following record, GPUZ_SENSOR_RECORD sensors[MAX_RECORDS];,

starts at address xxxx10 according to the debugger:
name    0x0000000002270010 "U Core Clock"    wchar_t [256]

But it actually starts at 0x000000000227000B.

I can move the pointer 4 bytes back, but I'd like to know why. The only thing I can think of is that the data structure is somehow different, but that doesn't explain why the data array is aligned as it should be.


```
#define MAX_RECORDS 128

typedef struct GPUZ_RECORD
{
    WCHAR key[256];
    WCHAR value[256];
}GPUZ_RECORD;

typedef struct GPUZ_SENSOR_RECORD
{
    WCHAR name[256];
    WCHAR unit[8];
    UINT32 digits;
    double value;
}GPUZ_SENSOR_RECORD;

typedef struct GPUZ_SH_MEM
{
    UINT32 version;            // Version number, 1 for the struct here
    volatile LONG busy;        // Is data being accessed?
    UINT32 lastUpdate;        // GetTickCount() of last update
    GPUZ_RECORD data[MAX_RECORDS];
    GPUZ_SENSOR_RECORD sensors[MAX_RECORDS];
}GPUZ_SH_MEM, *LPGPUZ_SH_MEM;
```

It seems the alignment fails on the double value;

I changed it to BYTE bValue[8];, which holds the 64-bit double.


```
double dValue;
    memcpy(&dValue, lpMem->sensors[2].bValue, sizeof dValue);
```

This generates the right double. It's a bit awkward, but Visual Studio aligns the whole block when using a double.


----------



## Beemer Biker (Jun 22, 2019)

lastOne said:


> - what happens if 2 or more GPUs are present, the structure posted on the OP provide all the info ?
> - the OP is rather old, does the OP structure is still up to date ?



I was able to read three 1070 Ti NVidia devices (2 Dell, one non-Dell) by bringing up three instances of GPU-Z and selecting a different GPU for each instance. My app (which I am working on just for my own usage) then counted the number of GPUs available using 


```
theprocess.MainWindowTitle.Contains("TechPowerUp GPU-Z"))
```

I then "read" from the memory map for each occurrence of that app using example code from ascl's blog (post #22, thanks ascl!) 

```
data = (GPUZ_SH_MEM)Marshal.PtrToStructure(map, typeof(GPUZ_SH_MEM));
```

I verified that my two Dell IDs and the one non-Dell ID were listed in the "data" items.  I also set the three instances of GPU-Z to the same non-Dell GPU, and my app did not find any Dell device IDs in the 3 results returned.  I assume this is how more than one device can be read.  I wish to plot GPU load of multiple GPUs on the same graph.  Alternately, I suspect I can simply log each app to disk and combine several logs into my chart, which is far easier as I don't need a real-time display.


----------



## Beemer Biker (Jun 25, 2019)

Follow-up: (did not see how to edit previous post)
More testing showed that each "read" from the memory map was just as likely to get GPU#1 three times in a row as it was to get GPUs 1, 2 and 3 respectively on a 3 GPU system with three instance of GPUz running.  In addition I was unable to spot anything in the "DATA" record that could be used to identify which of two identical Dell gtx1070 boards the sensor record belonged to but I didn't spend a lot of time looking at that.  A search for the name of the memory map in both the binary and the "in memory" image failed as I thought I could edit the binary to name the map differently.   I want to do a performance comparison of nVidia and ATI boards in PCIe slots x16, x8, x1 with risers such as 4-in-1 etc.  It will be easier to log results to a file such as "log_nv0.txt",  "log_nv1.txt", etc for my study.  When I name the log file I know which GPU was mining which project which would be difficult to find looking at data in the memory map.  It would be helpful if the executable had command line options to select a device and give a log name as I could then shell out the GPUz app from my analysis program.  I appreciate getting info on the memory map and I learned quite a bit, the main learning was to stick with C# and use marshaled code and not get into MFC / ATL to access the map natively.  I have been spoiled by c#.


----------

