
NVIDIA CUDA Emulator for every PC

some shady Chinese executable on April Fools' Day ...

man, what an awesome opportunity to expand a botnet ...

in other words:
i would be extremely disappointed if this binary turns out not to be a trojan ...
 
it's not a trojan, here is the source code:

Code:
// CUDALoader_April01Dlg.cpp : implementation file
//

#include "stdafx.h"
#include "CUDALoader_April01.h"
#include "CUDALoader_April01Dlg.h"

#ifdef _DEBUG
#define new DEBUG_NEW
#endif


// CCUDALoader_April01Dlg dialog




CCUDALoader_April01Dlg::CCUDALoader_April01Dlg(CWnd* pParent /*=NULL*/)
	: CDialog(CCUDALoader_April01Dlg::IDD, pParent)
{
	m_hIcon = AfxGetApp()->LoadIcon(IDR_MAINFRAME);
}

void CCUDALoader_April01Dlg::DoDataExchange(CDataExchange* pDX)
{
	CDialog::DoDataExchange(pDX);
	DDX_Control(pDX, IDC_EDIT1, m_exe);
}

BEGIN_MESSAGE_MAP(CCUDALoader_April01Dlg, CDialog)
	ON_WM_PAINT()
	ON_WM_QUERYDRAGICON()
	//}}AFX_MSG_MAP
	ON_BN_CLICKED(IDOK, &CCUDALoader_April01Dlg::OnBnClickedOk)
	ON_BN_CLICKED(IDC_BUTTON1, &CCUDALoader_April01Dlg::OnBnClickedButton1)
END_MESSAGE_MAP()


// CCUDALoader_April01Dlg message handlers

BOOL CCUDALoader_April01Dlg::OnInitDialog()
{
	CDialog::OnInitDialog();

	// Set the icon for this dialog.  The framework does this automatically
	//  when the application's main window is not a dialog
	SetIcon(m_hIcon, TRUE);			// Set big icon
	SetIcon(m_hIcon, FALSE);		// Set small icon

	// TODO: Add extra initialization here

	return TRUE;  // return TRUE  unless you set the focus to a control
}

// If you add a minimize button to your dialog, you will need the code below
//  to draw the icon.  For MFC applications using the document/view model,
//  this is automatically done for you by the framework.

void CCUDALoader_April01Dlg::OnPaint()
{
	if (IsIconic())
	{
		CPaintDC dc(this); // device context for painting

		SendMessage(WM_ICONERASEBKGND, reinterpret_cast<WPARAM>(dc.GetSafeHdc()), 0);

		// Center icon in client rectangle
		int cxIcon = GetSystemMetrics(SM_CXICON);
		int cyIcon = GetSystemMetrics(SM_CYICON);
		CRect rect;
		GetClientRect(&rect);
		int x = (rect.Width() - cxIcon + 1) / 2;
		int y = (rect.Height() - cyIcon + 1) / 2;

		// Draw the icon
		dc.DrawIcon(x, y, m_hIcon);
	}
	else
	{
		CDialog::OnPaint();
	}
}

// The system calls this function to obtain the cursor to display while the user drags
//  the minimized window.
HCURSOR CCUDALoader_April01Dlg::OnQueryDragIcon()
{
	return static_cast<HCURSOR>(m_hIcon);
}


void CCUDALoader_April01Dlg::OnBnClickedOk()
{
	// Three nested confirmation prompts; cancelling any of them quits the app.
	if (AfxMessageBox(_T("Please confirm you want to load CUDA emulator for OpenCL."), MB_ICONQUESTION|MB_OKCANCEL)==IDCANCEL)
		PostQuitMessage(0);
	else
		if (AfxMessageBox(_T("Are you sure to load CUDA emulator for OpenCL?"), MB_ICONQUESTION|MB_YESNO)==IDNO)
			PostQuitMessage(0);
		else
			if (AfxMessageBox(_T("CUDA OpenCL emulator requires your confirmation to proceed."), MB_ICONWARNING|MB_OKCANCEL)==IDCANCEL)
				PostQuitMessage(0);
			else
			{
				// Fake "loading" delay, then a bogus error that Retry can never fix.
				AfxMessageBox(_T("Thank you, please wait a moment..."), MB_ICONINFORMATION|MB_OK);
				Sleep(10000);
again:
				int res=AfxMessageBox(_T("CUDA for OpenCL encountered a compatibility issue."), MB_ICONERROR|MB_ABORTRETRYIGNORE);
				if (res==IDABORT)
				{
					PostQuitMessage(0);
				}
				if (res==IDRETRY)
					goto again;
				// Even after Abort, execution falls through to the punchline below.
				AfxMessageBox(_T("What if I don't want to?"), MB_OK);
				AfxMessageBox(_T("April Fool!\n\nfrom www.techpowerup.com."), MB_OK);
			}

			PostQuitMessage(0);
}

// Button next to the edit box: open a file dialog filtered to .exe and show the chosen path.
void CCUDALoader_April01Dlg::OnBnClickedButton1()
{
	CFileDialog o(FALSE,_T("bin"),NULL,OFN_ENABLESIZING|OFN_FILEMUSTEXIST,_T("Executable Files (*.exe)|*.exe||"));
	if (o.DoModal() == IDOK)
	{
		m_exe.SetWindowText(o.GetPathName());
	}
}
 
best app evar ... lol i believed it was true :D
 
Help! I was trying to calculate the number pi with the power of CUDA and a 486 CPU, but got stuck in an infinite loop at the millionth digit. W1z, your program is stuck at the number 1337 and doesn't want to budge!
 
so what does it run on then, thin air and sunshine? :laugh:

Havok = CPU acceleration
PhysX = GPU or CPU acceleration

You can't call something CPU accelerated. You're possibly too young to recall the era before hardware 3D, but the entire point of calling something hardware accelerated is that DEDICATED hardware exists for JUST the purpose of running that code, relieving the CPU of that stress.

To put it briefly: CPU is software, and NOT accelerated.
 
somehow I don't think I'm too young at 41 yrs old

so you're saying a program that is written to take advantage of the processing power of a modern CPU is not hardware accelerated?
 
exactly. hardware accelerated means dedicated hardware is used to accelerate it BEYOND what software on a CPU can do.
 
sorry, but your thinking on acceleration is wrong

a program written to use only plain integer and/or float processing is purely software
a program written to use SSE or 3DNow! processing is accelerated beyond simple int/float processing (the days of the 286, 386, 486)

take for example video conversion, say .MOV to .AVI: it'll take much longer if it isn't using the likes of MMX, SSE, SSE2 or SSE3 to accelerate the process on the CPU

and not all hardware designed to relieve a CPU of some or all of its work makes what it does hardware accelerated
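
To illustrate what the posters mean by SSE accelerating work that still runs on the CPU, here is a minimal hypothetical sketch (not from the thread's program; the function names sum_scalar and sum_sse are made up for the example): the same float sum written as a plain scalar loop and with SSE intrinsics, which handle four floats per instruction.

Code:
// Hypothetical sketch: scalar vs. SSE summation of an array of floats.
#include <xmmintrin.h>  // SSE intrinsics (_mm_* operations on 128-bit registers)
#include <cstdio>

// Plain int/float-style loop: one addition per iteration.
float sum_scalar(const float* data, int n)
{
    float total = 0.0f;
    for (int i = 0; i < n; ++i)
        total += data[i];
    return total;
}

// SSE version: four additions per iteration.
float sum_sse(const float* data, int n)
{
    __m128 acc = _mm_setzero_ps();
    int i = 0;
    for (; i + 4 <= n; i += 4)
        acc = _mm_add_ps(acc, _mm_loadu_ps(data + i));

    // Add up the four lanes of the accumulator.
    float lanes[4];
    _mm_storeu_ps(lanes, acc);
    float total = lanes[0] + lanes[1] + lanes[2] + lanes[3];

    for (; i < n; ++i)  // leftover elements
        total += data[i];
    return total;
}

int main()
{
    float data[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    std::printf("scalar: %f  sse: %f\n", sum_scalar(data, 8), sum_sse(data, 8));
    return 0;
}

Whether calling this "hardware accelerated" is fair is exactly what the thread is arguing about: the SIMD units doing the extra work are still part of the CPU, not a separate device.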
 
Mussels, it looks like Athlonite is actually too old, and he still remembers the days when you bought a floating-point coprocessor to accelerate floating point. Athlonite, things have moved forward since then. SSE and 3DNow! are over 10 years old; they're a standard part of the CPU. Ten years from now, when all CPUs have GPU-like stuff on them, it will also be silly to call that acceleration.
 
Umm, I'm with Mussels on this one too... even back when we classed graphics rendering as software or hardware accelerated (pre-NVIDIA days; I remember having to play most 3D games in software mode on my Savage 3D), we didn't have anything fancy like SSE1/2/3, MMX or 3DNow! :)
 
Savage 3D? haha, lol, try an S3 ViRGE with 4MB and then add a Voodoo2 for real 3D. and yes, i can remember using a math co-pro
 
ViRGE, pah! That was for pussies. It gave you all that texturing stuff. Now the Matrox Millennium was a real man's card. We had to get up at 5am to draw the pixels by hand if we wanted any texturing. Then the yungins got that newfangled Matrox Mistake, and they had the job easy, just having to wash the monitor to blur them samples on account of not having the bi-linear stuff that you ViRGE spoiled brats took for granted.
 
well, i did use to own an ISA Trident with 256KB, then i went to a 1MB jobbie
 
this would be sweet if it were actually true. I think if someone actually tried to do it, they would eventually find a way to make it work.
 