Beginning DirectX 9 game programming source code
Asked 10 years, 3 months ago. Active 7 years, 6 months ago. Viewed 5k times.

Q (Dennis): The official site is dead and I can't seem to find the 3 main files used for all the projects.

Comment on an answer (Dennis): Well, thanks for trying, but this is not the right code. The code I need was on the official website, written by Frank Luna. I've also found those, but these ones are not the originals; they were modified by users, and thus don't work with only the 3 files.

I guess they are not easy to find.

The full list of namespaces included in Managed DirectX is listed in Table 1.1.

Table 1.1. Managed DirectX Namespaces

Microsoft.DirectX — Parent namespace, holds all common code.
Microsoft.DirectX.Direct3D — Direct3D graphics API.
Microsoft.DirectX.DirectDraw — DirectDraw graphics API.
Microsoft.DirectX.DirectPlay — DirectPlay networking API.
Microsoft.DirectX.DirectSound — DirectSound audio API.
Microsoft.DirectX.DirectInput — DirectInput user input API.
Microsoft.DirectX.AudioVideoPlayback — Simple audio and video playback API.
Microsoft.DirectX.Diagnostics — Simple diagnostics API.
Microsoft.DirectX.Security — Underlying structure for DirectX code access security.
Microsoft.DirectX.Security.Permissions — Permission classes for DirectX code access security.

As you can see, this encompasses most functionality included with DirectX. During the course of this book, we will deal extensively with the Direct3D namespace, but will cover all of the other areas as well.

Before you begin writing the next big game using Managed DirectX, there are a few things you need to have. First, you will need a source code editor and the runtime environment. I would recommend Visual Studio .NET, as that is what Microsoft supports. Regardless of what editor you use, you will need version 1.1 of the .NET runtime.

Next you will need to have the DirectX 9.0 runtime installed. I would recommend installing the DirectX 9.0 SDK. This will give you the developer runtime, as well as many other samples, and documentation on Managed DirectX. The graphics samples in the first few chapters should run on just about any modern graphics card. However, later samples will require a more advanced video card, capable of using vertex and pixel shaders.

The GeForce 3 and above should be sufficient for these samples; however, I would recommend a card capable of using shader model 2.0. Before we actually delve into Managed DirectX coding, there are some things it will be assumed you know.

If you are new to programming in general, this book probably isn't for you. The book is targeted at developers who already have general programming experience and are looking for information on building rich multimedia applications using Managed DirectX. The code written in the text of this book will be C#, but the included CD will also contain versions of the code written in Visual Basic .NET.

The included CD contains the DirectX 9.0 SDK. You will also find the .NET runtime version 1.1. With these things out of the way, we are ready to start using Managed DirectX.

Now, we just need to figure out how to actually write the code for this next big game. With the processing power of today's modern GPU (graphics processing unit), realistic scenes can be rendered in real time.

Have you ever been playing the latest 3D game and wondered, "How do they do that?" Managed Direct3D allows developers an easy and fast way to get complex or simple graphics and animations on the screen. To accomplish this, follow these steps:

1. The first thing we'll want to do is load Visual Studio .NET and create a new project. Let's select the Visual C# projects area, and create a new Windows Application project. We will need a place to do our rendering, and the standard Windows Form that comes with this project works perfectly.

Name the project whatever you like and create the new project. After the project is created, we will need to make sure the Managed DirectX references are added to the project so we can use the components. Click the Add References menu selection in the Project menu, and add the Microsoft.DirectX as well as the Microsoft.DirectX.Direct3D references. For now, that's all we'll need.

With the references now loaded into the project, we could get right into writing some Managed DirectX code, but before we do that, we should add two new items into our "using" clauses so we don't need to fully qualify everything we'll be using.

You can do this by opening the code window for the main Windows Form in your application (by default, Form1.cs) and adding the following to the list of using clauses:

using Microsoft.DirectX;
using Microsoft.DirectX.Direct3D;

While this step isn't necessary, it saves a lot of typing, since you won't need to fully qualify each item before using it. Now, we are ready to start writing our first Managed DirectX application, beginning with the Device class. You can think of this class as analogous to the actual graphics device in your computer. A device is the parent of all other graphical objects in your scene.

Your computer may have zero or many devices, and with Managed Direct3D, you can control as many of them as you need. There are three constructors that can be used to create a device.

For now, we're only going to use one of them; however, we will get to the others in later chapters. The one we're concerned about takes the following arguments:

public Device(
    System.Int32 adapter,
    Microsoft.DirectX.Direct3D.DeviceType deviceType,
    System.Windows.Forms.Control renderWindow,
    Microsoft.DirectX.Direct3D.CreateFlags behaviorFlags,
    Microsoft.DirectX.Direct3D.PresentParameters presentationParameters)

Now, what do all of these arguments actually mean, and how do we use them? Well, the first argument "adapter" refers to which physical device we want this class to represent.

Each device in your computer has a unique adapter identifier (normally 0 through one less than the number of devices you have). Adapter 0 is always the default device. One of the other constructors instead takes a pointer to an unmanaged device; this can be used if you have a need to work with an unmanaged application from your managed code.

The next argument, DeviceType, tells Direct3D what type of device you want to create. The most common value you will use here is DeviceType.Hardware, which means you want to use a hardware device. Another option is DeviceType.Reference, which will allow you to use the reference rasterizer; it implements all features of the Direct3D runtime, but runs extremely slowly.

You would use this option mainly for debugging purposes, or to test features of your application that your card doesn't support. The last value is DeviceType.Software, which will allow you to use a custom software rasterizer. Unless you know you have a custom software rasterizer, ignore this option.

The "renderWindow" argument binds a window to this device. Since the Windows Forms Control class contains a window handle, it makes it easy to use a derived class as our rendering window. You can use a form, panel, or any other Control-derived class for this parameter. For now, we'll use forms.

The next parameter is used to control aspects of the device's behavior after creation. Most of the members of the CreateFlags enumeration can be combined to allow multiple behaviors to be set at once. Some of these flags are mutually exclusive, though, and I will get into those later. We will only use the SoftwareVertexProcessing flag for now. This flag specifies that we want all vertex processing to happen on the CPU.

While this is naturally slower than having the vertex processing happen on the GPU, we don't know for sure whether your graphics card supports hardware vertex processing, so it's safe to assume your CPU can do the processing for now.

All we did was draw a colored triangle inside a window, which could just as easily have been done with GDI. So, how do we actually draw something in 3D and have it look a little more impressive?

Actually, it's relatively simple to modify our existing application to accommodate us. If you remember, earlier when we were first creating the data for our triangle, we used something called transformed coordinates.

These coordinates are already known to be in screen space, and are easily defined. What if we had some coordinates that weren't already transformed though? These untransformed coordinates make up the majority of a scene in a modern 3D game.

When we are defining these coordinates, we need to define each vertex in world space, rather than screen space. You can think of world space as an infinite three-dimensional Cartesian space. You can place your objects anywhere in this "world" you want to. Let's modify our application to draw an untransformed triangle now. We'll first change our triangle data to use one of the untransformed vertex format types; in this case, all we really care about is the position of our vertices, and the color, so we'll choose CustomVertex.

Change your triangle data code to the following:

CustomVertex.PositionColored[] verts = new CustomVertex.PositionColored[3];
verts[0].SetPosition(new Vector3(0.0f, 1.0f, 1.0f));
verts[0].Color = System.Drawing.Color.Aqua.ToArgb();
verts[1].SetPosition(new Vector3(-1.0f, -1.0f, 1.0f));
verts[1].Color = System.Drawing.Color.Black.ToArgb();
verts[2].SetPosition(new Vector3(1.0f, -1.0f, 1.0f));
verts[2].Color = System.Drawing.Color.Purple.ToArgb();

And change your VertexFormat property as well:

device.VertexFormat = CustomVertex.PositionColored.Format;

Now what's this? If you run the application, nothing happens; you're back to your colored screen, and nothing else.

Before we figure out why, let's take a moment to describe the changes. As you can see, we've switched our data to use the PositionColored structure instead. This structure will hold the vertices in world space as well as the color of each vertex. Since these vertices aren't transformed, we use a Vector3 class in place of the Vector4 we used with the transformed classes, because untransformed vertices do not have an rhw component.

The members of the Vector3 structure map directly to the coordinates in world space; x, y, and z. We also need to make sure Direct3D knows we've changed the type of data we are drawing, so we change our fixed function pipeline to use the new untransformed and colored vertices by updating the VertexFormat property.
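The layout difference between the two vertex types can be sketched as plain C++ structs (a sketch for illustration only; the field order and 4-byte packing mirror what the Managed CustomVertex types contain):

```cpp
#include <cassert>
#include <cstdint>

// Illustration of the two vertex layouts discussed in the text.
// An untransformed vertex holds a world-space position (a Vector3);
// a transformed vertex holds a screen-space position plus the rhw
// (reciprocal homogeneous w) component, hence a Vector4.
struct PositionColored           // mirrors CustomVertex.PositionColored
{
    float    x, y, z;            // world-space position
    uint32_t color;              // packed ARGB diffuse color
};

struct TransformedColored        // mirrors CustomVertex.TransformedColored
{
    float    x, y, z, rhw;       // screen-space position + rhw
    uint32_t color;
};
```

The one extra float per vertex (rhw) is exactly what distinguishes the transformed layout.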

So why isn't anything displayed when we run our application? The problem here is that while we're now drawing our vertices in world space, we haven't given Direct3D any information on how it should display them. We need to add a camera to the scene that can define how to view our vertices. In our transformed coordinates, a camera wasn't necessary because Direct3D already knew where to place the vertices in screen space.

The camera is controlled via two different transforms that can be set on the device. Each transform is defined as a 4x4 matrix that you can pass in to Direct3D. The projection transform is used to define how the scene is projected onto the monitor.

One of the easiest ways to generate a projection matrix is to use the PerspectiveFovLH function on the Matrix class. This will create a perspective projection matrix using the field of view, in a left-handed coordinate system. What exactly is a left-handed coordinate system anyway, and why does it matter? In most Cartesian 3D coordinate systems, positive x coordinates move toward the right, while positive y coordinates move up. The only coordinate left is z: in a left-handed system, positive z points away from the viewer (into the screen), while in a right-handed system it points toward the viewer.
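For reference, the matrix a PerspectiveFovLH-style helper builds can be sketched in C++ (the formula matches the documented D3DX left-handed perspective matrix; the Matrix4 type is a stand-in defined here):

```cpp
#include <cassert>
#include <cmath>

// Stand-in 4x4 row-major matrix (Direct3D convention).
struct Matrix4 { float m[4][4] = {}; };

// Left-handed perspective projection from a vertical field of view,
// following the documented D3DX formula: yScale = cot(fovY / 2).
Matrix4 PerspectiveFovLH(float fovY, float aspect, float zn, float zf)
{
    float yScale = 1.0f / std::tan(fovY * 0.5f);
    float xScale = yScale / aspect;
    Matrix4 r;
    r.m[0][0] = xScale;
    r.m[1][1] = yScale;
    r.m[2][2] = zf / (zf - zn);        // depth mapped into [0, 1]
    r.m[2][3] = 1.0f;                  // view-space z copied into w
    r.m[3][2] = -zn * zf / (zf - zn);
    return r;
}
```

The 1 in m[2][3] is what makes the projection perspective: it copies view-space z into w, so the later divide by w shrinks distant geometry.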

Managed DirectX is smart enough to realize when you've created your device with a Windows Form control, and can automatically reset the device for you when this control is resized.

Naturally, you can revert to handling device resets yourself quite easily. There is an event attached to the device called "DeviceResizing", which occurs right before the automatic device reset code. By capturing this event, and setting the Cancel member of the CancelEventArgs class passed in to true, you can turn the automatic reset behavior off. Now, we need to hook up this event handler to our device so it knows not to perform this action.

Add the following line of code immediately after the device creation:

device.DeviceResizing += new System.ComponentModel.CancelEventHandler(this.CancelResize);

with the handler itself defined as:

private void CancelResize(object sender, System.ComponentModel.CancelEventArgs e)
{
    e.Cancel = true;
}

Now run the application once more and after it starts, maximize the window.

As you can easily see, the triangle, while still there and in the same spot, looks horrible. The edges are jagged, and it's just not very nice looking at all. Go ahead and delete the last two sections of code we added. The default behavior of Managed DirectX handles the device resizing for you, and we might as well take advantage of it.

Now that we've got our single triangle drawn and spinning around, what else could make it even better? Why, lights of course! We mentioned lights briefly before, when our triangle turned completely black after we first moved over to our nontransformed triangles.

Actually, we completely turned the lighting off in our scene. The first thing we need to do is to turn that back on, so change the lighting member back to true:

device.RenderState.Lighting = true;

Running the application now, we see we're back to the black triangle again; it's just spinning around now. Maybe we should go ahead and define a light, and turn it on.

You'll notice the device class has a Lights array attached to it, with each member of the array holding the various light properties. We want to define the first light in the scene and turn it on, so let's add the following code into our OnPaint method just after our triangle definition code:

device.Lights[0].Type = LightType.Point;
device.Lights[0].Position = new Vector3();
device.Lights[0].Diffuse = System.Drawing.Color.White;
device.Lights[0].Attenuation0 = 0.2f;
device.Lights[0].Range = 10000.0f;
device.Lights[0].Commit();
device.Lights[0].Enabled = true;

First, we define what type of light we want to display. We've picked a point light, which is a light that radiates light in all directions equally, much like a light bulb would.

There are also directional lights, which travel in a given direction infinitely. You could think of the sun as a directional light (yes, in reality the sun would be a point light, since it does give off light in all directions; however, from our perspective on Earth, it behaves more like a directional light).

Directional lights are only affected by the direction and color of the light and ignore all other light factors (such as attenuation and range), so they are the least computationally expensive light to use.

The last light type is the spot light, which, like its name suggests, is used to define a spot light much like you would see at a concert, highlighting the person on stage. Given the large number of factors making up the spot light (position, direction, cone angle, and so on), they are the most computationally expensive lights in the system.

With that brief discussion on light types out of the way, we'll continue. Next we want to set the position of our point light source. Since the center of our triangle is at 0, 0, 0, we may as well position our light there as well.

The parameterless constructor of Vector3 does this. We set the diffuse component of the light to a white color, so it will light the surface normally. We set the first attenuation property, which governs how the light intensity changes over distance. The range of the light is the maximum distance the light has any effect. Our range far exceeds our needs in this case. Finally, we commit this light to the device, and enable it.
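The attenuation and range behavior described above can be sketched as a small function (an illustration of the documented fixed-function falloff formula, not the driver's actual code):

```cpp
#include <cassert>

// Fixed-function light falloff: intensity scales by
// 1 / (att0 + att1*d + att2*d^2), and the light has no effect at all
// beyond its Range.
float Attenuation(float d, float range, float att0, float att1, float att2)
{
    if (d > range)
        return 0.0f;                     // outside the light's Range
    return 1.0f / (att0 + att1 * d + att2 * d * d);
}
```

With only the first attenuation term set, as in our code, the intensity is constant everywhere inside the range, which is why the range can safely be far larger than the scene.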

If you look at the properties of a light, you will notice one of them is a Boolean value called "Deferred". By default, this value is false, and you are therefore required to call Commit on the light before it is ready to be used. Setting this value to true will eliminate the need to call Commit, but does so at a performance penalty.

Always make sure your light is enabled and committed before expecting to see any results from it. Well, if you've run your application once more, you'll notice that even though we've got our light defined in the scene now, the triangle is still black. If our light is on, yet we see no light, Direct3D must not be lighting our triangle, and in actuality, it isn't. Well, why not? Lighting calculations can only be done if each face in your geometry has a normal, which currently we don't have.

What exactly is a normal, though? A normal is a perpendicular vector pointing away from the front side of a face; see Figure 1 (vertex normals).

There are three different state variables on the device: the render states, the sampler states, and the texture states. We've only used some of the render states thus far; the latter two are used for texturing. Don't worry; we'll get to texturing soon enough.
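Returning to normals for a moment: a face normal is just the normalized cross product of two triangle edges, which can be sketched as:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

// The face normal is the normalized cross product of two edges;
// the winding order of v0, v1, v2 decides which side counts as "front".
Vec3 FaceNormal(const Vec3& v0, const Vec3& v1, const Vec3& v2)
{
    Vec3 a{v1.x - v0.x, v1.y - v0.y, v1.z - v0.z};   // first edge
    Vec3 b{v2.x - v0.x, v2.y - v0.y, v2.z - v0.z};   // second edge
    Vec3 n{a.y * b.z - a.z * b.y,
           a.z * b.x - a.x * b.z,
           a.x * b.y - a.y * b.x};                    // cross product
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return Vec3{n.x / len, n.y / len, n.z / len};
}
```

For a triangle lying in the xy plane, this produces the unit z axis, exactly the perpendicular vector the lighting calculations need.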

The render state class can modify how Direct3D will do its rasterization of the scene. There are many different attributes that can be changed with this class, including lighting and culling, which we've used in our application already.

Other options you can set with these render states include fill mode (such as wire frame mode) and various fog parameters. We will discuss more of these options in subsequent chapters. As mentioned before, transformations are matrices used to convert geometry from one coordinate space to another. The three major transformations used on a device are the world, view, and projection transforms; however, there are a few other transforms that can be used.

There are also transforms that are used to modify texture stages, as well as a set of additional world matrices.

So where does our drawing actually end up? There are a few things implicit on a device that handle where and how items are drawn. Each device has an implicit swap chain, as well as a render target.

A swap chain is essentially a series of buffers used to control rendering. There is a back buffer, which is where any drawing on this swap chain occurs. When a swap chain that has been created with SwapEffect.Flip is presented, the back buffer data is "flipped" to the front buffer, which is where your graphics card will actually read the data. At the same time, a third buffer becomes the new back buffer, while the previous front buffer moves to the unused third buffer.

See Figure 1 (back buffer chain during flips). A true "flip" operation is implemented by changing the location from which the video card will read its data, swapping the old location with the current back buffer location. For DirectX 9, the term is used generically to indicate any time a back buffer is being updated as the display.
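The buffer rotation described above can be simulated in a few lines (a toy model of the flip; the buffer names are illustrative):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy model of SwapEffect.Flip with a three-buffer chain: a present
// copies no pixels, it just rotates which buffer plays which role.
// chain[0] is always the buffer currently being displayed.
void Present(std::vector<std::string>& chain)
{
    chain.push_back(chain.front());  // old front buffer moves to the end of the chain
    chain.erase(chain.begin());      // first back buffer becomes the new front
}
```

After three presents of a three-buffer chain, every buffer has taken a turn being displayed and the chain is back in its original order.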

In windowed mode, these flip operations are actually a copy of the data, considering our device isn't controlling the entire display, but instead just a small portion of it. The end result is the same in either case, though. In full screen mode, using SwapEffect.Flip, the actual flip occurs; some drivers will also implement SwapEffect.Discard or SwapEffect.Copy with a flip operation in full screen mode.

If you create a swap chain using SwapEffect.Copy or SwapEffect.Flip, it is guaranteed that any present call will not affect the back buffer of the swap chain. The runtime will enforce this by creating extra hidden buffers if necessary. It is recommended that you use SwapEffect.Discard to avoid this potential penalty. This mode allows the driver to determine the most efficient way to present the back buffer. It is worth noting that when using SwapEffect.Discard you will want to ensure that you clear the entire back buffer before starting new drawing operations.

The debug runtime will fill the back buffer with random data so developers can see if they forget to call Clear. The back buffer of a swap chain can also be used as a render target.

Implicitly, when your device is created, there is a swap chain created, and the render target of the device is set to that swap chain's back buffer. A render target is simply a surface that will hold the output of the drawing operations that you perform.

If you create multiple swap chains to handle different rendering operations, you will want to ensure that you update the render target of your device beforehand. We will discuss this more in later chapters. (Figure: Spinning 3D triangle.)

In our next chapter we'll look at how we can find out which options our device can support, and create our devices accordingly. With all of the varying types of graphics cards available on the market today, knowing which features each of them supports is simply not feasible.

You will need to query the device itself to find out the features it supports. While it hasn't quite hit the "mainstream," having multiple monitors ("multimon" for short) can be quite useful, and is rapidly becoming more popular. With higher-end graphics cards supporting multimon natively, you can expect this feature to be used more extensively. The latest cards from ATI, nVidia, and Matrox all have dual-head support, which allows multiple monitors to be attached to a single graphics card.

Direct3D devices are specified per adapter. In this case, you can think of an "adapter" as the actual graphics hardware that connects to the monitor.

You may not know which, or even how many, devices are on a system your game will be running on, so how can you detect them and pick the right one? There is a static class in the Direct3D assemblies called "Manager" that can easily be used to enumerate adapters and device information, as well as retrieving the capabilities of the devices available on the system.

The very first property in the Manager class is the list of adapters in your system. This property can be used in multiple ways. It stores a "count" member that will tell you the total number of adapters on the system. Each adapter can be indexed directly (for example, Manager.Adapters[0]), and the property can also be used to enumerate through each of the adapters on your system.

To demonstrate these features, a simple application will be written that will create a tree view of the current adapters on your system and the supported display modes each adapter can use. Load Visual Studio and follow these steps:

1. Create a new C# Windows Forms project. You can name it anything you want; the sample code provided was named "Enumeration".
2. Add a reference to the Microsoft.DirectX and Microsoft.DirectX.Direct3D assemblies, and add using clauses for these assemblies as well.
3. In design view for the created Windows Form, add a TreeView control to your application. You will find the TreeView control in the toolbox.
4. Select the TreeView control on your form, and hit the F4 key (or right-click and choose Properties). In the properties of your TreeView, set the "Dock" parameter to "Fill". This will cause your TreeView to always fill up the entire window, even if it is resized.

Now, you should add a function that will scan through every adapter on the system, and provide a little information on the modes each one can support.

For example, if you wanted to determine whether or not your device supported a particular format, and didn't want to enumerate all possible adapters and formats, you could use the Manager class to make this determination. The following function can be used:

public static System.Boolean CheckDeviceType(
    System.Int32 adapter,
    Microsoft.DirectX.Direct3D.DeviceType checkType,
    Microsoft.DirectX.Direct3D.Format displayFormat,
    Microsoft.DirectX.Direct3D.Format backBufferFormat,
    System.Boolean windowed,
    out System.Int32 result)

This can be used to determine quickly whether your device supports the type of format you wish to use.

The first parameter is the adapter ordinal you are checking against. The second is the type of device you are checking, but this will invariably be DeviceType.Hardware the majority of the time. Finally, you specify the back buffer and display formats, and whether or not you want to run in windowed or full screen mode.

The method will return true if this is a valid device type, and false otherwise. While the CheckDeviceType method will return appropriate results regardless of whether or not format-conversion support is available, you can also use the CheckDeviceFormatConversion method off of the Manager class to detect this ability directly.

Full-screen applications cannot do color conversion at all. You may also use Format.Unknown in windowed mode. This is quite useful if you know beforehand the only types of formats you will support.

There isn't much of a need to enumerate through every possible permutation of device types and formats if you already know what you need.

There is a Caps structure in the Direct3D runtime that lists every possible capability a device can have. Once a device is created, you can use the "Caps" property of the device to determine the features it supports, but if you want to know the features the device can support before you've created it, then what? Naturally, there is a method in the Manager class that can help here as well. Now to add a little code to our existing application that will get the capabilities of each adapter in our system.

We can't add the list of capabilities to the current tree view because of the sheer number of capabilities included (there are hundreds of different capabilities that can be checked).

The easiest way to show this data will be to use a text box. Go ahead and go back to design view for our Windows Form, and switch the tree view's Dock property from "Fill" to "Left". It's still the entire size of the window though, so cut the width value in half. Now, add a text box to the window, and set its Dock member to "Fill". Also make sure "Multiline" is set to true and "ScrollBars" is set to "Both" for the text box.

Now you will want to add a hook to the application so that when one of the adapters is selected, it will update the text box with the capabilities of that adapter. You should hook the "AfterSelect" event from the tree view (I will assume you already know how to hook these events). In the handler, an adapter's capabilities can be retrieved with a call such as:

Manager.GetDeviceCaps(e.Node.Index, DeviceType.Hardware)

For the root nodes (which happen to be our adapters), after they are selected we call the Manager.

GetDeviceCaps function, passing in the adapter ordinal for the node (which happens to be the same as the index). The ToString member of the returned Caps structure will return an extremely large list of all capabilities of this device. Running the application now will result in something like Figure 2.

Figure 2. Device display mode and capabilities.

In our applications so far, new lists of vertices were allocated every time the scene was rendered, and everything was stored in system memory. With modern graphics cards having an abundance of memory built into the card, you can get vast performance improvements by storing your vertex data in the video memory of the card. Having the vertex data stored in system memory requires copying the data from the system to the video card every time the scene is rendered, and this copying of data can be quite time consuming.

Removing the allocation from every frame could only help as well. A vertex buffer, much like its name suggests, is a memory store for vertices. The flexibility of vertex buffers makes them ideal for sharing transformed geometry in your scene.

So how can the simple triangle application from Chapter 1, "Introducing Direct3D," be modified to use vertex buffers? Creating a vertex buffer is quite simple. There are three constructors that can be used to do so; the two we will use here are:

public VertexBuffer(Device device, System.Int32 sizeOfBufferInBytes,
    Usage usage, VertexFormats vertexFormat, Pool pool)

public VertexBuffer(System.Type typeVertexType, System.Int32 numVerts,
    Device device, Usage usage, VertexFormats vertexFormat, Pool pool)

The device parameter is the device used to create the buffer; the vertex buffer will only be valid on this device. If you are using the constructor that takes the size of the buffer in bytes, the buffer will be able to hold any type of vertex. The typeVertexType parameter instead ties the buffer to a single vertex type; this can be the type of one of the built-in vertex structures in the CustomVertex class, or it can be a user-defined vertex type. This value cannot be null. The numVerts parameter sets the maximum number of vertices the buffer can hold; this value must be greater than zero.

Not all members of the Usage type can be used when creating a vertex buffer. The following values are valid:

o DoNotClip— Used to indicate that this vertex buffer will never require clipping. You must set the clipping render state to false when rendering from a vertex buffer using this flag.
o Dynamic— Used to indicate that this vertex buffer will be locked and updated frequently. If this flag isn't specified, the vertex buffer is static. Static vertex buffers are normally stored in video memory, while dynamic buffers are stored in AGP memory, so choosing this option is useful for drivers to determine where to store the memory.

The term "texture" when describing non-3D applications is usually in reference to the roughness of an object. Textures in a 3D scene are essentially flat 2D bitmaps that can be used to simulate a texture on a primitive. You might want to take a bitmap of grass to make a nice looking hill, or perhaps clouds to make a sky. Direct3D can render up to eight textures at a time for each primitive, but for now, let's just deal with a single texture per primitive.

Since Direct3D uses a generic bitmap as its texture format, any bitmap you load can be used to texture an object. How is the flat 2D bitmap converted into something that is drawn onto a 3D object, though? Each object that will be rendered in a scene requires texture coordinates, which are used to map each texel to a corresponding pixel on screen during rasterization.
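The texel-to-pixel mapping can be sketched with nearest-point sampling (an illustration; real samplers also support filtering and addressing modes):

```cpp
#include <cassert>

// Nearest-point mapping from normalized texture coordinates (u, v in
// [0, 1], with (0, 0) at the top-left texel) to a texel index inside a
// width x height bitmap.
int TexelIndex(float u, float v, int width, int height)
{
    int x = static_cast<int>(u * (width - 1) + 0.5f);  // round to nearest column
    int y = static_cast<int>(v * (height - 1) + 0.5f); // round to nearest row
    return y * width + x;
}
```

Because the coordinates are normalized, the same (u, v) pair works unchanged no matter what resolution the bitmap is.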

The name of our vertex data structure should describe the type of data that it will contain. When your project gets bigger, you may need to define different vertex data types with different member variables.

If you give your vertex data structures meaningful names, you will increase the chance that a structure will be reused and reduce the chance that multiple vertex data structures containing the same member variables will be declared. In this case we create a vertex data structure called VertexXYZColor so that we know it contains three components to store the X, Y, and Z position, followed by a Color component. The static member variable called VertexFormat is used to describe the data that this vertex structure contains.

This particular vertex type only contains the X, Y, Z for position followed by the diffuse color data that will be applied to the vertex. It is worth noting that the order of the FVF bitfield is not really important, but what is very important is the order in which the member variables are declared in the vertex struct. If you are unsure what order these member variables must appear in, just take a look at the d3d9types.h header file. In general, the members of the vertex will be declared in the following order: position, blending weights, normal, diffuse color, specular color, and texture coordinates.
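A minimal sketch of such a struct and its FVF (the flag values are taken from d3d9types.h; the struct name follows the text):

```cpp
#include <cassert>
#include <cstdint>

// FVF flag values as defined in d3d9types.h.
constexpr uint32_t D3DFVF_XYZ     = 0x002;
constexpr uint32_t D3DFVF_DIFFUSE = 0x040;

// The flags form a bitfield, so the order they are OR'd in doesn't
// matter -- but the struct members must follow the canonical order:
// position first, then the diffuse color.
struct VertexXYZColor
{
    float    x, y, z;  // position comes first
    uint32_t color;    // diffuse color follows the position
    static constexpr uint32_t VertexFormat = D3DFVF_XYZ | D3DFVF_DIFFUSE;
};
```

Because it is a bitfield, D3DFVF_XYZ | D3DFVF_DIFFUSE and D3DFVF_DIFFUSE | D3DFVF_XYZ describe the same layout; only the struct declaration order carries meaning.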

We will also declare some helpful constants that describe a few common colors that we will use for our vertex data. The alpha component is implicitly set to fully opaque. Our cube has 8 unique vertices and we will use a different color for each vertex. The values of the index buffer represent the index of the vertex in the vertex buffer that is used to render the geometry. Each 3-tuple represents a single triangle in the mesh. For our cube, there are 6 faces quads and each face requires 2 triangles and each triangle needs 3 vertices to be rendered for a total of 36 indices.
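The index math above can be sketched as follows (the vertex numbering and winding order here are illustrative; a real cube must wind every face consistently or some faces will be culled):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Builds the 36-entry index buffer for a cube with 8 unique vertices:
// 6 faces * 2 triangles * 3 indices = 36.
std::vector<uint16_t> CubeIndices()
{
    const uint16_t quads[6][4] = {
        {0, 1, 2, 3}, {7, 6, 5, 4}, {4, 5, 1, 0},
        {3, 2, 6, 7}, {4, 0, 3, 7}, {1, 5, 6, 2},
    };
    std::vector<uint16_t> indices;
    for (const auto& q : quads) {
        // split each quad into two triangles sharing the q[0]-q[2] edge
        indices.insert(indices.end(), {q[0], q[1], q[2]});
        indices.insert(indices.end(), {q[0], q[2], q[3]});
    }
    return indices;
}
```

Note that the 36 indices refer back to only 8 vertices; without an index buffer, the same cube would need 36 full vertices.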

These first two methods will be used to initialize the render window and the DirectX device, respectively. Each method returns a Boolean value that indicates whether the method was successful or not. The Setup method will be used to set up the vertex buffer and index buffer that will be used to render the cube. This method will also set up the camera matrix and the projection matrix that determine how we will view the scene. For this demo, these values only need to be set once, so this method is a perfect place to do that.

I will discuss the message loop in more detail later. And when everything is finished and we close the application, the Cleanup method is used to release references to our COM objects and cleanup allocated dynamic memory.

This is a declaration for the famous message processor that is associated to a particular window class. I will explain this method in more detail when we look at the definition. The WinMain method is the main entry point for our application.

This is where it all begins. This method is the beginning and the end of our application. The first thing we do when we start our Windows application is create a window that will be used to render our graphics onto. This is done in the InitWindowApp method. If something goes wrong with our window creation method, this function will return false and a message box will be displayed to the user.

If our window creation was successful we will then initialize the DirectX device using the InitDirectX method. Again, if something should fail we will notify the user using a message box. This method will also be used to set up the initial camera view and projection matrix. The Run method will start our main message loop. This function will not return until the user quits the application, at which time the error code will be returned. Before we close the application, we also want to make sure that our DirectX resources are properly released so that the allocated buffer memory is returned to the system heap for use by other applications.

For this we use the Cleanup method. Before we can create a new window for our application, we first have to describe the window we want to create. Since it is possible that our application can have multiple windows each with a different description, we need to create a window class and register it with the Windows system.

This structure has several members that are used to describe the window class. Now that we have the definition of our window class, we use the RegisterClassEx method to register our window class with the windows system.

If there is something wrong with our window class definition, this method will fail, in which case we will display a message box to the user notifying them of the problem. With the class registered, the window itself is then created. If the window is successfully created, the creation function will return a valid handle to the newly created window. If it failed, NULL will be returned, in which case we will display a message box to the user indicating the error.

To ensure the window is visible we will call the ShowWindow method, passing along the show parameter, and then update the window. In our case the window's update region is empty, which means the update function does nothing. The Run method is where our main game loop will be handled. This is where we will query for new messages that are waiting on the message queue and update the main game logic when there are no new messages to receive.

The first thing we do in the Run method is declare a MSG variable and initialize it to 0. Since we want to know how much time has elapsed between frames, we will initialize our timer variables to the current time using the timeGetTime timer function.

This function will return the elapsed system time since Windows was started, in milliseconds. This function will only be available if you link against the winmm.lib library.
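The per-frame timing arithmetic can be sketched like this (timeGetTime itself is Windows-only, so the samples below are explicit values; unsigned subtraction also handles the timer's 32-bit wraparound):

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// timeGetTime() reports milliseconds since Windows started as a 32-bit
// value.  Per-frame elapsed time is the difference of two samples,
// converted to seconds; unsigned subtraction keeps the result correct
// even when the 32-bit counter wraps (about every 49.7 days).
float ElapsedSeconds(uint32_t previousMs, uint32_t currentMs)
{
    return static_cast<float>(currentMs - previousMs) / 1000.0f;
}
```

Each pass through the game loop would call the timer once, compute the delta against the previous sample, and then store the current sample for the next frame.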


