
3D Article


Applications of 3D today:

3D in Games:

3D in games has come a long way in recent years. It all started with a game called 3D Monster Maze, developed by Malcolm Evans in 1981 for the Sinclair ZX81. The game awarded points for each step the player took without getting caught by the Tyrannosaurus Rex that hunted them in the 16-by-16-cell, randomly generated maze.

Then we got more advanced 3D graphics from games such as Spyro and Crash Bandicoot.

This was when we started to realise the true potential of 3D in games for the future. We were all amazed at what was possible: it was only 1996, and we were already getting these amazing graphics. Gaming was evolving at a rate unseen in any other form of media before. The 3D aspect also opened up a lot of new avenues for game designers to create new genres and build upon tried and tested genres, such as the FPS, with masterpieces like GoldenEye 007.

Then we started to get even more updated and improved graphics from games like the Grand Theft Auto series and the Elder Scrolls series.


This was when we were at the height of PS2 and Xbox gaming, but then Microsoft unveiled their new console, the Xbox 360, which would revolutionise the future of console gaming. This is how we now get the hyper-realistic graphics of today in games like Far Cry, Red Dead Redemption and Crysis.


3D in Movies and TV

Computer-generated imagery (CGI) is the application of computer graphics to create or contribute to images in art, printed media, video games, films, television programs, commercials, simulators and simulation generally.

 The visual scenes may be dynamic or static, and may be 2D, though the term "CGI" is most commonly used to refer to 3D computer graphics used for creating scenes or special effects in films and television. The term computer animation refers to dynamic CGI rendered as a movie. The term virtual world refers to agent-based, interactive environments. Computer graphics software is used to make computer-generated imagery for movies, etc.

Recent availability of CGI software and increased computer speeds have allowed individual artists and small companies to produce professional-grade films, games, and fine art from their home computers. This has brought about an internet subculture with its own set of global celebrities, clichés, and technical vocabulary.

Lots of very popular TV shows and movies, such as Doctor Who and Avengers Assemble, have used CGI to create things which could not be filmed practically; the CGI is usually mixed in with live action to make it look more authentic.


CGI can create a real impact, as it can look almost real when done professionally. In the background of the Avengers Assemble poster, for example, the way the city is burning and crumbling looks genuinely real and adds to the story and setting. Other popular examples of shows that use CGI are Primeval and 24.


3D in Animations

With the technology behind creating 3D animations, it's possible to make photorealistic 3D content that can't be distinguished from a real photograph or video. 3D animation also provides a high level of control and flexibility, which makes the artistic freedom endless. Disney and Pixar are probably the most popular 3D animation makers, but there are lots of freelancers and smaller companies that create professional animations; some examples include vr3 and kurodragon. Popular animations include A Bug's Life and Toy Story.

3D in Medicine

One of the first potentially negative uses of 3D printers came with the revelation that one can make rare handcuff keys with a simple 3D printer or laser cutter. The technology is still really cool, but it must be used with great responsibility. There is another use for 3D printers that has a lot of potential to be abused, but also a lot of potential to save lives.

The 3D printer revolution has taken hold of Professor Lee Cronin at Glasgow University. He has many interests, but one of his most ambitious involves 3D printers. In an interview with The Guardian, he talks up 3D printers and their potential for revolutionising the medicine industry. His goal is to create "downloadable chemistry" so that people can print their own medicine at home.

Of course, you can already see the problem here. Prescription drug abuse is a major problem in many countries, especially in the US, and giving people easy access to those drugs is a potential hazard that must be addressed. Cronin dismisses such a scenario and instead focuses on the benefits such an innovation could have on society.


3D in Education

Gaia's 3D Visual Learning Solutions provide an interactive learning experience designed to make teaching easier and learning more fun. All of Gaia's 3D solutions are designed to complement the school curriculum and improve students' ability to learn.


3D in Architecture

3D architectural design is the final stage of design development that architects and interior designers favour in order to visualise their architectural drawings and creative design ideas. The 3D images that professional 3D architects create can be astonishingly photo-realistic. This 3D technology is used mostly (but not exclusively) by architects, design studios and property developers for a variety of projects and plans, including hotel and property redevelopment, home improvements and commercial interior design. However, an increasing number of product designers, engineers, tradesmen and film studios are turning to 3D professionals for specialist help in bringing dynamic solutions to a wide range of situations.


Displaying 3D Polygon Animations

API

An application programming interface (API) is a protocol intended to be used as an interface by software components to communicate with each other. An API may include specifications for routines, data structures, object classes, and variables. An API specification can take many forms, including an international standard such as POSIX, vendor documentation such as the Microsoft Windows API, or the libraries of a programming language, e.g. the Standard Template Library in C++ or the Java API. Gartner predicts that by 2014, 75% of Fortune 500 enterprises will open an API.

Direct3D

Direct3D is a low-level API that you can use to draw triangles, lines, or points per frame, or to start highly parallel operations on the GPU.

· Hides different GPU implementations behind a coherent abstraction. But you still need to know how to draw 3D graphics.

· Is designed to drive a separate graphics-specific processor. Newer GPUs have hundreds or thousands of parallel processors.

· Emphasizes parallel processing. You set up a bunch of rendering or compute state and then start an operation. You don't wait for immediate feedback from the operation. You don't mix CPU and GPU operations.


OpenGL

OpenGL (Open Graphics Library) is a cross-language, multi-platform API for rendering 2D and 3D computer graphics. The API is typically used to interact with a GPU, to achieve hardware-accelerated rendering. OpenGL was developed by Silicon Graphics Inc. in 1992 and is widely used in CAD, virtual reality, scientific visualization, information visualization, flight simulation, and video games. OpenGL is managed by the non-profit technology consortium Khronos Group.
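
To make this concrete, here is a minimal sketch of OpenGL in action (an illustration added for this article, not any vendor's official sample), assuming Python with the third-party PyOpenGL package and its GLUT bindings installed; it asks the GPU to rasterise a single coloured triangle, using legacy immediate mode for brevity.

    # Minimal sketch, assuming PyOpenGL (with GLUT) is installed:
    # pip install PyOpenGL
    from OpenGL.GL import (GL_COLOR_BUFFER_BIT, GL_TRIANGLES, glBegin, glClear,
                           glColor3f, glEnd, glVertex3f)
    from OpenGL.GLUT import (GLUT_DOUBLE, GLUT_RGB, glutCreateWindow,
                             glutDisplayFunc, glutInit, glutInitDisplayMode,
                             glutMainLoop, glutSwapBuffers)

    def display():
        glClear(GL_COLOR_BUFFER_BIT)      # wipe the previous frame
        glBegin(GL_TRIANGLES)             # one hardware-accelerated triangle
        glColor3f(1, 0, 0); glVertex3f(-0.5, -0.5, 0)
        glColor3f(0, 1, 0); glVertex3f( 0.5, -0.5, 0)
        glColor3f(0, 0, 1); glVertex3f( 0.0,  0.5, 0)
        glEnd()
        glutSwapBuffers()                 # present the back buffer

    glutInit()
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB)
    glutCreateWindow(b"OpenGL triangle")  # GLUT expects a bytes title
    glutDisplayFunc(display)
    glutMainLoop()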


Graphics Pipeline

In 3D computer graphics, the terms graphics pipeline or rendering pipeline most commonly refer to the way in which the 3D mathematical information contained within objects and scenes is converted into images and video. The graphics pipeline typically accepts some representation of a three-dimensional primitive as input and produces a 2D raster image as output. OpenGL and Direct3D are two notable 3D graphics standards, both describing very similar graphics pipelines.

Stages of the graphics pipeline

Per-vertex lighting and shading

Geometry in the complete 3D scene is lit according to the defined locations of light sources, reflectance, and other surface properties. Some (mostly older) hardware implementations of the graphics pipeline compute lighting only at the vertices of the polygons being rendered. The lighting values between vertices are then interpolated during rasterization. Per-fragment or per-pixel lighting, as well as other effects, can be done on modern graphics hardware as a post-rasterization process by means of a shader program. Modern graphics hardware also supports per-vertex shading through the use of vertex shaders.
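
As a rough illustration of per-vertex diffuse (Lambertian) lighting, here is a small Python sketch, not tied to any particular package: the brightness at a vertex is the clamped cosine of the angle between the surface normal and the direction towards the light.

    # Sketch of per-vertex Lambertian diffuse lighting.
    import math

    def normalise(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def lambert(vertex, normal, light_pos, light_colour=(1.0, 1.0, 1.0)):
        to_light = normalise(tuple(l - p for l, p in zip(light_pos, vertex)))
        n = normalise(normal)
        # Dot product, clamped at zero so back-facing vertices stay dark.
        intensity = max(0.0, sum(a * b for a, b in zip(n, to_light)))
        return tuple(intensity * c for c in light_colour)

    # Vertex facing straight up, light directly overhead -> full brightness.
    print(lambert((0, 0, 0), (0, 1, 0), (0, 10, 0)))   # (1.0, 1.0, 1.0)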

Clipping

Geometric primitives that now fall completely outside of the viewing frustum will not be visible and are discarded at this stage.
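
A simplified Python sketch of the idea (real clippers also split partially visible primitives, which is omitted here): a triangle whose vertices all lie beyond the same boundary of the canonical view volume can be discarded outright.

    # A primitive entirely outside the -1..1 view cube (in normalised device
    # coordinates) contributes nothing visible and can be culled.
    def outside_view_volume(triangle):
        for axis in range(3):                            # x, y, z
            if all(v[axis] < -1.0 for v in triangle):
                return True
            if all(v[axis] > 1.0 for v in triangle):
                return True
        return False

    print(outside_view_volume([(2.0, 0.1, 0.5),
                               (3.0, 0.2, 0.5),
                               (2.5, 0.9, 0.5)]))        # True: all x > 1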

Projection Transformation


In the case of a Perspective projection, objects which are distant from the camera are made smaller. This is achieved by dividing the X and Y coordinates of each vertex of each primitive by its Z coordinate (which represents its distance from the camera). In an orthographic projection, objects retain their original size regardless of distance from the camera.
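
In Python, the divide described above can be sketched as follows (the focal length and coordinate conventions are illustrative assumptions):

    # Perspective divide: x and y shrink with distance z from the camera.
    def perspective_project(vertex, focal_length=1.0):
        x, y, z = vertex
        return (focal_length * x / z, focal_length * y / z)

    near = perspective_project((1.0, 1.0, 2.0))    # (0.5, 0.5)
    far  = perspective_project((1.0, 1.0, 10.0))   # (0.1, 0.1): same object, smaller on screen
    print(near, far)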

Viewport Transformation

The post-clip vertices are transformed once again to be in window space. In practice, this transform is very simple: applying a scale (multiplying by the width of the window) and a bias (adding to the offset from the screen origin). At this point, the vertices have coordinates which directly relate to pixels in a raster.
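
A small Python sketch of this scale-and-bias step, assuming normalised device coordinates in the range -1 to 1 and a raster whose y axis grows downwards:

    # Viewport transform: scale then bias into pixel coordinates.
    def viewport_transform(x_ndc, y_ndc, width, height):
        px = (x_ndc + 1.0) * 0.5 * width       # -1..1 -> 0..width
        py = (1.0 - y_ndc) * 0.5 * height      # y flipped: rasters grow downwards
        return (px, py)

    print(viewport_transform(0.0, 0.0, 1280, 720))   # window centre: (640.0, 360.0)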

Scan Conversion or Rasterisation

Rasterisation is the process by which the 2D image-space representation of the scene is converted into raster format and the correct resulting pixel values are determined. From this point on, operations are carried out on each single pixel. This stage is rather complex, involving multiple steps often referred to as a group under the name of pixel pipeline.
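
As a rough sketch of one common approach (bounding-box traversal with edge functions; real rasterisers are far more optimised), in Python:

    # A pixel is covered when its centre lies on the interior side of all
    # three triangle edges (either winding direction is accepted).
    def edge(a, b, p):
        return (p[0] - a[0]) * (b[1] - a[1]) - (p[1] - a[1]) * (b[0] - a[0])

    def rasterise(tri):
        xs = [v[0] for v in tri]; ys = [v[1] for v in tri]
        covered = []
        for y in range(int(min(ys)), int(max(ys)) + 1):
            for x in range(int(min(xs)), int(max(xs)) + 1):
                p = (x + 0.5, y + 0.5)           # sample at the pixel centre
                w0 = edge(tri[1], tri[2], p)
                w1 = edge(tri[2], tri[0], p)
                w2 = edge(tri[0], tri[1], p)
                if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
                   (w0 <= 0 and w1 <= 0 and w2 <= 0):
                    covered.append((x, y))
        return covered

    print(len(rasterise([(0, 0), (8, 0), (0, 8)])))   # pixels inside a small triangle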

Texturing, Fragment Shading

At this stage of the pipeline individual fragments (or pre-pixels) are assigned a color based on values interpolated from the vertices during rasterization, from a texture in memory, or from a shader program.
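
A Python sketch of the interpolation part, using barycentric weights derived from the same edge functions used during rasterisation (illustrative, not any engine's actual code):

    # Blend the three vertex colours according to where the fragment sits.
    def interpolate_colour(tri, colours, p):
        def edge(a, b, q):
            return (q[0] - a[0]) * (b[1] - a[1]) - (q[1] - a[1]) * (b[0] - a[0])
        area = edge(tri[0], tri[1], tri[2])
        w0 = edge(tri[1], tri[2], p) / area      # weight of vertex 0
        w1 = edge(tri[2], tri[0], p) / area      # weight of vertex 1
        w2 = edge(tri[0], tri[1], p) / area      # weight of vertex 2
        return tuple(w0 * c0 + w1 * c1 + w2 * c2
                     for c0, c1, c2 in zip(*colours))

    tri = [(0, 0), (10, 0), (0, 10)]
    colours = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]        # red, green, blue corners
    print(interpolate_colour(tri, colours, (3, 3)))    # (0.4, 0.3, 0.3)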

Display

The final colored pixels can then be displayed on a computer monitor or other display.

Geometric Theory

This section covers: vertices; lines; curves; edges; polygons; elements; faces; primitives; meshes (e.g. wireframes); two- and three-dimensional coordinate geometry; and surfaces. Mesh construction covers box modelling, extrusion modelling, and the use of common primitives such as cubes, pyramids, cylinders and spheres.

Cartesian Coordinates:

When working with three-dimensional software, we are creating a 3D illusion of something on a flat 2D screen, so every 3D package, such as 3ds Max, Blender or Maya, uses a Cartesian coordinate system to represent geometry in 3D space. The Cartesian coordinate system is also used throughout mathematics. René Descartes was the French mathematician who developed it in 1637, wanting to merge algebra and Euclidean geometry. His work played an important role in the development of analytic geometry, calculus and cartography.

2D and 3D Cartesian coordinate systems: As usual in maths, a 2D Cartesian coordinate system has two axes, X (horizontal) and Y (vertical); the point where the two meet is called the origin.

A 3D Cartesian coordinate system (as found in 3D software such as 3ds Max) adds a third axis, so geometry is described along X, Y and Z.
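
A trivial Python sketch of coordinate geometry in this system: points are (x, y, z) triples, and distances follow from Pythagoras' theorem.

    # Euclidean distance between two 3D points in Cartesian coordinates.
    import math

    def distance(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    origin = (0.0, 0.0, 0.0)
    point  = (1.0, 2.0, 2.0)
    print(distance(origin, point))   # 3.0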

Mesh Construction

Although it is possible to construct a mesh by manually specifying vertices and faces, it is much more common to build meshes using a variety of tools. A wide variety of 3D graphics software packages are available for use in constructing polygon meshes.

 One of the more popular methods of constructing meshes is box modeling, which uses two simple tools:


The subdivide tool splits faces and edges into smaller pieces by adding new vertices. For example, a square would be subdivided by adding one vertex in the center and one on each edge, creating four smaller squares.
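
A Python sketch of that subdivide example (the vertex and face layout here is illustrative, not any particular package's data structure):

    # One square face becomes four: a centre vertex plus one midpoint per edge.
    def subdivide_square(v0, v1, v2, v3):            # corners in winding order
        mid = lambda a, b: tuple((x + y) / 2 for x, y in zip(a, b))
        m01, m12, m23, m30 = mid(v0, v1), mid(v1, v2), mid(v2, v3), mid(v3, v0)
        centre = mid(m01, m23)
        return [(v0, m01, centre, m30), (m01, v1, m12, centre),
                (centre, m12, v2, m23), (m30, centre, m23, v3)]

    for face in subdivide_square((0, 0), (2, 0), (2, 2), (0, 2)):
        print(face)                                   # four 1x1 squares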

The extrude tool is applied to a face or a group of faces. It creates a new face of the same size and shape which is connected to each of the existing edges by a face. Thus, performing the extrude operation on a square face would create a cube connected to the surface at the location of the face.
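
And a Python sketch of extruding a single quad face along its normal (again an illustration; real modelling tools also merge and deduplicate vertices):

    # The new face is a copy of the original offset by `depth` along the
    # normal, joined back to the old edges by four side faces.
    def extrude_quad(face, normal, depth):
        offset = tuple(depth * n for n in normal)
        new_face = [tuple(v + o for v, o in zip(vert, offset)) for vert in face]
        sides = [(face[i], face[(i + 1) % 4], new_face[(i + 1) % 4], new_face[i])
                 for i in range(4)]
        return new_face, sides

    square = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
    top, sides = extrude_quad(square, (0, 0, 1), 1.0)  # a square face becomes a cube
    print(top)                                         # the lifted copy of the face
    print(len(sides))                                  # 4 connecting side faces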

A second common modeling method is sometimes referred to as inflation modeling or extrusion modeling. In this method, the user creates a 2D shape which traces the outline of an object from a photograph or a drawing. The user then uses a second image of the subject from a different angle and extrudes the 2D shape into 3D, again following the shape's outline. This method is especially common for creating faces and heads. In general, the artist will model half of the head and then duplicate the vertices, invert their location relative to some plane, and connect the two pieces together. This ensures that the model will be symmetrical.

Another common method of creating a polygonal mesh is by connecting together various primitives, which are predefined polygonal meshes created by the modeling environment. Common primitives include:

Cubes

Pyramids

Cylinders

2D primitives, such as squares, triangles, and disks

Specialized or esoteric primitives, such as the Utah Teapot or Suzanne, Blender's monkey mascot.

Spheres - Spheres are commonly represented in one of two ways:

Icospheres are icosahedrons which possess a sufficient number of triangles to resemble a sphere.

UV spheres are composed of quads and resemble the grid seen on some globes: quads are larger near the "equator" of the sphere and smaller near the "poles", eventually terminating in a single vertex (a small generation sketch follows after this list).
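
Here is a small Python sketch of UV-sphere vertex generation as just described, with latitude rings, longitude segments and one vertex at each pole (the parameter names are illustrative):

    # Rings of latitude and longitude, plus a single vertex per pole.
    import math

    def uv_sphere_vertices(radius, rings, segments):
        verts = [(0.0, radius, 0.0)]                   # north pole
        for r in range(1, rings):                      # latitude rings
            theta = math.pi * r / rings
            for s in range(segments):                  # longitude steps
                phi = 2.0 * math.pi * s / segments
                verts.append((radius * math.sin(theta) * math.cos(phi),
                              radius * math.cos(theta),
                              radius * math.sin(theta) * math.sin(phi)))
        verts.append((0.0, -radius, 0.0))              # south pole
        return verts

    print(len(uv_sphere_vertices(1.0, 8, 16)))  # 2 poles + 7 rings x 16 = 114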

Finally, some specialized methods of constructing high- or low-detail meshes exist. Sketch-based modelling is a user-friendly interface for constructing low-detail models quickly, while 3D scanners can be used to create high-detail meshes based on existing real-world objects in an almost automatic way. These devices are very expensive and are generally only used by researchers and industry professionals, but they can generate high-accuracy, sub-millimetre digital representations.

3D Development Software

Polygonal modeling - Points in 3D space, called vertices, are connected by line segments to form a polygonal mesh. The vast majority of 3D models today are built as textured polygonal models, because they are flexible and because computers can render them quickly. However, polygons are planar and can only approximate curved surfaces using many polygons.

Curve modeling - Surfaces are defined by curves, which are influenced by weighted control points. The curve follows (but does not necessarily interpolate) the points. Increasing the weight of a point pulls the curve closer to that point. Curve types include non-uniform rational B-splines (NURBS), splines, patches and geometric primitives.
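
A Python sketch of the weight behaviour described above, using a quadratic rational Bezier curve (a simple relative of NURBS); raising the middle weight pulls the curve towards the middle control point:

    # Weighted (rational) Bezier evaluation with three control points.
    def rational_bezier(control_points, weights, t):
        basis = [(1 - t) ** 2, 2 * t * (1 - t), t ** 2]   # quadratic Bernstein basis
        denom = sum(b * w for b, w in zip(basis, weights))
        return tuple(sum(b * w * p[i]
                         for b, w, p in zip(basis, weights, control_points)) / denom
                     for i in range(len(control_points[0])))

    pts = [(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)]
    print(rational_bezier(pts, (1.0, 1.0, 1.0), 0.5))   # (1.0, 1.0)
    print(rational_bezier(pts, (1.0, 5.0, 1.0), 0.5))   # pulled towards (1.0, 2.0)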


Digital sculpting - Still a fairly new method of modeling, 3D sculpting has become very popular in the few short years it has been around. There are currently three types of digital sculpting: displacement (the most widely used at the moment), volumetric and dynamic tessellation. Displacement uses a dense model (often generated by subdivision surfaces of a polygon control mesh) and stores the new locations of the vertex positions in a 32-bit image map. Volumetric sculpting, which is loosely based on voxels, has similar capabilities to displacement but does not suffer from polygon stretching when there are not enough polygons in a region to achieve a deformation. Dynamic tessellation is similar to voxel sculpting but divides the surface using triangulation to maintain a smooth surface and allow finer details. These methods allow for very artistic exploration, as a new topology is created over the model once its form (and possibly details) have been sculpted. The new mesh will usually have the original high-resolution mesh information transferred into displacement data, or into normal-map data if destined for a game engine.
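
A Python sketch of the displacement idea: each vertex is pushed along its normal by an offset sampled from a map (the per-vertex offsets below are hard-coded stand-ins for real map samples):

    # Push each vertex along its normal by a sampled displacement value.
    def displace(vertices, normals, displacement_samples):
        return [tuple(v + d * n for v, n in zip(vert, norm))
                for vert, norm, d in zip(vertices, normals, displacement_samples)]

    flat_verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
    up_normals = [(0.0, 1.0, 0.0)] * 3
    offsets    = [0.0, 0.3, 0.1]        # stand-in for per-vertex map samples
    print(displace(flat_verts, up_normals, offsets))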


http://en.wikipedia.org/wiki/3D_modeling


Autodesk 3ds Max, formerly 3D Studio Max, is 3D computer graphics software for making 3D animations, models, and images. It was developed and produced by Autodesk Media and Entertainment. It has modeling capabilities, a flexible plugin architecture and can be used on the Microsoft Windows platform. It is frequently used by video game developers, TV commercial studios and architectural visualization studios. It is also used for movie effects and movie pre-visualization. In addition to its modeling and animation tools, the latest version of 3ds Max also features shaders (such as ambient occlusion and subsurface scattering), dynamic simulation, particle systems, radiosity, normal map creation and rendering, global illumination, a customizable user interface, and its own scripting language.


http://en.wikipedia.org/wiki/Autodesk_3ds_Max

Autodesk Maya, commonly shortened to Maya, is 3D computer graphics software that runs on Microsoft Windows, Mac OS and Linux, originally developed by Alias Systems Corporation (formerly Alias|Wavefront) and currently owned and developed by Autodesk, Inc. It is used to create interactive 3D applications, including video games, animated films, TV series, and visual effects. The product is named after the Sanskrit word maya (माया, māyā), the Hindu concept of illusion.

http://en.wikipedia.org/wiki/Autodesk_Maya

CINEMA 4D is a 3D modeling, animation and rendering application developed by MAXON Computer GmbH of Friedrichsdorf, Germany. It is capable of procedural and polygonal/subdivision modeling, animating, lighting, texturing and rendering, and has the common features found in 3D modelling applications.


http://en.wikipedia.org/wiki/Cinema_4D

SketchUp is a 3D modeling program optimized for a broad range of applications such as architectural, civil, mechanical, film and video game design, and is available in free as well as 'professional' versions. The program highlights its ease of use, and an online repository of model assemblies (e.g., windows, doors, automobiles, entourage, etc.) known as 3D Warehouse enables designers to locate, download, use and contribute free models. The program includes drawing layout functionality, allows surface rendering in variable "styles," accommodates third-party "plug-in" programs enabling other capabilities (e.g., near photo-realistic rendering) and enables placement of its models within Google Earth. In early 2012, Google, then the owner of SketchUp, announced that it would sell the program to Trimble, a company best known for GPS location services.


http://en.wikipedia.org/wiki/SketchUp


Polygon Count and File Size

The two common measurements of an object's 'cost' or file size are the polygon count and the vertex count. For example, a game character may range anywhere from 200-300 polygons to 40,000+ polygons. A high-end third-person console or PC game may use many vertices or polygons per character, while an iOS tower defence game might use very few per character.

Polygons vs. Triangles

When a game artist talks about the poly count of a model, they really mean the triangle count. Games almost always use triangles rather than polygons, because most modern graphics hardware is built to accelerate the rendering of triangles.

The polygon count that's reported in a modelling app is always misleading, because a model's triangle count is higher. It's usually best therefore to switch the polygon counter to a triangle counter in your modelling app, so you're using the same counting method everyone else is using.

Polygons however do have a useful purpose in game development. A model made of mostly four-sided polygons (quads) will work well with edge-loop selection & transform methods that speed up modelling, make it easier to judge the "flow" of a model, and make it easier to weight a skinned model to its bones. Artists usually preserve these polygons in their models as long as possible. When a model is exported to a game engine, the polygons are all converted into triangles automatically. However different tools will create different triangle layouts within those polygons. A quad can end up either as a "ridge" or as a "valley" depending on how it's triangulated. Artists need to carefully examine a new model in the game engine to see if the triangle edges are turned the way they wish. If not, specific polygons can then be triangulated manually.
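
A Python sketch of the two triangulations a single quad can receive; on a non-planar quad, the choice of diagonal produces the "ridge" or "valley" mentioned above:

    # A quad splits into two triangles along either diagonal.
    def triangulate_quad(quad, diagonal=0):
        v0, v1, v2, v3 = quad
        if diagonal == 0:
            return [(v0, v1, v2), (v0, v2, v3)]   # split along v0-v2
        return [(v0, v1, v3), (v1, v2, v3)]       # split along v1-v3

    quad = [(0, 0, 0), (1, 0, 0.2), (1, 1, 0), (0, 1, 0.2)]  # non-planar quad
    print(triangulate_quad(quad, 0))   # one diagonal -> "ridge"
    print(triangulate_quad(quad, 1))   # the other    -> "valley"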

Triangle Count vs. Vertex Count

Vertex count is ultimately more important for performance and memory than triangle count, but for historical reasons artists more commonly use triangle count as a performance measurement. On the most basic level, the triangle count and the vertex count can be similar if all the triangles are connected to one another: 1 triangle uses 3 vertices, 2 triangles use 4 vertices, 3 triangles use 5 vertices, 4 triangles use 6 vertices, and so on. However, seams in UVs, changes to shading/smoothing groups, and material changes from triangle to triangle are all treated as a physical break in the model's surface when the model is rendered by the game. The vertices must be duplicated at these breaks so the model can be sent in renderable chunks to the graphics card.
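
The counting rule above can be sketched in a couple of lines of Python (valid only for a single connected strip of triangles with no UV seams, smoothing splits or material changes):

    # n connected triangles in a strip or fan share edges, so they need
    # only n + 2 distinct vertices.
    def strip_vertex_count(triangle_count):
        return triangle_count + 2

    for tris in (1, 2, 3, 4):
        print(tris, "triangles ->", strip_vertex_count(tris), "vertices")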

Overuse of smoothing groups, over-splitting of UVs, too many material assignments (and too much misalignment of these three properties) all lead to a much larger vertex count. This can stress the transform stages for the model, slowing performance. It can also increase the memory cost for the mesh, because there are more vertices to send and store.

http://wiki.polycount.net/PolygonCount

Rendering Time

Rendering is the final process of creating the actual 2D image or animation from the prepared scene. This can be compared to taking a photo or filming the scene after the setup is finished in real life. Several different, and often specialised, rendering methods have been developed. These range from the distinctly non-realistic wireframe rendering through polygon-based rendering, to more advanced techniques such as: scanline rendering, ray tracing, or radiosity. Rendering may take from fractions of a second to days for a single image/frame. In general, different methods are better suited for either photo-realistic rendering, or real-time rendering.

Real-time

Rendering for interactive media, such as games and simulations, is calculated and displayed in real time, at rates of approximately 20 to 120 frames per second. In real-time rendering, the goal is to show as much information as the eye can process in a fraction of a second, i.e. one frame. The primary goal is to achieve the highest possible degree of photorealism at an acceptable minimum rendering speed (usually 24 frames per second, as that is the minimum the human eye needs to successfully create the illusion of movement). In fact, exploitations can be applied to the way the eye 'perceives' the world, and as a result the final image presented is not necessarily that of the real world, but one close enough for the human eye to tolerate. Rendering software may simulate such visual effects as lens flares, depth of field or motion blur; these are attempts to simulate visual phenomena resulting from the optical characteristics of cameras and of the human eye. Such effects can lend an element of realism to a scene, even if the effect is merely a simulated artefact of a camera. This is the basic method employed in games, interactive worlds and VRML. The rapid increase in computer processing power has allowed a progressively higher degree of realism even for real-time rendering, including techniques such as HDR rendering. Real-time rendering is often polygonal and aided by the computer's GPU.

Non Real-time

Animations for non-interactive media, such as feature films and video, are rendered much more slowly. Non-real-time rendering enables the leveraging of limited processing power in order to obtain higher image quality. Rendering times for individual frames may vary from a few seconds to several days for complex scenes. Rendered frames are stored on a hard disk and can then be transferred to other media such as motion picture film or optical disk. These frames are then displayed sequentially at high frame rates, typically 24, 25, or 30 frames per second, to achieve the illusion of movement.

When the goal is photo-realism, techniques such as ray tracing or radiosity are employed. This is the basic method employed in digital media and artistic works. Techniques have been developed for the purpose of simulating other naturally-occurring effects, such as the interaction of light with various forms of matter. Examples of such techniques include particle systems (which can simulate rain, smoke, or fire), volumetric sampling (to simulate fog, dust and other spatial atmospheric effects), caustics (to simulate light focusing by uneven light-refracting surfaces, such as the light ripples seen on the bottom of a swimming pool), and subsurface scattering (to simulate light reflecting inside the volumes of solid objects such as human skin).

The rendering process is computationally expensive, given the complex variety of physical processes being simulated. Computer processing power has increased rapidly over the years, allowing for a progressively higher degree of realistic rendering. Film studios that produce computer-generated animations typically make use of a render farm to generate images in a timely manner. However, falling hardware costs mean that it is entirely possible to create small amounts of 3D animation on a home computer system. The output of the renderer is often used as only one small part of a completed motion-picture scene. Many layers of material may be rendered separately and integrated into the final shot using compositing software.

Two components of the rendering model deserve particular mention:

Reflection/Scattering - how light interacts with the surface at a given point.

Shading - how material properties vary across the surface.

http://en.wikipedia.org/wiki/3D_rendering