Molehill Introduction (Translation)


Introduction to the MoleHill 3D API

Actionscript 3 Experiments and Flash Player love ;)

Digging more into MoleHill ---- Thibault Imbert

A few months ago in Los Angeles, we showed the MoleHill APIs running on mobile and desktop. For more information, check out the "Molehill" page on Adobe Labs. Here I want to give you more details about Molehill, with some technical specifics from an ActionScript developer's point of view.

So, let's get started.

What is MoleHill?

"MoleHill" is the code name for a set of GPU-accelerated 3D APIs that will be exposed to ActionScript 3.0 in Adobe Flash Player and Adobe AIR, bringing advanced 3D rendering to the Flash Player. MoleHill relies on DirectX 9 on Windows and on OpenGL 1.3 on MacOS and Linux; on mobile platforms such as Android, it will rely on OpenGL ES 2. Technically, the MoleHill APIs are truly shader-based, and they expose features that 3D developers have been waiting for in Flash for a long time, such as programmable vertex and fragment shaders, enabling vertex skinning on the GPU for bone animation, as well as native z-buffering, a stencil color buffer, cube textures and more.

In terms of performance, today's Adobe Flash Player 10.1 renders thousands of non-z-buffered triangles at close to 30 Hz; with the new 3D APIs, developers can expect future versions of Flash Player to render hundreds of thousands of z-buffered triangles at around 60 Hz, full screen in HD. MoleHill will make it possible to deliver high-end 3D experiences to every computer and device connected to the Internet. Watch this demo video to get a feel for what the Molehill platform enables.

How MoleHill works

The existing 2.5D APIs available in Flash Player 10 will not be deprecated; the Molehill APIs will offer a solution for projects that require full GPU acceleration for advanced 3D rendering. Of course, which API you choose depends on your project. On Adobe Labs, we also introduced the concept of "Stage Video", available as a beta for Flash Player 10.2.

Stage Video relies on the same design: full GPU acceleration, here for video decoding and presentation. With this new rendering model, Adobe Flash Player no longer composites video frames or the 3D buffer inside the display list, but behind it, directly through the GPU. This allows Flash Player to present frames already available in video memory straight to the screen, without the costly read-back needed to retrieve frames from the GPU and push them through the CPU display list.

As a result, because the 3D content lives behind the display list, Context3D and Stage3D are not display objects. So remember that you cannot use them the way you use other display objects: you cannot apply rotation, blend modes, filters and many other effects to them. The following figure illustrates the concept:

Of course, as you would expect, 2D content can sit perfectly on top of the 3D content, but the opposite is not possible. However, we will provide an API that lets you draw the 3D content into a BitmapData object. From an ActionScript API standpoint, as a developer you will mainly work with two classes: Stage3D and Context3D. You request a 3D context from Adobe Flash Player, and a Context3D object is created for you. Now you may be wondering: what happens if the graphics driver is incompatible? Will I silently get a black screen?

Flash Player will still give you a Context3D object, but one backed internally by software, so you get the same Molehill features and the same API, only running on the CPU. To achieve this, we rely on a very fast CPU rasterizer from TransGaming called "SwiftShader". The great news is that even when running in software, SwiftShader is about 10 times faster than the vector rasterizer available today in Flash Player 10.1, so you can expect remarkable performance even when running in software.
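As a minimal sketch of how this looks in practice (using the class and event names that shipped in the final Stage3D builds, which may differ slightly from the Molehill beta), you request a context from one of the stage's Stage3D instances and can check driverInfo once it is created to see which backend you ended up on:

var stage3D : Stage3D = stage.stage3Ds[ 0 ];
stage3D.addEventListener( Event.CONTEXT3D_CREATE, onContextCreated );
stage3D.requestContext3D(); // "auto" mode: GPU when available, SwiftShader otherwise

function onContextCreated( e : Event ) : void
{
    var context3D : Context3D = stage3D.context3D;
    context3D.configureBackBuffer( 800, 600, 2, true ); // width, height, antialias, depth/stencil
    trace( context3D.driverInfo ); // reports DirectX, OpenGL or software rendering
}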

The beauty of the MoleHill APIs is that you never have to worry about what happens internally. Am I running on DirectX, OpenGL or SwiftShader? Should I use a different API when running on OpenGL on MacOS and Linux, or on OpenGL ES 2 on mobile? No. For a developer, everything is transparent: you program with a single API, and Adobe Flash Player handles everything internally and does the conversion work behind the scenes.

It is important to remember that the Molehill APIs do not use a fixed function pipeline but a fully programmable one, which means you draw everything on screen through vertex and fragment shaders. You then upload your shaders to the graphics card as pure low-level AGAL ("Adobe Graphics Assembly Language") bytecode in a ByteArray. As a developer, you have two options: write your shaders at the assembly level, which requires a solid understanding of how shaders work, or use a higher-level language such as Pixel Bender 3D, which offers a more natural way to write your shaders and compiles them to the appropriate AGAL bytecode.

The way it works

In order to represent your triangles, you will need to work with VertexBuffer3D and IndexBuffer3D objects, passing vertex coordinates and indices, and once your vertex and fragment shaders are ready, you can upload them to the graphics card through a Program3D object. Basically, a vertex shader deals with the position of the vertices used to draw your triangles, whereas a fragment shader handles the appearance of the pixels used to texture your triangles. The following figure illustrates the difference between the two types of shaders:
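To make that flow concrete, here is a hedged sketch of the typical upload-and-draw sequence; names such as indices, vertexShaderBytes and fragmentShaderBytes are illustrative, and the method signatures are those of the final Stage3D API:

// three indices describing a single triangle
var indices : Vector.<uint> = Vector.<uint>([ 0, 1, 2 ]);
var indexBuffer : IndexBuffer3D = context3D.createIndexBuffer( 3 );
indexBuffer.uploadFromVector( indices, 0, 3 );

// upload the assembled vertex and fragment shaders as one program
var program : Program3D = context3D.createProgram();
program.upload( vertexShaderBytes, fragmentShaderBytes );
context3D.setProgram( program );

// a typical frame: clear, draw the indexed triangles, present
context3D.clear( 0, 0, 0, 1 );
context3D.drawTriangles( indexBuffer );
context3D.present();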

As stated before, Molehill does not use a fixed function pipeline, hence developers will be free to create their own custom shaders and totally control the rendering pipeline. So let's focus a little bit on the concept of vertex and fragment shaders with Molehill.

Digging into vertex and fragment shaders

To illustrate the idea, here is a simple example of low-level shading assembly you could write to display your triangles and work at the pixel level with Molehill. Let's get ready, because we are going to go very low-level and code shaders to the metal. Of course, if you hate this, do not worry: you will be able to use a higher-level shading language like Pixel Bender 3D.

Note: To compile the assembly String to AGAL bytecode, download the AGALMiniAssembler here.

To create the shader program we will upload to the graphics card, we first need a vertex shader (Context3DProgramType.VERTEX), which should at least output a clip-space position coordinate. To perform this, we multiply va0 (each vertex position attribute) by vc0 (vertex constant 0), our projection matrix stored at that index, and output the result through the op register (standing for "output position"):

// create a vertex program - from assembly
var vertexShaderAssembler : AGALMiniAssembler = new AGALMiniAssembler();

vertexShaderAssembler.assemble( Context3DProgramType.VERTEX,
    "m44 op, va0, vc0" // 4x4 matrix transform from stream 0 (vertex position) to output clipspace
);

Now you may wonder: what is this m44 thing? Where does it come from?

It is actually a 4x4 matrix transformation: it projects our vertices according to the projection matrix we defined. We could have written our shader like the following, manually calculating the dot product for each attribute, but the m44 instruction (performing a 4x4 matrix transform on all attributes in one line) is way shorter:

// create a vertex program - from assembly
var vertexShaderAssembler : AGALMiniAssembler = new AGALMiniAssembler();

vertexShaderAssembler.assemble( Context3DProgramType.VERTEX,
    "dp4 op.x, va0, vc0 \n" + // 4x4 matrix transform from stream 0 (vertex position) to output clipspace
    "dp4 op.y, va0, vc1 \n" +
    "dp4 op.z, va0, vc2 \n" +
    "dp4 op.w, va0, vc3"
);

Remember that vc0 (vertex constant 0) is actually just our projection matrix stored at this index, passed earlier as a constant through the setProgramConstantsFromMatrix API on the Context3D object:

context3D.setProgramConstantsFromMatrix( Context3DProgramType.VERTEX, 0, modelMatrix, true );
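For context, here is a hedged sketch of how such a modelMatrix might be built with flash.geom.Matrix3D before being uploaded; the rotation is purely illustrative. Note that a 4x4 matrix fills four consecutive constant registers, vc0 through vc3, which is why the dp4 variant above reads those registers:

var modelMatrix : Matrix3D = new Matrix3D();
modelMatrix.appendRotation( 45, Vector3D.Z_AXIS ); // illustrative transform

// the matrix lands in constant registers vc0..vc3 (transposed, per the last argument)
context3D.setProgramConstantsFromMatrix( Context3DProgramType.VERTEX, 0, modelMatrix, true );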

Just like our matrix constant, va0 (vertex attribute 0), used for the position, needs to be defined, and we do this through the setVertexBufferAt API on the Context3D object:

context3D.setVertexBufferAt( 0, vertexbuffer, 0, Context3DVertexBufferFormat.FLOAT_3 );
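Since the next snippet also reads a vertex color from va1, here is a hedged sketch of how the underlying vertex buffer might interleave position and color data and map both attributes; the triangle data is purely illustrative:

// hypothetical interleaved data: x, y, z, r, g, b for each of 3 vertices
var vertices : Vector.<Number> = Vector.<Number>([
    -0.5, -0.5, 0,   1, 0, 0,  // bottom-left, red
     0.5, -0.5, 0,   0, 1, 0,  // bottom-right, green
     0.0,  0.5, 0,   0, 0, 1   // top, blue
]);

var vertexbuffer : VertexBuffer3D = context3D.createVertexBuffer( 3, 6 ); // 3 vertices, 6 floats each
vertexbuffer.uploadFromVector( vertices, 0, 3 );

// stream 0 (va0): position, 3 floats starting at offset 0
context3D.setVertexBufferAt( 0, vertexbuffer, 0, Context3DVertexBufferFormat.FLOAT_3 );
// stream 1 (va1): color, 3 floats starting at offset 3
context3D.setVertexBufferAt( 1, vertexbuffer, 3, Context3DVertexBufferFormat.FLOAT_3 );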

In our example, the vertex shader also passes the vertex color (va1) to the fragment shader through v0, using the mov instruction, so that the fragment shader can actually paint the triangle pixels. To do this, we could write the following:

// create a vertex program - from assembly
var vertexShaderAssembler : AGALMiniAssembler = new AGALMiniAssembler();

vertexShaderAssembler.assemble( Context3DProgramType.VERTEX,
    "m44 op, va0, vc0 \n" + // 4x4 matrix transform from stream 0 (vertex position) to output clipspace
    "mov v0, va1"           // copy the vertex color from stream 1 to v0 for the fragment shader
);
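As a hedged sketch (not the article's own listing), the matching fragment program would simply read the interpolated color from v0 and write it to the output color register oc:

// create a fragment program - from assembly
var fragmentShaderAssembler : AGALMiniAssembler = new AGALMiniAssembler();

fragmentShaderAssembler.assemble( Context3DProgramType.FRAGMENT,
    "mov oc, v0" // output the interpolated vertex color as the pixel color
);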
