Shadow Mapping - OpenGL ES SDK for Android - GitHub Pages
Demonstration of shadow mapping functionality using OpenGL ES 3.0.
Introduction
This tutorial assumes that you already have basic OpenGL ES knowledge, and have read and understood the Normal Mapping, Lighting and Texture Cube tutorials.
Overview
Shadow Mapping. Yellow cube represents the spot light source.
The application displays two cubes on a plane which are lit by directional and spot lights. The location and direction of the spot light source (represented by a small yellow cube flying above the scene) in 3D space are regularly updated. The cube and plane models are shadow receivers, but only the cubes are shadow casters. The application uses shadow mapping for rendering and displaying shadows.
Render geometry
In the application we are rendering a horizontally located plane, on top of which we lay two cubes. There is also a single cube flying above the scene which represents the spot light source. Let us now focus on generating the geometry that will be rendered.
In the application we are using two program objects: one responsible for rendering the scene, which consists of a plane and two cubes with all the lighting and shadows applied, and a second one, used for rendering a single cube (the yellow one flying above the scene) that represents the spot light source. We will now focus on the first program object, as rendering the single cube on screen should already be a well-known technique for the reader (or will be after reading this tutorial).
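The creation of those two program objects is not shown in this section, so the following is only a minimal sketch of how they might be compiled and linked. The attachShader() helper, the lightRepresentationProgram name and the shader source variables are hypothetical and are not taken from the SDK sample; only cubesAndPlaneProgram.programId appears in the listings below.
/* Hypothetical helper: compile a shader from source and attach it to a program object. */
static void attachShader(GLuint programId, GLenum type, const char* source)
{
    GLuint shaderId = GL_CHECK(glCreateShader(type));
    GL_CHECK(glShaderSource(shaderId, 1, &source, NULL));
    GL_CHECK(glCompileShader(shaderId));
    GL_CHECK(glAttachShader(programId, shaderId));
}
/* Program object used for the scene (plane and cubes, lighting and shadows). */
cubesAndPlaneProgram.programId = GL_CHECK(glCreateProgram());
attachShader(cubesAndPlaneProgram.programId, GL_VERTEX_SHADER,   sceneVertexShaderSource);
attachShader(cubesAndPlaneProgram.programId, GL_FRAGMENT_SHADER, sceneFragmentShaderSource);
GL_CHECK(glLinkProgram(cubesAndPlaneProgram.programId));
/* Program object used for the small yellow cube representing the spot light. */
lightRepresentationProgram.programId = GL_CHECK(glCreateProgram());
attachShader(lightRepresentationProgram.programId, GL_VERTEX_SHADER,   lightCubeVertexShaderSource);
attachShader(lightRepresentationProgram.programId, GL_FRAGMENT_SHADER, lightCubeFragmentShaderSource);
GL_CHECK(glLinkProgram(lightRepresentationProgram.programId));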
Vertex coordinates of the geometry that will be rendered.
First of all, we need to have the coordinates of the vertices that make up a cube or plane shape. Please note that there will also be lighting applied, which means that we will need normals as well.
Geometry data will be stored and then used by objects that are generated with the following commands:
/* Generate buffer objects. */
GL_CHECK(glGenBuffers(6, bufferObjectIds));
/* Store buffer object names in global variables.
 * The variables have more friendly names, so that using them is easier. */
cubeCoordinatesBufferObjectId                = bufferObjectIds[0];
lightRepresentationCoordinatesBufferObjectId = bufferObjectIds[1];
cubeNormalsBufferObjectId                    = bufferObjectIds[2];
planeCoordinatesBufferObjectId               = bufferObjectIds[3];
planeNormalsBufferObjectId                   = bufferObjectIds[4];
uniformBlockDataBufferObjectId               = bufferObjectIds[5];
/* Generate vertex array objects. */
GL_CHECK(glGenVertexArrays(3, vertexArrayObjectsNames));
/* Store vertex array object names in global variables.
 * The variables have more friendly names, so that using them is easier. */
cubesVertexArrayObjectId                          = vertexArrayObjectsNames[0];
lightRepresentationCoordinatesVertexArrayObjectId = vertexArrayObjectsNames[1];
planeVertexArrayObjectId                          = vertexArrayObjectsNames[2];
There is one extra buffer object generated, whose ID is stored in the uniformBlockDataBufferObjectId variable. This one is not needed at this step, so you can ignore it.
Geometry data is then generated and copied to the specific buffer objects. For more details on how the coordinates of the vertices are calculated, please refer to the implementation of those functions.
Generate geometry data.
void createDataForObjectsToBeDrawn()
{
    /* Get triangular representation of the scene cube. Store the data in the cubeCoordinates array. */
    CubeModel::getTriangleRepresentation(&cube.coordinates,
                                         &cube.numberOfElementsInCoordinatesArray,
                                         &cube.numberOfPoints,
                                         cube.scalingFactor);
    /* Calculate normal vectors for the scene cube created above. */
    CubeModel::getNormals(&cube.normals,
                          &cube.numberOfElementsInNormalsArray);
    /* Get triangular representation of a square to draw plane in XZ space. Store the data in the planeCoordinates array. */
    PlaneModel::getTriangleRepresentation(&plane.coordinates,
                                          &plane.numberOfElementsInCoordinatesArray,
                                          &plane.numberOfPoints,
                                          plane.scalingFactor);
    /* Calculate normal vectors for the plane. Store the data in the planeNormals array. */
    PlaneModel::getNormals(&plane.normals,
                           &plane.numberOfElementsInNormalsArray);
    /* Get triangular representation of the light cube. Store the data in the lightRepresentationCoordinates array. */
    CubeModel::getTriangleRepresentation(&lightRepresentation.coordinates,
                                         &lightRepresentation.numberOfElementsInCoordinatesArray,
                                         &lightRepresentation.numberOfPoints,
                                         lightRepresentation.scalingFactor);
    ASSERT(cube.coordinates                != NULL, "Could not retrieve cube coordinates.");
    ASSERT(cube.normals                    != NULL, "Could not retrieve cube normals.");
    ASSERT(lightRepresentation.coordinates != NULL, "Could not retrieve cube coordinates.");
    ASSERT(plane.coordinates               != NULL, "Could not retrieve plane coordinates.");
    ASSERT(plane.normals                   != NULL, "Could not retrieve plane normals.");
}
Fill buffer objects with data.
/* Buffer holding coordinates of triangles which make up the scene cubes. */
GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER,
                      cubeCoordinatesBufferObjectId));
GL_CHECK(glBufferData(GL_ARRAY_BUFFER,
                      cube.numberOfElementsInCoordinatesArray * sizeof(float),
                      cube.coordinates,
                      GL_STATIC_DRAW));
/* Buffer holding coordinates of normal vectors for each vertex of the scene cubes. */
GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER,
                      cubeNormalsBufferObjectId));
GL_CHECK(glBufferData(GL_ARRAY_BUFFER,
                      cube.numberOfElementsInNormalsArray * sizeof(float),
                      cube.normals,
                      GL_STATIC_DRAW));
/* Buffer holding coordinates of triangles which make up the plane. */
GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER,
                      planeCoordinatesBufferObjectId));
GL_CHECK(glBufferData(GL_ARRAY_BUFFER,
                      plane.numberOfElementsInCoordinatesArray * sizeof(float),
                      plane.coordinates,
                      GL_STATIC_DRAW));
/* Buffer holding coordinates of the plane's normal vectors. */
GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER,
                      planeNormalsBufferObjectId));
GL_CHECK(glBufferData(GL_ARRAY_BUFFER,
                      plane.numberOfElementsInNormalsArray * sizeof(float),
                      plane.normals,
                      GL_STATIC_DRAW));
/* Buffer holding coordinates of the light cube. */
GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER,
                      lightRepresentationCoordinatesBufferObjectId));
GL_CHECK(glBufferData(GL_ARRAY_BUFFER,
                      lightRepresentation.numberOfElementsInCoordinatesArray * sizeof(float),
                      lightRepresentation.coordinates,
                      GL_STATIC_DRAW));
In the program object, geometry vertices are referred to via the attributes.
in vec4 attributePosition; /* Attribute: holding coordinates of triangles that make up a geometry. */
in vec3 attributeNormals;  /* Attribute: holding normals. */
This is why we need to query for the attribute locations within the program object responsible for scene rendering (note that all of the following functions need to be called for the active program object).
cubesAndPlaneProgram.positionAttributeLocation = GL_CHECK(glGetAttribLocation(cubesAndPlaneProgram.programId, "attributePosition")); /* Attribute that is fed with the vertices of triangles that make up geometry (cube or plane). */
cubesAndPlaneProgram.normalsAttributeLocation  = GL_CHECK(glGetAttribLocation(cubesAndPlaneProgram.programId, "attributeNormals"));  /* Attribute that is fed with the normal vectors for geometry (cube or plane). */
As you can see above, we are querying for the locations of the coordinates only, without specifying the cube or plane ones. This is because we are using only one program object to render both the plane and the cubes. Rendering specific geometry is achieved by using the proper vertex attrib arrays. Let's look at how it is implemented.
GL_CHECK(glBindVertexArray(cubesVertexArrayObjectId));
/* Set values for the cubes' normal vectors. */
GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, cubeNormalsBufferObjectId));
GL_CHECK(glEnableVertexAttribArray(cubesAndPlaneProgram.normalsAttributeLocation));
GL_CHECK(glVertexAttribPointer(cubesAndPlaneProgram.normalsAttributeLocation, 3, GL_FLOAT, GL_FALSE, 0, 0));
/* Set values for the cubes' coordinates. */
GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, cubeCoordinatesBufferObjectId));
GL_CHECK(glEnableVertexAttribArray(cubesAndPlaneProgram.positionAttributeLocation));
GL_CHECK(glVertexAttribPointer(cubesAndPlaneProgram.positionAttributeLocation, 3, GL_FLOAT, GL_FALSE, 0, 0));
GL_CHECK(glBindVertexArray(planeVertexArrayObjectId));
/* Set values for the plane's normal vectors. */
GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, planeNormalsBufferObjectId));
GL_CHECK(glEnableVertexAttribArray(cubesAndPlaneProgram.normalsAttributeLocation));
GL_CHECK(glVertexAttribPointer(cubesAndPlaneProgram.normalsAttributeLocation, 3, GL_FLOAT, GL_FALSE, 0, 0));
/* Set values for the plane's coordinates. */
GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, planeCoordinatesBufferObjectId));
GL_CHECK(glEnableVertexAttribArray(cubesAndPlaneProgram.positionAttributeLocation));
GL_CHECK(glVertexAttribPointer(cubesAndPlaneProgram.positionAttributeLocation, 3, GL_FLOAT, GL_FALSE, 0, 0));
Now, by calling glBindVertexArray() with the proper parameter, we can control which object (cubes or plane) is going to be rendered. Please refer to:
GL_CHECK(glBindVertexArray(cubesVertexArrayObjectId));
GL_CHECK(glBindVertexArray(planeVertexArrayObjectId));
The final thing is to make the actual draw call, which can be achieved by:
GL_CHECK(glDrawArrays(GL_TRIANGLES, 0, plane.numberOfPoints));
We wanted to draw two cubes laid on a plane. This is why we use the glDrawArraysInstanced() call rather than glDrawArrays(). Thanks to that, exactly 2 instances of the same object will be drawn on screen.
GL_CHECK(glDrawArraysInstanced(GL_TRIANGLES, 0, cube.numberOfPoints, 2));
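Putting the above together, a minimal sketch of a helper that draws both geometries could look as follows. The drawCubesAndPlane() name is hypothetical (the sample's own draw() routine, used later in this tutorial, additionally sets per-geometry uniforms, as shown further below).
/* Sketch only: bind each vertex array object and issue the matching draw call. */
void drawCubesAndPlane()
{
    /* Draw the two cube instances. */
    GL_CHECK(glBindVertexArray(cubesVertexArrayObjectId));
    GL_CHECK(glDrawArraysInstanced(GL_TRIANGLES, 0, cube.numberOfPoints, 2));
    /* Draw the plane. */
    GL_CHECK(glBindVertexArray(planeVertexArrayObjectId));
    GL_CHECK(glDrawArrays(GL_TRIANGLES, 0, plane.numberOfPoints));
}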
Calculate a shadow map
To calculate the shadow map we need to create a depth texture, which will be used to store the results. This is achieved in a few basic steps, which you should already know, but let us describe them one more time.
Generate a texture object and bind it to the GL_TEXTURE_2D target.
GL_CHECK(glGenTextures(1,
&shadowMap.textureName));
GL_CHECK(glBindTexture(GL_TEXTURE_2D,
shadowMap.textureName));
Specify the texture storage data type.
GL_CHECK(glTexStorage2D(GL_TEXTURE_2D,
1,
GL_DEPTH_COMPONENT24,
shadowMap.width,
shadowMap.height));
We wanted our shadow to be more precise, which is why the depth texture resolution is bigger than the normal scene size. Please refer to:
window.height    = height;
window.width     = width;
shadowMap.height = window.height * 2;
shadowMap.width  = window.width  * 2;
Set the texture object parameters. The new thing here is to set GL_TEXTURE_COMPARE_MODE to the value of GL_COMPARE_REF_TO_TEXTURE, which leads to the r texture coordinate being compared to the value in the currently bound depth texture.
GL_CHECK(glTexParameteri(GL_TEXTURE_2D,
GL_TEXTURE_MIN_FILTER,
GL_NEAREST));
GL_CHECK(glTexParameteri(GL_TEXTURE_2D,
GL_TEXTURE_MAG_FILTER,
GL_NEAREST));
GL_CHECK(glTexParameteri(GL_TEXTURE_2D,
GL_TEXTURE_WRAP_S,
GL_CLAMP_TO_EDGE));
GL_CHECK(glTexParameteri(GL_TEXTURE_2D,
GL_TEXTURE_WRAP_T,
GL_CLAMP_TO_EDGE));
GL_CHECK(glTexParameteri(GL_TEXTURE_2D,
GL_TEXTURE_COMPARE_FUNC,
GL_LEQUAL));
GL_CHECK(glTexParameteri(GL_TEXTURE_2D,
GL_TEXTURE_COMPARE_MODE,
GL_COMPARE_REF_TO_TEXTURE));
The next thing we have to do to implement the render-to-texture mechanism is to:
Generate a framebuffer object.
GL_CHECK(glGenFramebuffers(1,
&shadowMap.framebufferObjectName));
GL_CHECK(glBindFramebuffer(GL_FRAMEBUFFER,
shadowMap.framebufferObjectName));
Bind the depth texture object to the depth attachment of the framebuffer object.
GL_CHECK(glFramebufferTexture2D(GL_FRAMEBUFFER,
GL_DEPTH_ATTACHMENT,
GL_TEXTURE_2D,
shadowMap.textureName,
0));
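Although not shown in the original listing, it is common practice to verify at this point that the framebuffer with the depth attachment is complete before rendering into it. A minimal sketch (the framebufferStatus variable name is illustrative):
/* Optional sanity check: make sure the shadow map framebuffer is complete. */
GLenum framebufferStatus = GL_CHECK(glCheckFramebufferStatus(GL_FRAMEBUFFER));
ASSERT(framebufferStatus == GL_FRAMEBUFFER_COMPLETE, "Shadow map framebuffer is not complete.");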
We wanted the spot light source position to be updated every frame. This is why the shadow map needs to be updated as well, as the perspective from which the spot light is "looking into" the scene is different for each frame.
lightProjectionMatrix = Matrix::matrixPerspective(degreesToRadians(90.0f),
                                                  1.0f,
                                                  1.0f,
                                                  50.0f);
light.position.x = radius * sinf(time / 2.0f);
light.position.y = 2.0f;
light.position.z = radius * cosf(time / 2.0f);
/* Direction of light. */
light.direction.x = lookAtPoint.x - light.position.x;
light.direction.y = lookAtPoint.y - light.position.y;
light.direction.z = lookAtPoint.z - light.position.z;
/* Normalize the light direction vector. */
light.direction.normalize();
GL_CHECK(glUniform3fv(cubesAndPlaneProgram.lightDirectionLocation, 1, (float*)&light.direction));
GL_CHECK(glUniform3fv(cubesAndPlaneProgram.lightPositionLocation,  1, (float*)&light.position));
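The vertex shader shown later also consumes a lightViewMatrix uniform, whose per-frame construction is not part of this excerpt. The sketch below shows one standard way such a view matrix can be derived from the light position and the look-at point; the lookAt() helper and the lightViewMatrixLocation uniform location name are hypothetical and not taken from the SDK sample.
#include <cmath>
/* Hypothetical helper: build a right-handed look-at (view) matrix in column-major order. */
static void lookAt(const float eye[3], const float target[3], const float up[3], float result[16])
{
    /* f = normalize(target - eye) */
    float f[3] = { target[0] - eye[0], target[1] - eye[1], target[2] - eye[2] };
    float fLen = sqrtf(f[0] * f[0] + f[1] * f[1] + f[2] * f[2]);
    f[0] /= fLen; f[1] /= fLen; f[2] /= fLen;
    /* s = normalize(cross(f, up)) */
    float s[3] = { f[1] * up[2] - f[2] * up[1],
                   f[2] * up[0] - f[0] * up[2],
                   f[0] * up[1] - f[1] * up[0] };
    float sLen = sqrtf(s[0] * s[0] + s[1] * s[1] + s[2] * s[2]);
    s[0] /= sLen; s[1] /= sLen; s[2] /= sLen;
    /* u = cross(s, f) */
    float u[3] = { s[1] * f[2] - s[2] * f[1],
                   s[2] * f[0] - s[0] * f[2],
                   s[0] * f[1] - s[1] * f[0] };
    /* Rotation rows (s, u, -f) plus translation -R * eye, stored column-major. */
    result[0] =  s[0]; result[4] =  s[1]; result[8]  =  s[2]; result[12] = -(s[0] * eye[0] + s[1] * eye[1] + s[2] * eye[2]);
    result[1] =  u[0]; result[5] =  u[1]; result[9]  =  u[2]; result[13] = -(u[0] * eye[0] + u[1] * eye[1] + u[2] * eye[2]);
    result[2] = -f[0]; result[6] = -f[1]; result[10] = -f[2]; result[14] =  (f[0] * eye[0] + f[1] * eye[1] + f[2] * eye[2]);
    result[3] =  0.0f; result[7] =  0.0f; result[11] =  0.0f; result[15] =  1.0f;
}
/* Rebuild the light's view matrix from the updated position and upload it. */
float lightViewMatrixData[16];
const float eye[3]    = { light.position.x, light.position.y, light.position.z };
const float target[3] = { lookAtPoint.x, lookAtPoint.y, lookAtPoint.z };
const float up[3]     = { 0.0f, 1.0f, 0.0f };
lookAt(eye, target, up, lightViewMatrixData);
GL_CHECK(glUniformMatrix4fv(cubesAndPlaneProgram.lightViewMatrixLocation, 1, GL_FALSE, lightViewMatrixData));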
In the shader, we are using a uniform: a boolean flag indicating whether the plane or the cubes are being rendered. Thanks to that, a different position is used, specific to each geometry.
if (shouldRenderPlane)
{
    modelPosition = planePosition;
}
else
{
    modelPosition = vec3(cubesPosition[gl_InstanceID].x, cubesPosition[gl_InstanceID].y, cubesPosition[gl_InstanceID].z);
}
Get the uniform location.
cubesAndPlaneProgram.shouldRenderPlaneLocation = GL_CHECK(glGetUniformLocation(cubesAndPlaneProgram.programId, "shouldRenderPlane")); /* Uniform holding a boolean value indicating which geometry is being drawn: cube or plane. */
Set the uniform value. False, if cubes are rendered.
GL_CHECK(glUniform1i(cubesAndPlaneProgram.shouldRenderPlaneLocation,
false));
True, if a plane is rendered.
GL_CHECK(glUniform1i(cubesAndPlaneProgram.shouldRenderPlaneLocation,
true));
Because the shadow map texture is bigger than the normal scene (as already mentioned above), we have to remember to adjust the viewport.
GL_CHECK(glViewport(0, 0, shadowMap.width, shadowMap.height));
Our scene is rather simple: there are two cubes placed on top of a plane. We can introduce an optimisation here: back faces will be culled. We are also setting the polygon offset to eliminate z-fighting in the shadows. Those settings take effect only if enabled.
/* Set the polygon offset, used when rendering into the shadow map, to eliminate z-fighting in the shadows. */
GL_CHECK(glPolygonOffset(1.0, 0.0));
GL_CHECK(glCullFace(GL_BACK));
GL_CHECK(glEnable(GL_POLYGON_OFFSET_FILL));
What we need to do is to enable depth testing. When this is enabled, the depth values will be compared and the result will be stored in the depth buffer.
/* Enable depth test to do comparison of depth values. */
GL_CHECK(glEnable(GL_DEPTH_TEST));
In this step, we want to generate the depth values only, which means we are allowed to disable writing to each framebuffer colour component.
/* Disable writing of each framebuffer color component. */
GL_CHECK(glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE));
Finally we are ready for the actual rendering.
draw(false);
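The final pass, rendered from the camera's point of view, is not shown in this excerpt. A hedged sketch of the state that typically has to be restored before it, assuming the boolean argument of draw() selects the camera point of view:
/* Sketch only: switch back to the default framebuffer, restore the on-screen
 * viewport, re-enable colour writes, and render the scene from the camera. */
GL_CHECK(glBindFramebuffer(GL_FRAMEBUFFER, 0));
GL_CHECK(glViewport(0, 0, window.width, window.height));
GL_CHECK(glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE));
GL_CHECK(glDisable(GL_POLYGON_OFFSET_FILL));
draw(true);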
If we would like to use the generated depth texture data in a program object, it is enough to query for a shadow sampler uniform location and set the depth texture object as the input value for this uniform.
cubesAndPlaneProgram.shadowMapLocation = GL_CHECK(glGetUniformLocation(cubesAndPlaneProgram.programId, "shadowMap"));
/* Set active texture. Shadow map texture will be passed to shader. */
GL_CHECK(glActiveTexture(GL_TEXTURE0));
GL_CHECK(glBindTexture(GL_TEXTURE_2D, shadowMap.textureName));
GL_CHECK(glUniform1i(cubesAndPlaneProgram.shadowMapLocation, 0));
These are basically all the steps we need to perform on the API side. The main mechanism of the shadow mapping technique is handled by the program object. Please look at the shaders shown below.
Vertex shader code
/* Number of cubes to be drawn. */
#define numberOfCubes 2
/* [Define attributes] */
in vec4 attributePosition; /* Attribute: holding coordinates of triangles that make up a geometry. */
in vec3 attributeNormals;  /* Attribute: holding normals. */
/* [Define attributes] */
uniform mat4 cameraProjectionMatrix; /* Projection matrix from camera point of view. */
uniform mat4 lightProjectionMatrix;  /* Projection matrix from light point of view. */
uniform mat4 lightViewMatrix;        /* View matrix from light point of view. */
uniform vec3 cameraPosition;         /* Camera position which we use to calculate view matrix for final pass. */
uniform vec3 lightPosition;          /* Vector of position of spot light source. */
uniform bool isCameraPointOfView;    /* If true: perform calculations from camera point of view, else: from light point of view. */
uniform bool shouldRenderPlane;      /* If true: draw plane, else: draw cubes. */
uniform vec3 planePosition;          /* Position of plane used to calculate translation matrix for a plane. */
/* Uniform block holding data used for rendering cubes (position of cubes) - used to calculate translation matrix for each cube in world space. */
uniform cubesDataUniformBlock
{
    vec4 cubesPosition[numberOfCubes];
};
out vec4 outputLightPosition;       /* Output variable: vector of position of spot light source translated into eye-space. */
out vec3 outputNormal;              /* Output variable: normal vector for the coordinates. */
out vec4 outputPosition;            /* Output variable: vertex coordinates expressed in eye space. */
out mat4 outputViewToTextureMatrix; /* Output variable: matrix we will use in the fragment shader to sample the shadow map for given fragment. */
void main()
{
    /* View matrix calculated from camera point of view. */
    mat4 cameraViewMatrix;
    /* Matrices and vectors used for calculating output variables. */
    vec3 modelPosition;
    mat4 modelViewMatrix;
    mat4 modelViewProjectionMatrix;
    /* Model consists of plane and cubes (each of them has different colour and position). */
    /* [Use different position for a specific geometry] */
    if (shouldRenderPlane)
    {
        modelPosition = planePosition;
    }
    else
    {
        modelPosition = vec3(cubesPosition[gl_InstanceID].x, cubesPosition[gl_InstanceID].y, cubesPosition[gl_InstanceID].z);
    }
    /* [Use different position for a specific geometry] */
    /* Create transformation matrix (translation of a model). */
    mat4 translationMatrix = mat4(1.0,             0.0,             0.0,             0.0,
                                  0.0,             1.0,             0.0,             0.0,
                                  0.0,             0.0,             1.0,             0.0,
                                  modelPosition.x, modelPosition.y, modelPosition.z, 1.0);
    /* Compute matrices for camera point of view. */
    if (isCameraPointOfView == true)
    {
        cameraViewMatrix = mat4(1.0,               0.0,               0.0,               0.0,
                                0.0,               1.0,               0.0,               0.0,
                                0.0,               0.0,               1.0,               0.0,
                                -cameraPosition.x, -cameraPosition.y, -cameraPosition.z, 1.0);
        /* Compute model-view matrix. */
        modelViewMatrix = cameraViewMatrix * translationMatrix;
        /* Compute model-view-perspective matrix. */
        modelViewProjectionMatrix = cameraProjectionMatrix * modelViewMatrix;
    }
    /* Compute matrices for light point of view. */
    else
    {
        /* Compute model-view matrix. */
        modelViewMatrix = lightViewMatrix * translationMatrix;
        /* Compute model-view-perspective matrix. */
        modelViewProjectionMatrix = lightProjectionMatrix * modelViewMatrix;
    }
    /* [Define bias matrix] */
    /* Bias matrix used to map values from a <-1, 1> range (normalized device coordinates) to a <0, 1> range (texture coordinates). */
    const mat4 biasMatrix = mat4(0.5, 0.0, 0.0, 0.0,
                                 0.0, 0.5, 0.0, 0.0,
                                 0.0, 0.0, 0.5, 0.0,
                                 0.5, 0.5, 0.5, 1.0);
    /* [Define bias matrix] */
    /* Calculate normal matrix. */
    mat3 normalMatrix = transpose(inverse(mat3x3(modelViewMatrix)));
    /* Calculate and set output vectors. */
    outputLightPosition = modelViewMatrix * vec4(lightPosition, 1.0);
    outputNormal        = normalMatrix * attributeNormals;
    outputPosition      = modelViewMatrix * attributePosition;
    if (isCameraPointOfView)
    {
        /* [Calculate matrix that will be used to convert camera to eye space] */
        outputViewToTextureMatrix = biasMatrix * lightProjectionMatrix * lightViewMatrix * inverse(cameraViewMatrix);
        /* [Calculate matrix that will be used to convert camera to eye space] */
    }
    /* Multiply model-space coordinates by the model-view-projection matrix to bring them into clip space. */
    gl_Position = modelViewProjectionMatrix * attributePosition;
}
We use one program object to render the cubes and plane from the camera and light point of view. The vertex shader just uses different input data to render the specific geometry, and different matrices are used for translating the vertices into a specific space. There is, however, one important step which has not been mentioned before.
We first render the geometry from the spot light's point of view to get the depth values, which are then stored in the shadowMap texture. Later, when the camera's point of view is taken into account, we need to sample that texture to get the depth value of a specific fragment. We have to somehow convert one space into another, and this is why we are calculating the outputViewToTextureMatrix matrix.
A bias matrix helps us with converting coordinates from a <-1, 1> range to values from the texture coordinate range <0, 1>.
/* Bias matrix used to map values from a <-1, 1> range (normalized device coordinates) to a <0, 1> range (texture coordinates). */
const mat4 biasMatrix = mat4(0.5, 0.0, 0.0, 0.0,
                             0.0, 0.5, 0.0, 0.0,
                             0.0, 0.0, 0.5, 0.0,
                             0.5, 0.5, 0.5, 1.0);
outputViewToTextureMatrix = biasMatrix * lightProjectionMatrix * lightViewMatrix * inverse(cameraViewMatrix);
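As a quick worked check: GLSL mat4 constructors are column-major, so multiplying biasMatrix by a point (x, y, z, 1) gives (0.5 * x + 0.5, 0.5 * y + 0.5, 0.5 * z + 0.5, 1). After the perspective division performed in the fragment shader, this is equivalent to mapping each normalized device coordinate n to 0.5 * n + 0.5; for example, n = -1 maps to a texture coordinate of 0 and n = +1 maps to 1.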
The whole idea is represented with the schema shown below.
Converting camera eye space to spot light NDC space schema.
Once we have this value, we are ready to perform the fragment shader operations. Directional lighting is implemented, which should be clear to the reader. Spot light calculations are also performed.
Fragment shader code
precision highp float;
precision highp sampler2DShadow;
in vec4 outputLightPosition;       /* Vector of the spot light position translated into eye-space. */
in vec3 outputNormal;              /* Normal vector for the coordinates. */
in vec4 outputPosition;            /* Vertex coordinates expressed in eye space. */
in mat4 outputViewToTextureMatrix; /* Matrix we will use in the fragment shader to sample the shadow map for given fragment. */
uniform vec4 colorOfGeometry;      /* Colour of the geometry. */
uniform vec3 lightDirection;       /* Normalized direction vector for the spot light. */
uniform sampler2DShadow shadowMap; /* Sampler of the depth texture used for shadow-mapping. */
out vec4 color; /* Output colour variable. */
#define PI 3.14159265358979323846
/* Structure holding properties of the directional light. */
struct DirectionalLight
{
    float ambient;   /* Value of ambient intensity for directional lighting of a scene. */
    vec3  color;     /* Colour of the directional light. */
    vec3  direction; /* Direction for the directional light. */
};
/* Structure holding properties of the spot light. */
struct SpotLight
{
    float ambient;              /* Value of ambient intensity for spot lighting. */
    float angle;                /* Angle between spot light direction and cone face. */
    float spotExponent;         /* Value indicating intensity distribution of light. */
    float constantAttenuation;  /* Value of light's attenuation. */
    float linearAttenuation;    /* Value of linear light's attenuation. */
    float quadraticAttenuation; /* Value of quadratic light's attenuation. */
    vec3  direction;            /* Vector of direction of spot light. */
    vec4  position;             /* Coordinates of position of spot light source. */
};
void main()
{
    DirectionalLight directionalLight;
    directionalLight.ambient   = 0.01;
    directionalLight.color     = vec3(1.0, 1.0, 1.0);
    directionalLight.direction = vec3(0.2, -1.0, -0.2);
    SpotLight spotLight;
    spotLight.ambient              = 0.1;
    spotLight.angle                = 30.0;
    spotLight.spotExponent         = 2.0;
    spotLight.constantAttenuation  = 1.0;
    spotLight.linearAttenuation    = 0.1;
    spotLight.quadraticAttenuation = 0.9;
    spotLight.direction            = lightDirection;
    spotLight.position             = outputLightPosition;
    /* Compute distance between the light position and the fragment position. */
    float xDistanceFromLightToVertex = (spotLight.position.x - outputPosition.x);
    float yDistanceFromLightToVertex = (spotLight.position.y - outputPosition.y);
    float zDistanceFromLightToVertex = (spotLight.position.z - outputPosition.z);
    float distanceFromLightToVertex  = sqrt((xDistanceFromLightToVertex * xDistanceFromLightToVertex) +
                                            (yDistanceFromLightToVertex * yDistanceFromLightToVertex) +
                                            (zDistanceFromLightToVertex * zDistanceFromLightToVertex));
    /* Directional light. */
    /* Calculate the value of diffuse intensity. */
    float diffuseIntensity = max(0.0, -dot(outputNormal, normalize(directionalLight.direction)));
    /* Calculate colour for directional lighting. */
    color = colorOfGeometry * vec4(directionalLight.color * (directionalLight.ambient + diffuseIntensity), 1.0);
    /* Spot light. */
    /* Compute the dot product between normal and light direction. */
    float normalDotLight = max(dot(normalize(outputNormal), normalize(-spotLight.direction)), 0.0);
    /* Shadow. */
    /* Position of the vertex translated to texture space. */
    vec4 vertexPositionInTexture = outputViewToTextureMatrix * outputPosition;
    /* Normalized position of the vertex translated to texture space. */
    vec4 normalizedVertexPositionInTexture = vec4(vertexPositionInTexture.x / vertexPositionInTexture.w,
                                                  vertexPositionInTexture.y / vertexPositionInTexture.w,
                                                  vertexPositionInTexture.z / vertexPositionInTexture.w,
                                                  1.0);
    /* Depth value retrieved from the shadow map. */
    float shadowMapDepth = textureProj(shadowMap, normalizedVertexPositionInTexture);
    /* Depth value retrieved from drawn model. */
    float modelDepth = normalizedVertexPositionInTexture.z;
    /* Calculate vector from position of light to position of fragment. */
    vec3 vectorFromLightToFragment = vec3(outputPosition.x - spotLight.position.x,
                                          outputPosition.y - spotLight.position.y,
                                          outputPosition.z - spotLight.position.z);
    /* Calculate cosine value of angle between vectorFromLightToFragment and vector of spot light direction. */
    float cosinusAlpha = dot(spotLight.direction, vectorFromLightToFragment) /
                         (sqrt(dot(spotLight.direction, spotLight.direction)) *
                          sqrt(dot(vectorFromLightToFragment, vectorFromLightToFragment)));
    /* Calculate angle for cosine value. */
    float alpha = acos(cosinusAlpha);
    /*
     * Check angles. If alpha is less than spotLight.angle then the fragment is inside the light cone.
     * Otherwise the fragment is outside the cone - it is not lit by the spot light.
     */
    const float shadowMapBias = 0.00001;
    if (alpha