For my specialisation I wanted to create portals in the engine we have been building over the year. I really like the idea of portals since they make for a cool feature and allow the creation of spaces unlike anything in the real world. I also like how something so close to rendering can have such a big impact on the gameplay. My biggest inspirations were Antichamber and Manifold Garden. In those games the portals aren’t all that obvious at first glance, but they are used to create something that can’t exist in the real world.
I set two extra goals for my portals. The first was recursion, which I unfortunately didn't complete in time. My other goal was to have gravity be affected by the portals. That means that if one of the portals is turned by 90 degrees, then after walking through it what was your left is now your down. However, due to how portals work, it will feel like you're still falling down; it is just the rest of the world that has rotated. My goal was to create an M.C. Escher-like scene, but due to covid-19 this project did not develop alongside our first person game as I had intended. This meant I had to rearrange my planning, and I did not get as far as I had hoped.
My first task was getting the view of a camera onto an object. At this stage our engine only had the main camera. To get its texture onto an object I created a portal component, and whenever the renderer encountered an object with that component it simply grabbed the frame texture instead of the object's regular texture.
The next step was giving the portals their own cameras and textures. The main camera was a single hard-coded instance at the time, so I had to make it a component and have the rendering system use the currently active camera. This meant I could give every portal a camera and offset it a few steps behind the portal. Next was the texture. Due to how our ECS is constructed, components are const when accessed from a system. This meant that if the cameras themselves held their textures, the textures would be at least one frame old, and I didn't want that. Instead I added a map of textures to the render system, with the component pointer as the key. Now whenever a camera is constructed it tells the system to create a texture, and when it is destructed it tells the system to remove said texture.
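The camera-to-texture map can be sketched as below. This is a minimal illustration, not the engine's actual code; `Camera`, `Texture`, and `RenderSystem` are stand-in names, and the texture dimensions are placeholders.

```cpp
#include <cassert>
#include <unordered_map>

// Hypothetical stand-ins for the engine's camera component and render texture.
struct Camera {};
struct Texture { int width = 0; int height = 0; };

class RenderSystem {
public:
    // Called from the camera component's constructor.
    void RegisterCamera(const Camera* aCamera) {
        myTextures.emplace(aCamera, Texture{1280, 720});
    }
    // Called from the camera component's destructor.
    void UnregisterCamera(const Camera* aCamera) {
        myTextures.erase(aCamera);
    }
    // Const component access is fine here: the pointer is only used as a key.
    Texture* GetTexture(const Camera* aCamera) {
        auto it = myTextures.find(aCamera);
        return it != myTextures.end() ? &it->second : nullptr;
    }
private:
    std::unordered_map<const Camera*, Texture> myTextures;
};
```

Keying on the component pointer means the system owns the mutable texture while the component itself can stay const, which is the point of the workaround.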
This was when I realised that by placing the portal camera a few steps back, it would render its own view rather than the view of the connected portal, and the portal itself also blocked the camera's view. To fix this I added the ID of the other portal to the portal component. Now, instead of calculating its camera transform relative to its own transform, a portal takes the transform of the other one, and when culling it also excludes that other portal. With that, I had a static view in the portal.
To have the camera move, I had to calculate the player camera's transform relative to the portal it is paired with. This is accomplished with some matrix multiplication. First, the matrix of the camera is multiplied with the inverse matrix of portal A. This gives us the main camera's transform in the local space of portal A. The resulting matrix is then multiplied with the matrix of portal B, which takes the camera matrix out of local space again. However, since we used the matrix of portal B to take the camera out of local space, what was the camera's delta to portal A is now its delta to portal B.
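In matrix form this is M' = B · A⁻¹ · M. A hedged sketch of that calculation is below, using a minimal rigid transform (3x3 rotation plus translation) rather than the engine's real math types; for a rigid transform the inverse is just the transposed rotation and a rotated, negated translation. Depending on how the portal's forward axes are modelled, a real implementation may also need an extra 180° turn so you exit facing out of portal B.

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-ins for the engine's math types (illustrative only).
struct Vec3 { float x, y, z; };
struct Transform {
    float r[3][3]; // rotation, row-major, column-vector convention
    Vec3 t;        // translation
};

Vec3 Rotate(const Transform& m, Vec3 v) {
    return { m.r[0][0]*v.x + m.r[0][1]*v.y + m.r[0][2]*v.z,
             m.r[1][0]*v.x + m.r[1][1]*v.y + m.r[1][2]*v.z,
             m.r[2][0]*v.x + m.r[2][1]*v.y + m.r[2][2]*v.z };
}

// a * b: apply b first, then a.
Transform Mul(const Transform& a, const Transform& b) {
    Transform out{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                out.r[i][j] += a.r[i][k] * b.r[k][j];
    const Vec3 bt = Rotate(a, b.t);
    out.t = { bt.x + a.t.x, bt.y + a.t.y, bt.z + a.t.z };
    return out;
}

// Inverse of a rigid transform: transpose the rotation, rotate and negate t.
Transform Inverse(const Transform& m) {
    Transform out{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            out.r[i][j] = m.r[j][i];
    const Vec3 it = Rotate(out, m.t);
    out.t = { -it.x, -it.y, -it.z };
    return out;
}

// Express the camera in portal A's local space, then take it back to
// world space through portal B.
Transform PortalCamera(const Transform& cam,
                       const Transform& portalA,
                       const Transform& portalB) {
    return Mul(portalB, Mul(Inverse(portalA), cam));
}
```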
Now, when things were moving, I noticed that some things moved differently, and all of the portals showed the same view. Both of these problems turned out to be caused by our deferred-context rendering. We had set it up so that each renderer had its own context, to allow threading per renderer. This meant that each renderer got the texture at a different stage of completion, and it also resulted in only the last rendered camera's texture being shown. My fix was to have a context per camera instead of a context per renderer. I also saw this as a potential performance increase, since each portal adds a lot to render, especially if I were to add recursion, and having the portals rendered in parallel could really help. To enable this, I made the map in the render system hold a struct containing both the texture and the context, since access to that map was already set up. Now the first thing a renderer does is fetch the context for the active camera.
Now, when spawning my portals, they flickered a lot. After some debugging I realised it was because they were all rendered threaded, but each renderer only had one set of buffers. I fixed this by giving each renderer a map for its buffers, with the context as the key. Now, when a renderer encounters a context for the first time, it creates the buffers for it.
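The lazily created per-context buffers can be sketched like this. Again a minimal illustration, assuming hypothetical `Context` and `Buffers` stand-ins for the engine's deferred context and GPU buffers.

```cpp
#include <cassert>
#include <cstddef>
#include <unordered_map>

// Illustrative stand-ins; the real types would wrap API objects.
struct Context {};
struct Buffers { bool initialized = false; };

class Renderer {
public:
    Buffers& GetBuffers(const Context* aContext) {
        auto it = myBuffers.find(aContext);
        if (it == myBuffers.end()) {
            // First time this context is seen: create its buffer set.
            Buffers b;
            b.initialized = true;
            it = myBuffers.emplace(aContext, b).first;
        }
        return it->second;
    }
    std::size_t BufferSetCount() const { return myBuffers.size(); }
private:
    std::unordered_map<const Context*, Buffers> myBuffers;
};
```

With one buffer set per context, two cameras recording in parallel no longer stomp on each other's in-flight data.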
By now the portal shows everything the camera renders, which is not what we want.
What we want is for it to show only the parts of the texture that could possibly be seen through the portal. To do this, instead of using the model's UVs for the texture, we use its screen-space coordinates as UVs when rendering. Since the vertices', and thereby the pixels', screen-space coordinates are already calculated during the vertex shader, one line in the pixel shader is all it takes to fix this.
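The calculation behind that one shader line can be mirrored in C++ as below: perspective-divide the clip-space position and remap [-1,1] to [0,1], so each pixel samples the portal texture at its own screen position. This is a sketch, and the V flip assumes a D3D-style convention where texture V grows downwards.

```cpp
#include <cassert>
#include <cmath>

// Stand-ins for shader float2/float4 types.
struct Float2 { float u, v; };
struct Float4 { float x, y, z, w; };

// Clip-space position -> screen-space UV for sampling the portal texture.
Float2 ScreenSpaceUV(Float4 clipPos) {
    const float ndcX = clipPos.x / clipPos.w; // perspective divide
    const float ndcY = clipPos.y / clipPos.w;
    // Remap NDC [-1,1] to UV [0,1], flipping V.
    return { ndcX * 0.5f + 0.5f, 1.0f - (ndcY * 0.5f + 0.5f) };
}
```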
Now, with the portals looking good, the next step is teleporting the player when they step through.
I gave each portal a collider. The entities that are to go through portals, in this case the player and the camera, were given a portal traveller component. The purpose of the traveller component is to keep track of the portals the entity is in contact with and how far it is from stepping through.
My first iteration of this was a system that checks if a traveller has stepped through the portal by taking the dot product of the portal's forward and the normalized delta to the portal. When the sign of the dot changed, I knew the traveller had stepped through. To teleport, the transform was calculated using the same formula that was used to get the portal camera's transform. This did teleport the player, but most of the time there was some flickering when going through. Because of our engine's PhysX implementation, entities with physics components, like the player, have their transforms controlled by the physics. This resulted in the physics and the transform occasionally not agreeing on which frame the player travelled through the portal. Because of this, the current version of the traveller system only keeps track of the portals the entity is in contact with, and when the physics component moves it checks those portals.
The next source of flickering was that our ECS stopped me from changing a component mid-frame, so one frame from behind the portal was rendered before teleporting. My first solution was to attempt to predict when the traveller was about to step through. This worked better, but I noticed that it worked better on one side of the portal: that side was further away from the portal's origin, which meant the angle to the portal changed at a different rate on each side, so the prediction quality differed. To remove this problem I skipped normalizing the delta, since the dot product then gives the distance along the portal's forward vector, which gave better results.
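The side test and the unnormalized variant can be sketched as follows. Names are illustrative; the idea is that the dot of the portal's forward with the unnormalized delta is a signed distance to the portal plane, and a sign change between frames means the traveller stepped through.

```cpp
#include <cassert>
#include <cmath>

struct V3 { float x, y, z; };

// Signed distance from the portal plane along its forward vector.
// Skipping normalization of the delta is what makes this a distance,
// which predicts the crossing better than the raw angle did.
float SignedDistance(V3 portalPos, V3 portalForward, V3 travellerPos) {
    const V3 delta{ travellerPos.x - portalPos.x,
                    travellerPos.y - portalPos.y,
                    travellerPos.z - portalPos.z };
    return portalForward.x * delta.x
         + portalForward.y * delta.y
         + portalForward.z * delta.z;
}

// Crossing happened if the signed distance changed sign this frame.
bool SteppedThrough(float previousDistance, float currentDistance) {
    return (previousDistance > 0.0f) != (currentDistance > 0.0f);
}
```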
There was still occasional flickering when walking through the portal, since the portal has depth, which means that sometimes when walking through you see the inside of the arriving portal. The portal needs to have depth, since otherwise it will be clipped by the near plane when walking through it at certain angles. I solved both of these problems by calculating the scale and offset of the model when an entity with the portal component is rendered.
Even though this works, I don't really like the solution, since the renderer has to test every entity for a portal component. If I were to redo this, I would instead move it to a vertex shader for the portal.
Now, with the player teleporting, it is time to have the portals alter gravity. This was accomplished by having the player's PhysX component transform its up vector using the same matrix multiplication. Now that gravity changes, a lot of game code that assumes up is (0,0,1,0), like the code for looking around, starts behaving oddly. To fix that, all code that assumed a fixed up was changed to get up from the player's transform.
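Since up is a direction, only the rotation part of the portal-pair matrix matters, and for a pure rotation the inverse is the transpose. A hedged sketch with illustrative types:

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-ins; the real code would use the engine's math types.
struct Up3 { float x, y, z; };
struct Rot3 { float m[3][3]; };

Up3 Apply(const Rot3& r, Up3 v) {
    return { r.m[0][0]*v.x + r.m[0][1]*v.y + r.m[0][2]*v.z,
             r.m[1][0]*v.x + r.m[1][1]*v.y + r.m[1][2]*v.z,
             r.m[2][0]*v.x + r.m[2][1]*v.y + r.m[2][2]*v.z };
}

Rot3 Transpose(const Rot3& r) {
    Rot3 out{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            out.m[i][j] = r.m[j][i];
    return out;
}

Rot3 Mul3(const Rot3& a, const Rot3& b) {
    Rot3 out{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                out.m[i][j] += a.m[i][k] * b.m[k][j];
    return out;
}

// New up = R_B * R_A^T * old up: the rotation part of the same
// portal-pair transform that teleports the player.
Up3 PortalGravityUp(const Rot3& rotA, const Rot3& rotB, Up3 up) {
    return Apply(Mul3(rotB, Transpose(rotA)), up);
}
```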
And here is the finished result: