This describes how graphics are rendered on the client side: recalculating positions from world to screen coordinates, calculating shifts towards pivot points, and more.
The task of displaying content on the player's screen boils down to converting a world coordinate to a screen coordinate with respect to the camera.
To explain this topic, let's start with some definitions:
Is a position maintained on both the server side and the client side (the server sends it to the client). It uses cartesian plane coordinates, and the server sends only these positions (when objects are moving) for all objects in the game world. Each game object has this type of position, without exception!
Referred to as VAI, this is explained in depth in position modulation; the brief idea is that a VAI is a real world position, but modulated onto virtual grid coordinates.
Also known as client position; this is the position of the mouse with respect to the screen.
By retrieving the mouse position, you get the client position. The retrieved position is in the following format:
\(pos(x,y) = [X, Y]\)
\(X \in [0, screenW]\)
\(Y \in [0, screenH]\)
which means that \([0,0]\) is the origin of the screen: the upper-left corner of the screen (or HTML element).
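A minimal sketch of retrieving this client position. The helper below is illustrative (names like `toClientPos` are not from the codebase); it maps a mouse event's page coordinates into the element-local `[0, screenW] × [0, screenH]` range described above:

```typescript
interface ClientPos { x: number; y: number; }

// Convert page coordinates into a client position relative to an element's
// upper-left corner, clamped to the [0, screenW] x [0, screenH] range.
function toClientPos(
  pageX: number, pageY: number,
  elemLeft: number, elemTop: number,
  screenW: number, screenH: number
): ClientPos {
  const x = Math.min(Math.max(pageX - elemLeft, 0), screenW);
  const y = Math.min(Math.max(pageY - elemTop, 0), screenH);
  return { x, y };
}
```

In a real handler this would be fed from standard DOM values, e.g. `e.clientX` / `e.clientY` and the canvas's `getBoundingClientRect()`.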
Is simply the position at which we draw the target texture (or element).
If the position were \([0,0]\), the texture would be drawn in the upper-left corner of the screen / canvas.
The position can, however, lie outside the range of the screen, meaning the texture will overflow outside the user's screen.
Is an object that tells which subarea of the world the player is looking at.
Mathematically, it is a point in the game world that is always displayed at the centre of the user's screen.
The camera's position is always a real world position.
Relative to that point, real world positions are converted to screen positions and vice versa.
The server sends positions to the client as real world positions,
or alternatively as VAI (explained in position modulation).
When a VAI is sent, the client uses a function to compute the real world position.
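A hedged sketch of that VAI-to-world function. The grid cell size and the "cell centre" convention here are assumptions for illustration, not documented values:

```typescript
// ASSUMPTION: one VAI grid cell spans CELL_SIZE world units; the real value
// lives in the position modulation logic, not here.
const CELL_SIZE = 64;

// Map a VAI grid coordinate back to a real world position (cell centre).
function vaiToWorld(vaiX: number, vaiY: number): { x: number; y: number } {
  return {
    x: vaiX * CELL_SIZE + CELL_SIZE / 2,
    y: vaiY * CELL_SIZE + CELL_SIZE / 2,
  };
}
```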
The task is: given the camera position and the object's real world position, compute the object's draw position such that the object is displayed correctly with respect to the camera.
However awful this might sound, the implementation is even more awful, trust me.
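The conversion above can be sketched as follows. Assumptions: the world uses standard cartesian axes (y up), the screen uses a top-left origin (y down), and the camera's world position maps to the exact centre of the screen:

```typescript
// Convert a real world position to a draw (screen) position, given the camera.
function worldToScreen(
  worldX: number, worldY: number,
  camX: number, camY: number,
  screenW: number, screenH: number
): { x: number; y: number } {
  return {
    // The camera maps to the screen centre; offsets are relative to it.
    x: screenW / 2 + (worldX - camX),
    // y is flipped: "up" in the world is "towards 0" on the screen.
    y: screenH / 2 - (worldY - camY),
  };
}
```

An object standing exactly at the camera position lands at the centre of the screen, matching the camera definition above.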
Given an object of size \(S=(S_x, S_y)\),
if we want to display it at the centre of the screen, the same
approach as above is used; however, half of the size must be subtracted / added to the final draw position
in order to display the centre of the object at the centre of the screen.
Suppose the final draw point has been calculated and we have \(I=(i_1, i_2)\).
Then, to display the object at the centre of the screen with respect to size \(S\):
\(I' = (i_1 + \frac{S_x}{2},\ i_2 - \frac{S_y}{2})\)
Here, the \(y\) coordinate is subtracted because the draw position lies in the fourth quadrant of the cartesian plane only.
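A small sketch of this centering correction. Note the sign convention here assumes plain top-left screen coordinates (y growing downward, positive), where both half-sizes are subtracted from the top-left draw anchor; the text's signs differ because it keeps the draw plane in the fourth quadrant:

```typescript
// Shift a computed draw point by half the texture size so the texture's
// centre, not its top-left corner, lands on the target point.
function centreDrawPos(
  ix: number, iy: number,   // computed draw point (top-left screen coords)
  sx: number, sy: number    // texture size
): { x: number; y: number } {
  return { x: ix - sx / 2, y: iy - sy / 2 };
}
```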
Although performance and optimization of the client side are not the first priority, there
are some things that simply must be optimized to guarantee a smooth run.
Before an entity is drawn, a check is done on the client side to see whether the entity's rectangle (or some desired approximation shape) would collide with the camera viewport rectangle at all.
If so, the entity is drawn. Otherwise, the draw function is not called, saving a lot of performance.
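The check above is a standard axis-aligned rectangle overlap test; a sketch (names are illustrative):

```typescript
interface Rect { x: number; y: number; w: number; h: number; }

// True when two axis-aligned rectangles overlap.
function intersects(a: Rect, b: Rect): boolean {
  return a.x < b.x + b.w && a.x + a.w > b.x &&
         a.y < b.y + b.h && a.y + a.h > b.y;
}

// Cull an entity that lies entirely outside the camera viewport.
function shouldDraw(entity: Rect, viewport: Rect): boolean {
  return intersects(entity, viewport);
}
```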
This contains notes about Blender that I used frequently;
it serves as a nice lookup.
|Action|Shortcut|
|---|---|
|Connect 2 vertices with edge (or faces)|F|
|Parent object to another object|CTRL + P|
|Merge vertices|ALT + M|
|Change pivot point of object|CTRL + ALT + SHIFT + C|
|Clear parent|ALT + P|
|Cursor to selected|SHIFT + S|
|Toggle quad view|CTRL + ALT + Q|
|Add bone constraint with target (in pose mode)|CTRL + SHIFT + C|
|Scale to zero|S + (X/Y/Z) + 0|
|Copy mirrored bone pose|CTRL + SHIFT + V|
This section focuses more on how specific elements were drawn.
Walls are drawn using a tile approach, the same as floor tiles.
The goal is to make a wall see-through when some character is behind it, preventing
players from hiding behind walls.
This is solved purely on client side.
Every time a character's position changes (as a result of a message received from the server),
the system checks whether the character's VAI index would change. If so, the map is notified, and all relevant wall segments
are notified as well.
This way, those wall pieces are drawn at reduced opacity.
This can be a computationally heavy task; however, it runs on the client side, so it is not such a big deal.
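A hedged sketch of that update, assuming a wall segment is "relevant" when it sits on the character's VAI cell; the names (`WallSegment`, `HIDDEN_OPACITY`) and the relevance rule are illustrative assumptions, not the actual implementation:

```typescript
// ASSUMPTION: opacity value used for see-through walls.
const HIDDEN_OPACITY = 0.4;

interface WallSegment { vaiX: number; vaiY: number; opacity: number; }

// On a VAI index change, mark walls on the character's cell translucent
// and restore full opacity on all other segments.
function updateWallOpacity(walls: WallSegment[], charVaiX: number, charVaiY: number): void {
  for (const w of walls) {
    const behind = w.vaiX === charVaiX && w.vaiY === charVaiY;
    w.opacity = behind ? HIDDEN_OPACITY : 1;
  }
}
```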
Blend modes are discussed here.