A Little Question on FPS

Rendering optimization for web applications requires us to know how to keep FPS high.

A display can run at 60 Hz, 120 Hz, 144 Hz, or other rates (known as its refresh rate).

60 Hz means the display refreshes the image it shows 60 times a second. So we can put up at most 60 images each second (60 fps); any more won't be displayed. If we have fewer images, say 30 fps, then two refreshes will display the same frame.
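
The browser actually exposes this coupling: requestAnimationFrame fires once per display refresh. A minimal sketch for measuring it (browser-only; the number you see depends on your display):

```typescript
// Count how many frames the browser hands us per second.
// On a 60 Hz display this should log roughly 60, on a 144 Hz display roughly 144.
let frames = 0;
let lastReport = performance.now();

function tick(now: number): void {
  frames++;
  if (now - lastReport >= 1000) {
    console.log(`~${frames} fps`);
    frames = 0;
    lastReport = now;
  }
  requestAnimationFrame(tick);
}

requestAnimationFrame(tick);
```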

The question I have been wondering about is: what if I'm just looking at a static web page without any events triggered on it?

Is the GPU still producing 60 fps of the exact same image, or does it not produce any image at all?

If we take a look at Wikipedia's definition of a GPU:

A graphics processing unit (GPU), occasionally called visual processing unit (VPU), is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device.

I suppose the display refreshes its screen and shows the image from the frame buffer.

If no events are triggered, then the GPU does not need to alter the frame buffer.

What is the frame buffer anyway?

Well, go back to Wikipedia and we will find out that it is:

A framebuffer (frame buffer, or sometimes framestore) is a portion of RAM containing a bitmap that is used to refresh a video display from a memory buffer containing a complete frame of data.

It all makes sense now.
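
To make the idea concrete, here is a loose analogy in the browser, a Canvas backed by an ImageData buffer, which is just a bitmap in RAM like the definition describes. It is only an analogy, not the real framebuffer the display scans out, but it mirrors the "no events, no alteration" idea: we only touch the buffer and repaint when something changes.

```typescript
// A plain array of RGBA bytes standing in for a framebuffer.
const canvas = document.createElement("canvas");
canvas.width = 320;
canvas.height = 240;
document.body.appendChild(canvas);

const ctx = canvas.getContext("2d")!;
const buffer = ctx.createImageData(canvas.width, canvas.height);

function setPixel(x: number, y: number, r: number, g: number, b: number): void {
  const i = (y * canvas.width + x) * 4; // 4 bytes per pixel: R, G, B, A
  buffer.data[i] = r;
  buffer.data[i + 1] = g;
  buffer.data[i + 2] = b;
  buffer.data[i + 3] = 255; // fully opaque
}

// "Alter memory", then push the whole frame out in one go.
setPixel(10, 10, 255, 0, 0);
ctx.putImageData(buffer, 0, 0);
```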

Encoding in 250 words

Computers speak in 1s and 0s (ones and zeroes), commonly referred to as binary.

To produce a character like A, 3, or ø, a computer needs to know how to map each of those characters to binary. The same applies when the computer wants to interpret data from binary.

It may interpret the byte 01000001 as A, or perhaps B, depending on who set up the rules in the first place.

An encoding is basically a set of rules for mapping characters to binary, and vice versa.

Note that a sequence of binary digits can be converted into decimal (base 10) or hexadecimal (base 16). So keep in mind that the byte 01000001 is equivalent to 65 in decimal and to 0x41 in hexadecimal (the 0x prefix indicates hex).
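
A quick sketch of that same byte in all three notations, using nothing more than parseInt and toString with a radix:

```typescript
// The same byte, three notations.
const byte = parseInt("01000001", 2);           // binary string -> number
console.log(byte);                              // 65       (decimal)
console.log("0x" + byte.toString(16));          // 0x41     (hexadecimal)
console.log(byte.toString(2).padStart(8, "0")); // 01000001 (back to binary)
```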

ASCII

One of the earliest known encodings is ASCII/ANSI.

It maps decimal values from 0 to 127 to the Western alphabet and control codes (tab, escape, backspace, etc.).

Mapping numerical values from 0 to 127 only takes up 7 bits of space. ASCII does not specify values from 128 to 255, so it basically wastes a bit of every byte. And it does not define characters for the rest of the world.
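
You can poke at this mapping directly, since JavaScript/TypeScript exposes it through charCodeAt and String.fromCharCode (for characters in the ASCII range):

```typescript
console.log("A".charCodeAt(0));         // 65  -> the byte 01000001 from before
console.log(String.fromCharCode(65));   // "A"
console.log(String.fromCharCode(0x41)); // "A" again, written as hex
console.log("\t".charCodeAt(0));        // 9   -> a control code (tab)
```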

Unicode

Unicode isn't necessarily an encoding, but it does introduce an interesting idea: code points.

Every character in the world is mapped to the form U+WXYZ, where W, X, Y, and Z are hexadecimal digits, together able to hold numerical values 0-65535 (code points can actually go beyond that range).

Now, there are several encodings to map these code points to actual binary (a small sketch follows the list):

  • UTF-7

  • UTF-8

  • UTF-16

  • UTF-32
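
For a taste of what these encodings do with a single code point, here is a small browser sketch. TextEncoder always produces UTF-8 bytes, while JavaScript strings are internally made of UTF-16 code units:

```typescript
const snowman = "\u2603"; // U+2603 SNOWMAN

console.log(snowman.codePointAt(0)?.toString(16)); // "2603" -> the code point
console.log(new TextEncoder().encode(snowman));    // Uint8Array [226, 152, 131] -> 3 bytes in UTF-8
console.log(snowman.charCodeAt(0).toString(16));   // "2603" -> a single UTF-16 code unit
```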
