A Little Question on FPS

Rendering optimization for web applications requires us to know how to keep the FPS high.

A display can have a 60 Hz, 120 Hz, 144 Hz, or other refresh rate.

60 Hz means that the display refreshes the image it shows 60 times a second. So we can put up 60 images each second (60 fps); more images won't be displayed. If we have fewer images, say 30 fps, then two refreshes will display the same frame.

The question I have been wondering about is: what if I'm just looking at a static web page without any events triggered on it?

Is the GPU still producing 60 fps of the exact same image, or does it perhaps not produce any image at all?

If we take a look at Wikipedia's definition of a GPU:

A graphics processing unit (GPU), occasionally called visual processing unit (VPU), is a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device.

I suppose the display refreshes its screen and shows the image from the frame buffer.

If no events are triggered, then the GPU does not need to alter the frame buffer.

What is the frame buffer anyway?

Well, go back to Wikipedia and we will find out that it is:

A framebuffer (frame buffer, or sometimes framestore) is a portion of RAM containing a bitmap that is used to refresh a video display from a memory buffer containing a complete frame of data.

It all makes sense now.

Another Browser Rendering Nitty Gritty

I did a post before on how browser rendering works, but I feel it was not enough.

There are certainly gaps in my head I wanted to fill, and I came across a conference talk which I found really informative. Not the best presentation, but it's decent.

Check it out yourself right here:

Here are several key points from the video, in case you prefer not to watch it.

  • When the HTML parser reaches a <script> tag, it halts parsing, fetches the script, and executes it before continuing to parse.

  • When a <script> tag is found and parsing is halted, the browser will spin up a separate thread to scan ahead for external images and CSS to fetch in parallel. It probably also looks for other <script> tags to download, but I'm not sure whether it executes them.

  • On the initial render, the parse tree needs to be finished before the browser proceeds to build the DOM, and all DOM nodes and the CSSOM need to be completed before being combined into the Render Tree, and so on.

  • JavaScript can interfere with the DOM and CSSOM on initial render.

  • For subsequent renders, the browser sets a regular interval at which it will reflow and repaint. Every time we mutate the DOM, it still immediately alters the Render Tree, but it does not immediately proceed to the next stage (layout). Altered nodes in the Render Tree are marked dirty; at each interval, the browser traverses the tree and collects all dirty nodes, so multiple dirty nodes can be reflowed and repainted in a single pass.

  • Immediate reflow occurs on several actions, such as changing the font size, resizing the browser, and accessing certain properties like node.offsetHeight!

Some Performance Insights
  • Do all reads in one go, then all writes in one go.
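
As a sketch of that rule (the box elements and the +10px resize are made up for illustration), compare a loop that interleaves reads and writes with one that batches them:

```javascript
// Layout thrashing: each iteration reads offsetHeight (forcing a reflow)
// right after the previous write invalidated the layout.
function resizeAllThrashing(boxes) {
  for (const box of boxes) {
    const height = box.offsetHeight;       // read (forces a reflow)
    box.style.height = height + 10 + 'px'; // write (invalidates layout)
  }
}

// Batched version: all reads in one go, then all writes in one go,
// so the browser reflows at most once for the whole loop.
function resizeAllBatched(boxes) {
  const heights = boxes.map((box) => box.offsetHeight); // reads
  boxes.forEach((box, i) => {                           // writes
    box.style.height = heights[i] + 10 + 'px';
  });
}
```

Both functions end with the same styles; only the number of forced reflows differs.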

JavaScript Module Pattern

Modules are needed in JavaScript to achieve modularity, which is to keep unrelated pieces of code independent from one another in a loosely coupled way (changes in one have little to no effect on another).

These module patterns were popular back when module bundlers like Webpack were not yet a thing.

Modularity is considered by many to be a hallmark of well-structured code.

It's the idea of separation of concerns, revolving around the principle of high cohesion and low coupling.

A module provides encapsulation and avoids namespace collisions. It acts as a replacement for classes.

Encapsulation is something that JavaScript doesn't natively have. To work around this, we use something called a closure, which provides private variables/methods.
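
For example, here is a minimal sketch of a closure guarding a private variable (makeCounter is a made-up name):

```javascript
function makeCounter() {
  let count = 0; // private: only reachable through the closures below
  return {
    increment() { count += 1; },
    value() { return count; }
  };
}

const counter = makeCounter();
counter.increment();
counter.increment();
console.log(counter.value()); // 2
console.log(counter.count);   // undefined, no direct access
```

The returned methods close over count, so the variable survives between calls but cannot be touched from the outside.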

To sum up, modules:

  • provide encapsulation

  • support modularity

  • avoid namespace collisions

The module design pattern uses something referred to as an IIFE (immediately invoked function expression); it's simply a function wrapped inside parentheses and invoked on the spot.

let iife = (function () {  
  ...
})();

or,

let iife = (function () {  
  ...
}());

The parentheses tell the JavaScript engine that what's inside has to be an expression, so when it encounters a function there, it knows it has to be a function expression, not a function declaration.

There is also an important JavaScript behavior you need to know about, known as implied globals.

In a non-global execution context, say something like this:

function foo () {  
  undeclaredVar = 3;
}
foo();  
console.log(undeclaredVar); // outputs 3  

Although undeclaredVar has not been declared beforehand, this still works!

The JavaScript engine will check whether undeclaredVar is available in the current execution context; if not, it will check the parent execution context, and so on. If the search is fruitless, the engine treats it as a global variable, freshly created with the assigned value.

Bear in mind that implied globals will not work at all in strict mode!

'use strict';
a = 3; // throws ReferenceError  
Revealing Module Pattern

This is the basic module pattern you'll use and often see when creating a module. You've probably seen it in other types of patterns (not just module patterns), such as the factory pattern.

let awesomeModule = (function () {  
  // private variable goes here
  return {
    // public variable goes here
  };
})()
Privileged Members

Privileged members are used to access private members indirectly, like setters and getters. An advantage is being able to filter and protect private members from being accidentally set to an unexpected value, or from plain mistakes.

let awesomeModule = (function () {  
  let numFoods = 10;
  return {
    setFood(numFood) {
      if (typeof numFood !== 'number') {
        throw Error('setFood requires a number parameter');
      }
      numFoods = numFood; // actually update the private member
    },
    getFood() {
      return numFoods;
    }
  };
})()
Augmentation

This is a module pattern used to add additional properties to our current module. It comes in two distinct variants:

  • Loose augmentation

  • Tight augmentation

Tight augmentation requires modules to load synchronously (in order), while loose augmentation does not.

Tight Augmentation:
var module = (function (module) {  
  module.newProp = 'bar';
  module.newMethod = () => {};
  return module;
})(module) // module must already exist at this point

So it is no longer creating a module, but adding to an existing one.

Notice that it passes module in, so module needs to have already been initialized (otherwise it will be undefined).

I don't really know the importance of returning and reassigning the module in this pattern. I guess its purpose is general consistency.

Loose Augmentation:
var module = (function (module) {  
  module.newProp = 'bar';
  module.newMethod = () => {};
  return module;
})(module || {})

Loose augmentation does not require you to provide an existing module.

Now in this case, it is important to return the module and reassign it, in case the module did not exist in the first place. Otherwise, no augmentation would ever take place.

As for the expression module || {}: it returns {} when module is falsy. You can think of {} as a default.

The only downside is the inability to safely override methods, since you can't rely on load order.
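
To see loose augmentation in action, here is a sketch where two independent blocks each augment the same module; either block could run first (myModule and its members are made-up names):

```javascript
// First "file": augments myModule if it exists, or creates it.
var myModule = (function (mod) {
  mod.greet = () => 'hello';
  return mod;
})(myModule || {});

// Second "file": same pattern; order relative to the first doesn't matter.
var myModule = (function (mod) {
  mod.bye = () => 'bye';
  return mod;
})(myModule || {});

console.log(myModule.greet()); // 'hello'
console.log(myModule.bye());   // 'bye'
```

var is used deliberately: redeclaring with var is legal, and hoisting makes myModule simply undefined (hence {}) on whichever block happens to run first.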

Sub-Module

Nothing special here: creating a sub-module is as simple as creating a module:

module.subModule = (() => {  
  return {};
})()

Essential Unit Testing and TDD

I have used unit testing within TDD (test-driven development) for quite some time, yet I did not really understand why we were doing it.

Everybody tells me it's the best practice, so I did it without hesitation.

I thought I was doing TDD, although I wrote my tests last, after the code was written.

After doing my own research on unit testing and test-driven development, I felt enlightened.

Unit Testing

Unit tests usually work at the scope of a class, module, or component. Basically, they test the smallest applicable unit in your software codebase.

Unit testing is not about finding bugs; it is not merely a test at all!

The goal of writing a unit test is to help you design your software component.

You have to have in mind what your code is supposed to do before you even start jabbing away at the keyboard. Without that, you can't even start unit testing. Too often we implement a bunch of code without the right design in mind, just trying to figure it out along the way, and therefore waste a great deal of time.

When you find your code difficult to test, it's a sign that you've got smelly code:

  • Too many dependencies, so you need to mock each one of them, which is painful. It tells you that your code is too tightly coupled.

  • The unit test is way too long because your unit does multiple things (low cohesion).

  • Common functionality is found across multiple different units, violating DRY (low cohesion).

Unit testing also lets you document the cases a unit has been tested against. It also forces you to think, and probably to find edge cases where a bug might hide.

It can also act as a regression test, because simply changing your codebase has the potential to break your tests, requiring you to revisit them and probably catching bugs along the way.

Unit Testing Within TDD

Unit testing matches well with test-driven development, which forces you to write the simplest code necessary for the feature to work (the test to pass).

By doing this, you are avoiding unnecessary code, otherwise known as YAGNI (you ain't gonna need it).

A typical workflow would be a "red, green, refactor".

You write a unit test for the feature you are about to implement. Run the test and watch it fail (red).

Then you write the simplest code that makes the test pass (green).

Afterward, refactor what's necessary.

Make it work, then make it right.

Selective Unit Testing and Techniques

Not all units are meant to be unit tested. Some yield great benefits, while some are just not worth the cost.

The cost of unit testing is the time needed to write the unit test and to maintain it.

If the ratio between cost and benefit is too high, it is better to avoid it.

Steve Sanderson said that the benefit of unit testing correlates with the non-obviousness of the code under test.

In other words: the abstraction level of the code.

If the code is likely to have a high abstraction level, further design assistance is generally needed.

If you can't figure out what the code is doing at a single glance, further verification through unit testing is beneficial, because it would be daunting to check all possible cases manually.

He also argues that the cost correlates with the number of dependencies your code has: more dependencies mean more time spent mocking them and adapting to their future changes.

To sum it all up:

  • Complex code with few dependencies: cheap and highly beneficial to unit test.

  • Complex code with many dependencies: you will need to refactor the code into two parts: one that handles the complex logic and another that glues together (interacts with) the many dependencies. Unit test only the part that contains the complex logic.

  • Trivial code with many dependencies: not worth unit testing.

  • Trivial code with few dependencies: not worth unit testing.

As for complex code with many dependencies, the technique is to pull the dependencies out and form an entirely new unit.

Use that new unit to interact with all the dependencies, and pass only what is needed from the dependencies to the unit that requires it.

What this means is that when a unit says it uses dateClass.getTime(), it does not actually need dateClass; it needs only the time. So just pass the result of getTime() from the unit that glues the dependencies together to the unit that handles the complex logic.
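
A sketch of that technique (all names here are hypothetical): the complex-logic unit takes plain values, while a thin glue unit talks to the dependency:

```javascript
// Complex-logic unit: no dependency on any date class, easy to unit test.
function isHappyHour(time) {
  return time >= 17 && time < 19;
}

// Glue unit: interacts with the dependency and passes along only the value.
function checkHappyHour(dateClass) {
  return isHappyHour(dateClass.getTime());
}

// In a unit test we can call isHappyHour(18) directly, no mock needed.
console.log(isHappyHour(18)); // true
console.log(isHappyHour(12)); // false
```

Only the tiny glue function ever needs a mock, and the logic worth testing has zero dependencies.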

Clean Function

Over the past couple of days I have been learning how to write a clean function from Robert C. Martin, otherwise known as Uncle Bob.

The whole idea, in my opinion, always comes back to how expressive the code needs to be. If your code is readable and easy to understand, then you have written clean code.

Sometimes we as programmers want to show our co-workers and the readers of our code how smart we are. We tend to do things in fancy ways, making everything as much of a one-liner as possible. If they can't comprehend what we have written, well, they are just plain dumb!

Uncle Bob says that this is the work of an amateur. Professionals work the other way around.

Professionals want their code to be expressive, understandable, and readable. It needs to be as concise and simple as possible!

Why? Because we work as a team, and we want to ship fast and avoid bugs! If you are the only person responsible for a part of the code you've written (because nobody understands it except you), then what happens if you leave?

Got the idea?

Let's dive in.

Naming

Naming functions is critical to your program.

A name should be a verb, and it needs to clearly explain the intention of the function you are writing, so fellow programmers do not need to dive inside your function to see what it does; a glance at the name itself should be enough!

Do not hide implementation (side effects) that goes beyond the function name. Don't say that your function does A when it turns out it also does B.

Small

The first rule he lays down is that a function has to be small! Functions should be two, three, or four lines long.

Do one thing

A function has to have one responsibility only! It needs to focus on one specific thing and abstract other implementation details into separate functions.

Extract big functions into smaller chunks, but do not over-extract when a restatement is the only effect it yields.

Sometimes it's hard to do it all from the beginning. The rule of thumb is: write your whole implementation first, then refactor it when you are finished.

First make it work, then make it right!

A big function polished into small abstracted steps can seem to do more than one thing. It might look as follows:

function foo() {  
  doA();
  doB();
  doC();
}

These little abstractions make the function look as if it does a lot.

So does it do one thing or three things?

One level of abstraction per function

Well, Uncle Bob states that if the statements within our function are all at the same level of abstraction, the function definitely does "one thing".

What does abstraction mean, anyway?

Well, each function you extract from the big function is an abstraction, because you are hiding its implementation details by wrapping them in a new function.

Using the function does not require you to know how it does what it does, only what it does (from the name).

So, what is level of abstraction?

appendToArray() and parseHTML() are at two different levels of abstraction. appendToArray() has a simpler implementation, therefore a lower abstraction level; you know exactly how it does what it does. Meanwhile, parseHTML() is much more complex; more magic is going on, and you are not entirely sure how it parses HTML.

Making sure your function stays at a single level of abstraction is key. One way to gauge the level of abstraction is through the LOC (lines of code) it has.

Another way: a function and all its extracted methods should read well, like a paragraph.

Here's the snippet from Clean Code book by Uncle Bob.

To include the setups and teardown, we include setups, then we include the test page content, and then we include the teardown.

To include the setups, we include the suite setup if this is a suite, then we include the regular setup.

To include the suite setup, we search the parent hierarchy for the “SuiteSetUp” page and add an include statement with the path of that page.

To search the parent. . .

Arguments

The best number of arguments for a function is zero. More arguments mean more confusion: you need to remember what to pass and in what order, and there's more testing to do.

Fewer is better.

Except if what you are modeling has a natural ordering and requirements, for instance a coordinate: setCoordinate(x, y).

Exceptions

Exceptions are beneficial since they are able to simplify error handling.

But handling errors is itself one specific thing that a function does.

Uncle Bob wants you to extract the body of your try/catch block, so that error handling is all the function does:

function foo() {  
  try {
    doBar();
  } catch (err) {
    logError(err);
  }
}