yet another developer blog

Wednesday, October 12, 2016

React State and Props compared

Not much blabla today, just a table that compares the differences between React's props and state.

| | Props | State |
| --- | --- | --- |
| Owner | parent component | component itself |
| Accessibility | public, i.e. inside and outside the component | private, only available inside the component |
| Changeable by parent | yes | no |
| Changeable by component itself | no | yes |
| Change mechanism | parent changes prop | component calls this.setState(newState) |
| Change triggers re-rendering of component | yes | yes |
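The rules from the table can be mimicked in a small plain-JavaScript sketch. Note that this is not React's actual implementation, just a toy model of the contract; the `Component` class and its members are made up for illustration:

```javascript
// Toy model of the props/state contract - NOT real React
class Component {
  constructor(props) {
    this.props = props;        // owned and passed in by the parent; read-only in here
    this.state = { count: 0 }; // owned by the component itself; private to it
  }

  setState(newState) {         // the component's only change mechanism for its state
    this.state = Object.assign({}, this.state, newState);
    this.render();             // both prop and state changes trigger re-rendering
  }

  render() {
    return `${this.props.label}: ${this.state.count}`;
  }
}

const c = new Component({ label: 'clicks' }); // the parent owns and supplies the props
c.setState({ count: 1 });
console.log(c.render()); // "clicks: 1"
```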
Thursday, April 14, 2016
Why I prefer functional over classical and prototypal inheritance
There are multiple ways in which inheritance can be implemented in JavaScript. In fact so many that the choice easily gets difficult. I am going to quickly introduce them with a code example each and then explain why my recommendation is to use functional inheritance in the vast majority of cases.
Classical inheritance
The intention of this pattern (sometimes also referred to as pseudo-classical inheritance) is to hide away the prototypal nature of JavaScript and make it look as if JavaScript knew about the concept of classes, so that developers coming from languages like Java or C# will find the syntax quite familiar.
Look at this simplified example:
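The original listing did not survive, so here is a reconstruction of what the pattern typically looks like (the Animal/Wolf names are illustrative):

```javascript
// Pseudo-classical inheritance: constructor functions plus manual prototype wiring
function Animal(name) {
  this.name = name;
}
Animal.prototype.describe = function () {
  return 'I am ' + this.name;
};

function Wolf(name) {
  Animal.call(this, name); // the "super" constructor call
}
Wolf.prototype = Object.create(Animal.prototype); // hook up the prototype chain
Wolf.prototype.constructor = Wolf;                // repair the constructor property

var wolf = new Wolf('Grey');
console.log(wolf.describe());        // "I am Grey"
console.log(wolf instanceof Animal); // true
```

Forgetting `new` when calling a constructor, or forgetting to repair the `constructor` property, are typical examples of the gotchas this pattern brings along.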
There are quite a few gotchas with this pattern and its variations (for details see, for example, the book 'JavaScript Patterns', chapter 'Code reuse patterns'). Another general problem is that classical inheritance hides away the "real", i.e. prototypal, nature of JavaScript. For these reasons I personally try to avoid it.
Prototypal inheritance
This pattern is based on the Object.create method, which was introduced with ES5, and is a more "natural" choice than classical inheritance.
One common gotcha (that also is valid for classical inheritance) comes into play when you have complex objects in the parent:
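The listing is missing here as well; this is a reconstruction based on the identifiers used in the explanation that follows:

```javascript
// Base object with a complex (nested) property
var wolf = {
  parts: { legs: 4, tail: 1 }
};

var schwolf = Object.create(wolf);
var otherWolf = Object.create(wolf);

schwolf.parts.legs = 1;

console.log(otherWolf.parts.legs); // 1 - the change shows up in ALL instances!
```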
The reason for the last line's result lies in the way JavaScript gets and sets properties.
When getting a property, JavaScript traverses the entire prototype chain looking for it and returns the first occurrence.
Setting values is different: JavaScript always creates the property on the most derived object itself - but only if you assign to the object directly, not to something nested inside a complex object.
What happens here with
schwolf.parts.legs = 1
is that - according to the getting rule - the parts object from the base object is returned. Because parts is a complex object, its legs property is set to 1. And why does this affect all instances? This is the big difference between prototypal inheritance and Java/C#: prototypes are referenced, whereas in Java/C# the base class becomes part of each object and shares nothing with other instances (except for members explicitly marked as static).
Functional inheritance
In his book 'JavaScript: The Good Parts' Douglas Crockford advocates an approach that he calls 'functional inheritance'. Let's look at an example (btw the usage of ES6 arrow functions, template strings and Object.assign is just for keeping the example concise):
What are the advantages of this approach?
- easy to grasp - note that we only use functions and no (relatively complicated) prototypes or constructors
- no gotchas
- encapsulation (private members), see nameLength variable in person function
- high performance after object creation
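Here, too, the original listing is gone; the following is a minimal reconstruction of the pattern, showing the nameLength encapsulation mentioned above (the person/employee names are illustrative):

```javascript
// Functional inheritance: plain factory functions, no constructors or prototypes
const person = (name) => {
  const nameLength = name.length; // private - only reachable through the closure

  return {
    describe: () => `${name} (${nameLength} characters)`
  };
};

const employee = (name, company) => {
  const base = person(name);

  // extend the base object; Object.assign keeps the example concise
  return Object.assign({}, base, {
    describeJob: () => `${name} works at ${company}`
  });
};

const emp = employee('Ada', 'Acme');
console.log(emp.describe());    // "Ada (3 characters)"
console.log(emp.describeJob()); // "Ada works at Acme"
```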
There are unfortunately also some bad news:
- low performance when creating many objects
- less dynamic than prototypal inheritance (the prototype cannot be augmented at any time, as it can be with prototypal inheritance)
- instanceof checks are not possible
But still, in my opinion the advantages of functional inheritance outweigh the disadvantages, especially because in the vast majority of real-life use cases the mentioned drawbacks are not really problematic.
Finally a word of caution: do not overuse inheritance; instead adhere to one of the most important OO principles and prefer composition over inheritance!
Friday, April 1, 2016
Is a JavaScript function which takes a callback as a parameter automatically async?
As JavaScript programmers we know that with ES6 promises, asynchronous programming has been built into JavaScript natively. Using promises guarantees that the registered callbacks are performed asynchronously.
But were you aware that before ES6, the language itself had no support for sync/async programming?
"That's not true!" you will probably say, "I have been using setTimeout and Ajax long before ES6! And you tell me that they are not async?!"
Well, the mentioned examples are clearly examples of async functionality, but it is not functionality coming from the JavaScript language! Instead, it comes from the environment in which JavaScript is executed (browser, nodejs...). The language itself simply had no built-in async constructs.
So how do setTimeout and friends work, then?
On a low level this works because hardware interrupts signal events to the operating system which in turn then passes them to the JavaScript engine.
Takeaway: asynchronicity is only achieved by using the environment's async functionality - or by leveraging promises, either natively via ES6 promises or via promise libraries like Q.
Now let's go back to the original question from the title of this blog post: is every higher-order function (a function that takes another function as an argument) async?
With the background above in mind it should be obvious that the answer is no, because not all such functions use async functionality from the environment.
Need examples? Check out Array.sort(compareFx) or String.replace(stringToReplace, replaceFx)...
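A quick way to see the difference: both snippets below pass a callback, but only the second one is async, because only setTimeout comes from the environment:

```javascript
// Synchronous higher-order function: the compare callback runs during sort()
const arr = [3, 1, 2];
arr.sort((a, b) => a - b);
console.log(arr); // [1, 2, 3] - already sorted when this line executes

// Asynchronous: setTimeout's callback runs only after the current
// call stack is empty
setTimeout(() => console.log('second'), 0);
console.log('first'); // logged before 'second'
```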
And how can you find out whether a function that you consume is async or not? There are only two possibilities: from the documentation or by digging into the code.
PS: Promises are not the end of the evolution of async programming in JavaScript. There is currently a stage 3 proposal for introducing async / await keywords into the language (btw: this is very similar to the C# syntax).
Monday, September 21, 2015
Configuring Synology NAS for my home network
After plugging my DS215j into my home network, some fine tuning had to be done. The following list is my personal protocol in case I have to do this once again - perhaps it is helpful for other Synology users as well.
1. DSM 5.2 more or less forces the user to put multimedia files into auto-generated folders (video/photo/music). However, on my clients I prefer one share "multimedia" over mapping three network drives. I solved this by setting up a shared folder "multimedia" containing 3 symbolic links video, photo and music that point to the corresponding auto-generated folders (here is how to create permanent symbolic links on the Synology - note that you need some knowledge of how vi works).
2. User accounts for my wife, the children and me are to be set up. Note that interaction between Synology and Windows / Mac without a directory server works best if credentials are kept synchronous both on clients and the NAS.
3. The three folders mentioned above had to be given appropriate permissions: read-write for admin group and my own user, my wife and the children are only allowed to read ;-)
4. On each of the clients, I mapped 2 network drives: the multimedia share and a personal home.
5. Configured the standby behavior of the HDDs.
6. Set up Cloud Station for my working directories both on the Synology and my clients. I tried out Cloud Sync with Dropbox but moved away from that again because of privacy concerns ;-) and the fact that it interferes with the standby times of the HDDs.
7. Configured automatic shutdown at night.
8. For security reasons I changed SSH to use a different port than the default port 22.
9. I assigned the machine a static IP in my router configuration.
10. Because I want a publicly accessible node server I configured DDNS (at first I tried to do this within my router, but changes of the public IP were not propagated to the DDNS provider).
11. To start the node server automatically I added "node [path to js starting http server]" to rc.local.
12. Configured port forwarding in the router.
13. Note that Synology's multimedia apps Audio, Video and PhotoStation are only necessary when you want to access your multimedia through the browser - which is not the case for me, so I did not activate them.
Tuesday, August 25, 2015
Node / V8 versions and ES6 features on my Synology
I set up a Synology NAS in my home network last week. One of the reasons was that I want to run a node server, and Synology offers one out of the box with DSM 5.2.
Because I want to check out / make use of ES6 features I had to find out which versions of Node and V8 come with the package. This is how I did it on the SSH shell:
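The original shell listing did not survive, but the same information can be obtained from node itself, e.g. by pasting this into the node REPL on the SSH shell (the output of course depends on the installed package):

```javascript
// Prints the versions of node and the bundled V8 engine
console.log('node version:', process.version);
console.log('V8 version:  ', process.versions.v8);
```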
Fortunately there is a discussion on StackOverflow which answers which features node has enabled by default and which can optionally be enabled using the --harmony flag.
Sunday, May 3, 2015
Key takeaways from working with async/await in C#
There is a lot of information available around the web about asynchronous programming with async/await in C#. In this post I am going to explain my personal key takeaways.
In general, the pattern is all about avoiding threads sitting idle, doing nothing but waiting for some time-consuming operation to finish (you also have better things to do than wait in front of your door for the pizza you ordered, right?).
Why is this needed? There are two scenarios: in user interfaces, the "free" UI thread can be used to react to user actions while the asynchronous operation takes place - things like moving the window around, resizing, clicking buttons and so on. On the server side (e.g. in an ASP.NET web application), the "free" threads can be used to process other requests, i.e. the application's scalability improves.
For further discussion let's have a look at the following example code from MSDN:
1: async Task<int> AccessTheWebAsync()
2: {
3:     HttpClient client = new HttpClient();
4:     Task<string> getStringTask = client.GetStringAsync("http://msdn.microsoft.com");
5:     DoIndependentWork();
6:     string urlContents = await getStringTask;
7:     return urlContents.Length;
8: }
- async and await appear together: async is part of the method signature, await is used at least once within the method (Lines 1 and 6)
- An async method returns Task<T>, Task or void (in rare cases) (Line 1)
- Tasks can be considered as "containers" of work that has to be done.
- There is a convention that says async method names should be postfixed with "...Async" (Line 1)
- When an async method is called, the work within this method is kicked off but usually not finished, hence the return value is not the final result (string in this case) but the task which encapsulates the work (Line 4).
- You can do other stuff while the async task is being performed (Line 5)
- When our code hits await, it either continues processing right away if the result is already available, or - if the result is not yet available - passes control back to the caller for now. But await implies a promise that once the result is available, the program will jump back and continue processing it (Line 6).
- The naming of "await" is a bit misleading. Think of it as "continue after task has been finished", i.e. in the above example: "continue after getStringTask has been finished" (Line 6)
- For an async method, internally a state machine is created by the compiler that does all the heavy lifting.
- Note that the resulting datatype is a string and not a task any more, i.e. await automatically unwraps the generic type parameter from the Task (Line 6).
- async/await often propagate up the complete call stack, i.e. all the methods in the stack become async
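For JavaScript readers: the same "kick off, keep working, then continue with the result" flow can be sketched with promises. The getStringAsync function below is a made-up stand-in for a real download, not an existing API:

```javascript
// Promise-based sketch of AccessTheWebAsync's control flow
const getStringAsync = (url) =>
  new Promise((resolve) =>
    setTimeout(() => resolve('<contents of ' + url + '>'), 10));

function accessTheWeb() {
  const getStringTask = getStringAsync('http://msdn.microsoft.com'); // work starts now
  doIndependentWork();                       // runs while the "download" is pending
  return getStringTask.then((urlContents) => urlContents.length); // continue later
}

function doIndependentWork() {
  console.log('doing independent work...');
}

accessTheWeb().then((len) => console.log('length:', len));
```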
An interesting source of information you might want to check out when working with asynchrony in C# is the article "Best Practices in Asynchronous Programming" from MSDN Magazine.
Monday, March 16, 2015
Automating client side testing with Karma
Google says: "The main goal for Karma is to bring a productive testing environment to developers."
I recently found a nice answer on stackoverflow explaining how Karma is doing this.
This inspired me to create a visualization on this topic, here it is:
This is how it works:
1. The Karma HTTP server is launched. It serves a test-framework-specific HTML file (test.html) which references both the application-specific JavaScript files (app.js) and the tests (specs.js).
2. A browser is launched which requests test.html.
3. The Karma HTTP server serves test.html and the referenced JavaScript files.
4. The tests are run within the browser.
5. The results are submitted to the Karma HTTP server.
6. Based on the configured loggers, the results are rendered to the console, a text file or other output formats. The output can be checked manually or processed automatically.
Karma is highly configurable, the most important configurations are:
- The list of files to load in the browser (app.js)
- The testing framework which is used (e.g. jasmine) has to be configured and appropriate plugins have to be loaded - note that this also determines the HTML page (test.html) because it is framework-specific
- The list of browsers (can be a headless one like phantom.js, too) to launch and run tests in
- The list of log appenders (console, textfile...) to be used
Note that 2., 3. and 4. support a plugin mechanism, i.e. Karma will load appropriate plugins. Such plugins are available for the most common scenarios, for more exotic scenarios you could write them on your own.
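A minimal karma.conf.js covering the four configuration points above might look like this (file names and browser choice are just examples):

```javascript
// karma.conf.js - minimal sketch, names are illustrative
module.exports = function (config) {
  config.set({
    files: ['app.js', 'specs.js'],  // loaded into the generated test page
    frameworks: ['jasmine'],        // determines the framework plugin and test.html
    browsers: ['PhantomJS'],        // headless; could also be 'Chrome', 'Firefox', ...
    reporters: ['progress'],        // console logger; e.g. 'junit' for file output
    singleRun: true                 // run once and exit
  });
};
```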