Monday, September 21, 2015

Configuring Synology NAS for my home network

After plugging my DS215j into my home network, some fine-tuning had to be done. The following list is my personal protocol in case I have to do this again; perhaps it is helpful for other Synology users as well.

1. DSM 5.2 more or less forces the user to put multimedia files into auto-generated folders (video/photo/music). However, on my clients I prefer one share "multimedia" over mapping three network drives. I solved this by setting up a shared folder "multimedia" containing three symbolic links (video, photo and music) that point to the corresponding auto-generated folders. (Here is how to create permanent symbolic links on the Synology; note that you need some knowledge of how vi works.)
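The link creation itself boils down to three ln -s calls over SSH. A minimal sketch, using a temporary directory in place of the real /volume1 shares:

```shell
# Temporary directory standing in for the volume root (/volume1 on the real NAS)
VOL="$(mktemp -d)"

# Stand-ins for the auto-generated folders and the new shared folder
mkdir -p "$VOL/video" "$VOL/photo" "$VOL/music" "$VOL/multimedia"

# One symbolic link per media type inside "multimedia"
for d in video photo music; do
  ln -s "$VOL/$d" "$VOL/multimedia/$d"
done

ls -l "$VOL/multimedia"
```

On the DiskStation the same three ln -s lines simply run against the real shared-folder paths.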

2. User accounts for my wife, the children and me had to be set up. Note that interaction between Synology and Windows / Mac clients without a directory server works best if credentials are kept in sync on both the clients and the NAS.

3. The three folders mentioned above had to be given appropriate permissions: read-write for the admin group and my own user; my wife and the children are only allowed to read ;-)

4. On each of the clients, I mapped two network drives: the multimedia share and a personal home folder.

5. Configured the standby behavior of the HDDs.

6. Set up Cloud Station for my working directories both on the Synology and my clients. I tried out Cloud Sync with Dropbox but moved away from that again because of privacy concerns ;-) and because it interfered with the standby times of the HDDs.

7. Configured automatic shutdown at night.

8. For security reasons I changed SSH to use a port other than the default port 22.
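On DSM this means editing the SSH daemon configuration over SSH (the port number below is only an example, not the one I actually use):

```shell
# /etc/ssh/sshd_config on the DiskStation (edit via vi, then restart the
# SSH service from DSM's control panel). Change the Port line, e.g.:
#
#   # Port 22
#   Port 2222
```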

9. I assigned the machine a static IP in my router configuration.

10. Because I want a publicly accessible node server, I configured DDNS (at first I tried to do this within my router, but changes of the public IP were not reported to the DDNS provider).

11. To start the node server automatically I added "node [path to js starting http server]" to rc.local.
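The appended line in rc.local looks roughly like this (the script path placeholder is kept from above; the trailing '&', so the boot sequence is not blocked, is my assumption rather than a detail from my notes):

```shell
# Appended to /etc/rc.local on the DiskStation:
#
#   node [path to js starting http server] &
```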

12. Port forwarding configured in the router.

13. Note that Synology's multimedia apps (Audio Station, Video Station and Photo Station) are only necessary if you want to access your multimedia through the browser, which is not the case for me, so I did not activate them.

Tuesday, August 25, 2015

Node / V8 versions and ES6 features on my Synology

I set up a Synology NAS in my home network last week. One of the reasons was that I want to run a node server, and Synology offers one out of the box with DSM 5.2.
Because I want to check out and make use of ES6 features, I had to find out which versions of Node and V8 come with the package. This is how I did it on the SSH shell:
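Assuming the Node.js package is installed and you are logged in over SSH, two commands are enough (process.versions is standard Node API):

```shell
# Print the bundled Node release...
node --version

# ...and the V8 engine version it embeds
node -e 'console.log(process.versions.v8)'
```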

Fortunately there is a discussion on Stack Overflow which answers which ES6 features node has enabled by default and which can optionally be enabled using the --harmony flag.

Sunday, May 3, 2015

Key takeaways from working with async/await in C#

There is a lot of information available around the web about asynchronous programming with async/await in C#. In this post I am going to explain my personal key takeaways.
In general, the pattern is all about avoiding threads sitting idle, doing nothing but waiting for some time-consuming operation to finish (you also have better things to do than wait at your front door for the pizza you ordered, right?).
Why is this needed? There are two scenarios: in user interfaces, the "free" UI thread can be used to react to user actions while the asynchronous operation takes place. User actions can be things like moving the window around, resizing, clicking buttons and so on. On the server side (e.g. in an ASP.NET web application), the "free" threads can be used to process other requests, i.e. the application's scalability will be improved.

For further discussion let's have a look at the following example code from MSDN:

1:  async Task<int> AccessTheWebAsync()  
2:  {   
3:    HttpClient client = new HttpClient();  
4:    Task<string> getStringTask = client.GetStringAsync("");  
5:    DoIndependentWork();  
6:    string urlContents = await getStringTask;  
7:    return urlContents.Length;  
8:  }  
  • Async and await appear together: async is part of the method signature, and await is used at least once within the method (Lines 1 and 6)
  • An async method returns Task<T>, Task or (in rare cases) void (Line 1)
  • Tasks can be considered "containers" of work that has to be done.
  • There is a convention that async method names should be suffixed with "...Async" (Line 1)
  • When an async method is called, the work within this method is kicked off but usually not yet finished; hence the return value is not the final result (a string in this case) but the task which encapsulates the work (Line 4).
  • You can do other work while the async task is being performed (Line 5)
  • When our code hits await, it either continues processing if the result is already available, or, if not, passes control back to the caller for now. But await implies a promise that once the result is available, execution will jump back and continue processing with the result (Line 6).
  • The naming of "await" is a bit misleading. Think of it as "continue after the task has finished", i.e. in the above example: "continue after getStringTask has finished" (Line 6)
  • For an async method, the compiler internally creates a state machine that does all the heavy lifting.
  • Note that the resulting datatype is a string and not a task any more, i.e. await automatically unwraps the generic type parameter from the Task (Line 6).
  • async/await often propagate up the complete call stack, i.e. all the methods in the stack are async
An interesting source of information you might want to check out when working with asynchrony in C# is the article "Best Practices in Asynchronous Programming" from MSDN Magazine.


Monday, March 16, 2015

Automating client side testing with Karma

Google says "The main goal for Karma is to bring a productive testing environment to developers."
I recently found a nice answer on Stack Overflow explaining how Karma does this.

This inspired me to create a visualization on this topic; here it is:

This is how it works:

1. The Karma HTTP server is launched. It serves a test-framework-specific html file (test.html) which references both the application-specific JavaScript files (app.js) and the tests (specs.js).

2. A browser is launched which requests test.html.

3. Karma HTTP server serves test.html and the referenced javascript files.

4. The tests are run within the client.

5. The results are submitted to the Karma HTTP server.

6. Based on the configured loggers, the results are rendered to the console, a text file or other output formats. The output can be checked manually or processed automatically.

Karma is highly configurable; the most important configuration options are:

  1. The list of files to load in the browser (app.js)
  2. The testing framework to be used (e.g. Jasmine) has to be configured and appropriate plugins have to be loaded - note that this also determines the html page (test.html) because it is framework specific
  3. The list of browsers (can be a headless one like phantom.js, too) to launch and run tests in
  4. The list of log appenders (console, textfile...) to be used
Note that 2., 3. and 4. support a plugin mechanism, i.e. Karma will load appropriate plugins. Such plugins are available for the most common scenarios; for more exotic ones you can write your own.
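As a sketch of points 2.-4. (the plugin package names are the standard Karma plugins on npm; this is not a verbatim recipe from my setup):

```shell
# Framework plugin (2.) and a browser launcher plugin (3.); reporters (4.)
# such as the console 'progress' reporter ship with Karma itself:
#
#   npm install --save-dev karma karma-jasmine karma-chrome-launcher
#
# Generate a karma.conf.js interactively (frameworks, files, browsers),
# then start the server:
#
#   karma init karma.conf.js
#   karma start karma.conf.js
```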

Monday, February 16, 2015

How TFS and git can play together

Having played around with Git for some days now, I see the following benefits over centralized systems (ordered by importance):
  • quick context switches between branches and quick setup of new branches
  • faster, due to less network traffic
  • using state-of-the-art technology, e.g. consistent style of working with open source community
  • no need to be online
  • inherent decentralized backup of repositories
  • 2-way commit (i.e. staging area) gives more fine-grained control over what to check in
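The first point can be seen in a throwaway repository; branch creation and switching are purely local operations (the repository and names below are made up for illustration):

```shell
# A throwaway repository to demonstrate local branch switching
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

# One initial commit so there is something to branch from
echo hello > a.txt
git add a.txt
git commit -qm "initial"

# Creating and switching a branch is instant, no server involved
git checkout -qb feature
git branch --show-current   # -> feature
```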
TFS and git have played together out of the box since version 2013 of Visual Studio / Team Foundation Server (let's call this TFS-git). The problem in projects that use TFS as a central repository (Team Foundation Version Control, "TFVC") is that making a hard switch to TFS-git at a certain point in time will most probably be too risky in enterprise scenarios. It makes more sense to start small and first gain some experience with DVCS to see if it fits your needs - how can this be achieved?
The solution is a hybrid approach: installing git-tfs (a two-way bridge between git and TFS; the platform-neutral alternative is Microsoft's git-tf) makes it possible to work with git locally while the remote repository stays TFVC, i.e. every developer on the project is free to choose between working with the bridge or using TFS "traditionally".
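The git-tfs workflow looks roughly like this (the server URL and project path are made up for illustration; clone, fetch and rcheckin are actual git-tfs commands):

```shell
# Clone a TFVC path into a local git repository (one-time, can take a while):
#
#   git tfs clone http://tfs:8080/tfs/DefaultCollection $/Project/Main
#
# Work locally with plain git commits, then sync with TFVC:
#
#   git tfs fetch       # pull new TFS changesets into the git history
#   git tfs rcheckin    # replay local git commits as TFS checkins
```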

For completeness, let me mention a third option: there may be situations where the only need for TFS functionality is the usage of TFS build processes (i.e. no work item management and other TFS functionality). For these scenarios, it is possible to use Visual Studio and git without TFVC, hosting the "central" repository e.g. at GitHub.

The following table shows the support for work item management, build process usage and gated checkin for each of the mentioned ways TFS and git can play together:

                                        work item mgmt.   build process usage   gated checkin
  TFS-git                               yes               yes                   no
  git-tfs (locally git, remote TFVC)    yes               yes                   no
  git with any non-TFVC (e.g. GitHub)   no                yes                   no