Monday, November 18, 2013

Repositories vs. DAOs

Having dealt with DDD for some time now, I asked myself the following question:

What is the difference between a Repository and a DAO (Data Access Object)?

The reason for the question was that in many repositories (in real-world projects and in tutorials on the web) I could not see much difference from good old DAOs.

Let's say we have some functionality that grabs customers out of your database - it seems to be fashionable to call something like this a "CustomerRepository", but I personally doubt that this is what DDD is aiming at.
I started searching on the web, and it was not so easy to find a precise answer, but finally I stumbled upon a very good blog post with illustrative examples. You should definitely check it out. I would like to summarize it by contrasting different aspects of repositories and DAOs:



Repository vs. DAO:
  • Repository: a "business interface" speaking the ubiquitous domain language. DAO: a "technical interface" forming the contract between the data source and the OO application.
  • Repository: close to the domain. DAO: close to the data source.
  • Repository: typically one per aggregate root. DAO: typically one per database table (or web service operation).
  • Repository: contains one or multiple DAOs. DAO: used by a repository.
  • Repository: interfaces live in the domain layer. DAO: interfaces live in the infrastructure layer.
  • Repository: parameters of interface methods are domain types. DAO: parameters of interface methods reflect the data source.
  • Repository: implementations live in the infrastructure layer (lots of technical plumbing). DAO: implementations live in the infrastructure layer (purely technical).
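To make the contrast concrete, here is a minimal sketch in C# (all type and member names are hypothetical and only serve as illustration):

 // Domain layer: the repository interface speaks the ubiquitous language
 public interface ICustomerRepository
 {
     Customer FindByCustomerNumber(CustomerNumber number);
     void Add(Customer customer);
 }

 // Infrastructure layer: the DAO interface reflects the data source
 public interface ICustomerDao
 {
     CustomerRow SelectByPrimaryKey(int id);
     void Insert(CustomerRow row);
 }

 public class Customer { /* rich domain entity */ }
 public class CustomerNumber { /* domain value object */ }
 public class CustomerRow { /* 1:1 image of the CUSTOMER table */ }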

Friday, October 11, 2013

The seven falsy values in JavaScript

JavaScript differs from other languages in how it evaluates whether a value is true or false.
The easiest approach is to remember the seven values that evaluate to "false".
Here they are (some of them are obvious, some are a bit surprising):
  • undefined
  • null
  • false
  • +0
  • -0
  • NaN
  • "" (empty string)
Everything else (e.g. objects, arrays, non-zero numbers, non-empty strings, regular expressions ...) evaluates to "true".
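Two truthy values that surprise many developers (easy to verify in any browser console):

 if ([]) { console.log("an empty array is truthy"); }    // logs
 if ("0") { console.log("the string '0' is truthy"); }   // logs - only the EMPTY string is falsy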

This behavior can be used in different ways, e.g. to shorten the check whether a jQuery selector returned any elements:

if ($('#foo').length > 0) {
    ...
}

can also be written as follows (because for an empty selection jQuery returns an object whose length is 0, and 0 evaluates to "false"):

if ($('#foo').length) {
    ...
}

Note that you can produce subtle bugs if you are not aware of the evaluation rules.
Example:

function printLineItem(article, price) {
    if (article && price) {
        console.log("Article: " + article + "Price: " + price);
    }
}

printLineItem("Amiga 500", 999);
printLineItem("5 bitcoins voucher for next purchase", 0);

Bad luck for the customer - since the price 0 evaluates to "false", he won't get the voucher :-(
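One possible fix - just a sketch, keeping the parameter names from above: check explicitly for the cases you want to exclude instead of relying on truthiness:

 function printLineItem(article, price) {
     // 0 is a valid price, so only reject non-numeric values explicitly
     if (article && typeof price === "number" && !isNaN(price)) {
         console.log("Article: " + article + ", Price: " + price);
     }
 }

 printLineItem("5 bitcoins voucher for next purchase", 0); // now the voucher is printed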

Friday, July 19, 2013

JavaScript type checks

JavaScript type checks are slightly complex. This, and the fact that your code should be consistent throughout your project(s), makes it worthwhile to include type checks in your JavaScript programming guidelines.
I am using the recommendations from the jQuery project.

They say:

String:             typeof object === "string"
Number:             typeof object === "number"
Boolean:            typeof object === "boolean"
Object:             typeof object === "object"
Plain Object:       jQuery.isPlainObject(object)
Function:           jQuery.isFunction(object)
Array:              jQuery.isArray(object)
Element:            object.nodeType
null:               object === null
null or undefined:  object == null
undefined:
    Global variables:  typeof variable === "undefined"
    Local variables:   variable === undefined
    Properties:        object.prop === undefined
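Two of these rules look redundant at first glance; a quick sketch of why the dedicated checks exist (the results follow from the language spec):

 typeof null           // "object" - that's why null needs its own check (object === null)
 typeof []             // "object" - typeof cannot distinguish arrays, hence jQuery.isArray
 typeof undeclaredVar  // "undefined" - typeof is safe even for variables that were never declared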

Thursday, March 7, 2013

The evolution of delegates and anonymous methods

A delegate is a type that safely encapsulates a method; see MSDN for a deep-dive.
I want to concentrate on showing how delegates evolved across the versions of the .NET Framework.

Let's look at .NET Framework 1.0 first:
A delegate declaration looks like this:

 public delegate double NetWageCalculation(Employee emp);  


Next, you need a method whose signature corresponds to the delegate declaration:

 public double GetTaxReducedWage(Employee emp)  
 {  
   return emp.GrossWage * 0.8;  
 }  


Delegates are classes (derived from System.MulticastDelegate), so they can be instantiated:

 NetWageCalculation calc = new NetWageCalculation(GetTaxReducedWage);  


With .NET Framework 2.0, the C# compiler shipped with a feature called "delegate inference". It gives you the possibility to omit the explicit instantiation with "new": because the delegate type is already mentioned on the left-hand side ("NetWageCalculation calc = ..."), the compiler automatically recognizes that you want to new up a delegate:

 NetWageCalculation calc = GetTaxReducedWage;


Moreover, this version of the Framework introduced anonymous methods. They reduce the amount of code needed to instantiate a delegate because no separate named method has to be written any more:

 NetWageCalculation calc3 = delegate(Employee emp) { return emp.GrossWage * 0.8; };  


The next step in the evolution of delegates were lambda expressions, introduced with .NET Framework 3.5. Lambdas simplify anonymous methods:

 NetWageCalculation calc3 = (Employee emp) => { return emp.GrossWage * 0.8; };


Examining it closely, we will realize that there is still a lot of "ceremony" in the code snippet above. Two features allow further simplification:
"Delegate type inference" gives you the possibility to omit the parameter types ("Employee" in this case) - the compiler automatically infers them from the delegate type:

 NetWageCalculation calc3 = emp => { return emp.GrossWage * 0.8; };

"Return type inference" additionally lets you omit the return statement (and with it the braces) by using an expression body, which leaves the most compact form:

 NetWageCalculation calc3 = emp => emp.GrossWage * 0.8;
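For completeness, a usage sketch (the Employee class is not shown in the snippets above; I assume it looks roughly like this):

 public class Employee
 {
     public double GrossWage { get; set; }
 }

 // all variants shown above can be invoked the same way:
 double netWage = calc3(new Employee { GrossWage = 1000.0 }); // 800.0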






Thursday, February 7, 2013

Implicit transactions in SQL Server

Assume you are "quickly" firing off a single statement like this within SQL Server Management Studio:

 UPDATE Person SET Birthdate = '1976-09-20'  

Depending on the number of rows in the Person table and the overall performance of your database server, this statement will run for some time. While it runs, you can stop it by hitting the "Cancel Executing Query" button.

Why can you cancel the query without any of the birthdates being updated?
This works because SQL Server runs in autocommit mode by default: internally, it wraps every single statement in its own transaction, even if you did not specify one explicitly, and canceling the query rolls this implicit transaction back. So the above statement is effectively the same as

 BEGIN TRANSACTION  
 UPDATE Person SET Birthdate = '1976-09-20'  
 COMMIT  
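If several statements should succeed or fail as one unit, you have to open the transaction yourself. A minimal sketch (the AuditLog table is hypothetical):

 BEGIN TRANSACTION
   UPDATE Person SET Birthdate = '1976-09-20' WHERE Id = 1
   INSERT INTO AuditLog (PersonId, ChangedAt) VALUES (1, GETDATE())
 COMMIT -- or ROLLBACK to undo both statements at once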


Friday, January 18, 2013

Good practices for unit testing

When writing unit tests (no matter which language - the same applies to e.g. JavaScript unit tests), make sure your tests fulfill some quality criteria.

Unit tests should...


... be deterministic: Assert.Equals(Random.Next(), myResultingNumber) is probably a bad idea ;-)

... be repeatable: you should be able to run them 1, 100 or 1000 times in a row, the results shall always be the same.

... be order independent: running TestB before TestA shouldn't have any influence.

... be isolated: strive for not using external systems like databases or services; use a mocking framework instead. Reason: not doing so makes it hard to fulfill some of the other principles listed here, e.g. "fast", "easy to set up", "deterministic" (think of a temporary network problem when connecting to a test database).

... run fast: slow tests will decrease your productivity, and they will be run less often because no one likes waiting.

... be included in the continuous integration process: don't rely on developers triggering the tests manually; they should run automatically (as often as possible).

... be easy to set up: the danger with hard-to-set-up tests is that they simply don't get written.

... be either atomic or integration tests: atomic tests (i.e. tests that cover a very specific, small piece of functionality) are a must; integration tests (covering the collaboration of multiple modules) are not always necessary but sometimes useful. The disadvantage of integration tests is that when they fail, the problem is harder to locate, whereas a failing atomic unit test often does not even have to be debugged to find the problem. Do not mix both types; separate them clearly (e.g. by introducing naming conventions).

... have one logical assert per test: this does not mean you should never have multiple asserts in a test case, but if you do, make sure the asserts are tightly logically connected to each other (see the sketch after this list).

... concentrate on the public "API" of your SUT (which normally covers the private methods indirectly; note that the need to test private methods directly is often an indicator of an SRP violation within the class).

... read like documentation for your system: benefit from your test suite also as additional documentation for your software. Actually, a system without unit tests can hardly be considered "valid": it might be free from obvious bugs (such as users getting error messages), but that does not always mean it works as it should (and other documentation - if available at all - is rarely as precise as unit tests in describing desired behavior).

... have the same code quality as production code: there is NO reason to neglect unit test code. It will grow just like production code grows, and you will run into the same problems as with your production code if you do not apply the same patterns and practices.

... also cover the "sad" path, not only the "happy" path: test unexpected values and behavior as well, including tests for exceptions.

... be written each time a bug occurs in development, testing, or on your live system. This way you make sure the bug never comes back.
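To illustrate some of the points above (deterministic, isolated, one logical assert), here is a minimal sketch in xUnit style; the PriceCalculator class and its API are hypothetical:

 using Xunit;

 public class PriceCalculator
 {
     // hypothetical SUT, included only to make the sketch self-contained
     public double CalculateNetPrice(double grossPrice)
     {
         return grossPrice * 0.8;
     }
 }

 public class PriceCalculatorTests
 {
     [Fact]
     public void CalculateNetPrice_ReducesGrossPriceBy20Percent()
     {
         // Arrange: no database, no services - the SUT is fully isolated
         var calculator = new PriceCalculator();

         // Act
         double net = calculator.CalculateNetPrice(1000.0);

         // Assert: one logical assertion per test
         Assert.Equal(800.0, net);
     }
 }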





Sunday, January 6, 2013

TFS build process templates vs MSBuild

The introduction of build process templates (implemented with Windows Workflow Foundation (WF) / XAML) in Team Foundation Server 2010 did not mean the end of MSBuild scripts. After all, every .csproj or .vbproj project file that Visual Studio generates when creating a new project is an MSBuild script.
WF build process templates provide a higher-level orchestration layer on top of the core build engine MSBuild and add some of the more sophisticated possibilities that come with WF, e.g. distributing a process across multiple machines and tying the process into other workflow-based processes.
Still, a lot of the steps you may want in your project-specific build (e.g. StyleCop analysis, NDepend static code analysis, script and style bundling and minification) can be realized either way. So the question arises which way to go: WF or MSBuild.

I found a guideline from Jim Lamb (a TFS program manager at Microsoft) on how to handle this:
MSBuild is the tool of choice in the following scenarios:

1) the task requires knowledge of specific build inputs or outputs
2) the task is something that needs to happen when you build in Visual Studio (so, for example, you have to decide whether you want a StyleCop check for every local build or only after check-in)

Jim's recommendation is to use WF in all other cases.

In my opinion the WF approach also has its downsides:
1. While it is quite simple to run an MSBuild script on a developer's machine (e.g. for debugging a build problem), this isn't so simple with the WF solution (you would have to install the TFS build service locally).
2. The WF approach cannot be reused when your organization switches from TFS to another ALM platform (e.g. Subversion and TeamCity).
3. You not only have to know how MSBuild works but also need at least a basic understanding of WF.

When leveraging MSBuild, keep in mind that from a maintenance and reuse perspective it is better to create additional MSBuild files (which can be referenced via "Import" statements) rather than writing the additional tasks directly into the project files (they already contain enough).
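A minimal sketch of this technique (file and target names are just examples):

 <!-- CustomBuild.targets: reusable build steps, shared across projects -->
 <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
   <Target Name="AfterBuild">
     <Message Text="Running custom post-build steps..." Importance="high" />
   </Target>
 </Project>

 <!-- in the .csproj, near the end of the file: -->
 <Import Project="CustomBuild.targets" />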






What happens when you click "Build Solution" in Visual Studio?

You probably know that msbuild.exe is somehow involved when you click "Build Solution" from the "Build" menu within Visual Studio.
But msbuild.exe is not called directly; instead, Visual Studio does the same as if you called "devenv.exe /build" from the command prompt. The executable has to be passed the name of the solution together with the desired solution configuration.
devenv.exe is more or less a wrapper that calls msbuild.exe with a set of Visual Studio-specific properties.
Note that devenv.exe only comes with an installed Visual Studio, whereas msbuild.exe is already available with a plain .NET Framework installation.
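For illustration, here are the two command lines side by side (solution name and configuration are just examples):

 REM build via the Visual Studio wrapper
 devenv.exe MySolution.sln /build "Release|Any CPU"

 REM build the same solution directly with MSBuild
 msbuild.exe MySolution.sln /p:Configuration=Release /p:Platform="Any CPU"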