Posted on: March 26th, 2013 by jsolutions

As you will see from some of my previous posts, along with developing in C++, I have also had a fair amount of experience with WPF and C#. Whilst working with these technologies, I have generally employed a Dependency Injection model rather than a Service Locator pattern, as I believe this tends to make dependencies more explicit. Whilst the same can be achieved with a Service Locator pattern, it is often an excuse for a global property bag that can be called upon in any part of the code, which almost always leads to dependencies being hidden. Whilst developing some projects in C++, I have on occasion slipped into the sloppy practice of using a Service Locator badly, and it got me thinking about implementing an IOC container for dependency injection in C++ using Variadic Templates, not only as a means of solving a problem I had, but also as a way of looking into Variadic Templates under VS2012. These C++ 11 features are only available in the November 2012 CTP VC++ download.

Service Locator and Dependency Injection

There has been so much written comparing the two, even to the point of declaring a Service Locator an anti-pattern, which I believe is a little harsh. As such, I am not going to write too much about it here, but point you to a few articles:

Inversion of Control Container and Dependency Injection, Martin Fowler

The Service Locator Pattern, MSDN

Service Locator is an Anti Pattern, Mark Seemann

Service Locator is not an Anti Pattern, J Gauffin

Make of them what you will :)

Variadic Templates in C++ 11

Anyone working in C or C++ will be aware of variadic functions: functions that take a variable number of arguments, such as printf:

int printf ( const char * format, ... );

The C++ 11 standard extends this concept to templates, allowing templates to have a variable number of template parameters:

template <typename... Ts>
int safe_printf(const char* f, const Ts&... ts);
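
Before diving in, here is a minimal sketch (my own example, not from the container code below) of how a parameter pack is typically consumed, using recursion and pack expansion; the IOC container later in this post relies on the same expansion mechanism:

#include <iostream>

// base case: nothing left to print
void print() { std::cout << std::endl; }

// recursive case: print the first argument, then expand the rest of the pack
template <typename T, typename... Ts>
void print(const T& first, const Ts&... rest)
{
    std::cout << first << " ";
    print(rest...);
}

// print(1, 2.5, "three"); outputs: 1 2.5 three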

A good introduction to them, by Andrei Alexandrescu, can be found here:

Variadic Templates are Funadic (Going Native 2012), Andrei Alexandrescu.

As we shall see, as I develop my idea for an IOC Container for Dependency Injection, variadic templates, along with lambda functions, are the ideal mechanism to implement what I am after.

My Dependency Problem

This is a simplified example, but imagine I have a dependency graph a great deal bigger than the one I’m describing, and also imagine that these classes are written a little better! I have four classes as follows:

class One
{
public:
    One(){}
    virtual ~One(){}
    std::string getOutput() const {  return message_;  }
    std::string message_;
};
class Two
{
public:
    Two(){}
    virtual ~Two(){}
    std::string getOutput() const {  return message_;  }
    std::string message_;
};
class Three
{
public:
    Three(){}
    virtual ~Three(){}
    std::string getOutput() const {  return "IOC Created";  }
};
class DependentClass
{
public:
    DependentClass(OnePtr one, TwoPtr two, ThreePtr three) 
      : one_(one), two_(two), three_(three){}
    void output() const 
    { 
        std::cout << "done it" << std::endl; 
        std::cout << "one - " << one_->getOutput() << std::endl; 
        std::cout << "two - " << two_->getOutput() << std::endl; 
        std::cout << "three - " << three_->getOutput() << std::endl; 

    }
private:
    DependentClass(){}
    OnePtr one_;
    TwoPtr two_;
    ThreePtr three_;
};

It is quite clear from the code above that DependentClass is dependent on the other three classes.  In my scenario, I want the One class to be a singleton, I want the Two class to be declared outside any kind of factory method and the Three and DependentClass to be managed inside the IOC container. What does this mean in terms of code? Well imagine the code being something like this (well, actually exactly like this):

IOCContainer container;

container.RegisterSingletonClass<One>();
OnePtr one = container.GetInstance<One>();
one->message_ = "Singleton";

TwoPtr two(new Two());
two->message_ = "Registered Instance";
container.RegisterInstance<Two>(two);

container.RegisterClass<Three>();
container.RegisterClass<DependentClass, One, Two, Three>();

DependentClassPtr instance = container.GetInstance<DependentClass>();

instance->output();

I want to register a singleton instance of One against my IOC container, with the assurance that if an instance of One is used, it is always the same one.  I also want to register an instance of Two that will also be used when needed. In reality this functionality ends up being very similar, although it does have a slight semantic difference.

I then want to register a Three with no dependencies and a DependentClass with the three dependencies that it does have.

You will also notice I am using Ptr classes here. These are just typedefs for smart pointers to the various classes:

typedef std::shared_ptr<One> OnePtr;
typedef std::shared_ptr<Two> TwoPtr;
typedef std::shared_ptr<Three> ThreePtr;
typedef std::shared_ptr<DependentClass> DependentClassPtr;

An IOC Container with Variadic Templates in C++

So now I have outlined what I am trying to achieve, let's have a look at the implementation of the IOCContainer.

 

class IOCContainer
{
private:
    class IHolder
    {
    public:
        virtual void noop(){}
    };

    template<class T>
    class Holder : public IHolder
    {
    public:
        std::shared_ptr<T> instance_;
    };

    std::map<std::string, std::function<void*()>> creatorMap_;
    std::map<std::string, std::shared_ptr<IHolder>> instanceMap_;

public:

    template <class T, typename... Ts>
    void RegisterSingletonClass()
    {
        std::shared_ptr<Holder<T>> holder(new Holder<T>());
        holder->instance_ = std::shared_ptr<T>(new T(GetInstance<Ts>()...));

        instanceMap_[typeid(T).name()] = holder;
    }

    template <class T>
    void RegisterInstance(std::shared_ptr<T> instance)
    {
        std::shared_ptr<Holder<T>> holder(new Holder<T>());
        holder->instance_ = instance;

        instanceMap_[typeid(T).name()] = holder;
    }

    template <class T, typename... Ts>
    void RegisterClass()
    {
        auto createType = [this]() -> T * {
            return new T(GetInstance<Ts>()...);
        };

        creatorMap_[typeid(T).name()] = createType;
    }

    template <class T>
    std::shared_ptr<T> GetInstance()
    {
        if(instanceMap_.find(typeid(T).name()) != instanceMap_.end())
        {
            std::shared_ptr<IHolder> iholder = instanceMap_[typeid(T).name()];

            Holder<T> * holder = dynamic_cast<Holder<T>*>(iholder.get());
            return holder->instance_;
        }
        else
        {
            return std::shared_ptr<T>(static_cast<T*>
                                       (creatorMap_[typeid(T).name()]()));
        }
    }

};

Let's go through what is an initial solution to this problem. I won’t pretend it is the most robust solution, but hopefully it will give you some idea of where to start, as well as a brief introduction to some features of C++ 11.

Firstly, I need a couple of collections to represent the registry, a collection of registered instances and singletons and a collection of creator functions:

std::map<std::string, std::function<void*()>> creatorMap_;
std::map<std::string, std::shared_ptr<IHolder>> instanceMap_;

These two collections use a couple of features that are part of the new C++ 11 standard: function objects and shared pointers. I won’t go into detail here as good references can be found elsewhere:

Polymorphic Wrappers for Function Objects.

C++ Smart Pointers.

You will also notice a Holder interface and template class:

class IHolder
{
public:
    virtual void noop(){}
};

template<class T>
class Holder : public IHolder
{
public:
    std::shared_ptr<T> instance_;
};

I want the classes that are registered with the IOC Container to be independent of any specific interface, but unfortunately standard library containers require the contained elements to be of the same type. By providing an interface with a no-op virtual method (which makes IHolder polymorphic, so the dynamic_cast used later works) and a template class that implements that interface, I can provide a useful wrapper for an instance of a class that can be held in a container, such that the contained class can really be anything.

You will also notice that the registered creator functions return a void pointer. I am not completely happy about this; it requires some internal casting that might not be the best C++ code, but it solves the problem for me. Suggestions for better ways of doing this are very welcome :)
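
One possible alternative (a sketch of my own, not what the container below actually does) would be to store the creators as functions returning std::shared_ptr<void> and recover the concrete type with std::static_pointer_cast, which keeps ownership inside a shared_ptr throughout:

std::map<std::string, std::function<std::shared_ptr<void>()>> creatorMap_;

template <class T, typename... Ts>
void RegisterClass()
{
    // the lambda creates a shared_ptr<T>, which converts to shared_ptr<void>
    // while keeping the correct deleter in the control block
    creatorMap_[typeid(T).name()] = [this]() -> std::shared_ptr<void> {
        return std::shared_ptr<T>(new T(GetInstance<Ts>()...));
    };
}

template <class T>
std::shared_ptr<T> GetInstance()
{
    // ... instance lookup as before, then:
    return std::static_pointer_cast<T>(creatorMap_[typeid(T).name()]());
}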

So, now that we have the basic internal types and collections to represent the data contained in the registry, let's have a look at some of the methods on the IOC container.

Firstly the simplest method, for registering existing instances of a class:

template <class T>
void RegisterInstance(std::shared_ptr<T> instance)
{
    std::shared_ptr<Holder<T>> holder(new Holder<T>());
    holder->instance_ = instance;

    instanceMap_[typeid(T).name()] = holder;
}

There is nothing new to C++ 11 here. It simply creates a Holder wrapping the supplied shared pointer instance and adds it to the registry, using the name from the typeid as the key. This could easily be extended to allow for multiple named registrations as well; a rough sketch of that follows.
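
For example, a named registration might look something like this (the extra name parameter is my own addition, not part of the code above); GetInstance would then need a matching overload taking the same name:

template <class T>
void RegisterInstance(const std::string& name, std::shared_ptr<T> instance)
{
    std::shared_ptr<Holder<T>> holder(new Holder<T>());
    holder->instance_ = instance;

    // key on the type name plus a caller supplied name
    instanceMap_[std::string(typeid(T).name()) + "/" + name] = holder;
}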

Now let's have a look at the RegisterClass method, which uses a number of C++ 11 features:

template <class T, typename... Ts>
void RegisterClass()
{
    auto createType = [this]() -> T * {
        return new T(GetInstance<Ts>()...);
    };

    creatorMap_[typeid(T).name()] = createType;
}

Firstly, we have a variadic template method that can take a variable number of template parameters. This allows classes to be registered with dependencies on zero or more other types. The creator function is then declared as a lambda, createType, with its type inferred using the auto keyword. The creator expands the template parameter pack via the GetInstance<Ts>() calls to create a new instance of T.
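
To make the pack expansion concrete, for the RegisterClass<DependentClass, One, Two, Three>() call shown earlier, the stored lambda is equivalent to writing this out by hand:

auto createType = [this]() -> DependentClass * {
    return new DependentClass(GetInstance<One>(),
                              GetInstance<Two>(),
                              GetInstance<Three>());
};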

The RegisterSingletonClass method is similar:

template <class T, typename... Ts>
void RegisterSingletonClass()
{
    std::shared_ptr<Holder<T>> holder(new Holder<T>());
    holder->instance_ = std::shared_ptr<T>(new T(GetInstance<Ts>()...));

    instanceMap_[typeid(T).name()] = holder;
}

Here, rather than registering a creator function, we create an instance straight away and add it to the registry of instances.

All that leaves now is the GetInstance method:

template <class T>
std::shared_ptr<T> GetInstance()
{
    if(instanceMap_.find(typeid(T).name()) != instanceMap_.end())
    {
        std::shared_ptr<IHolder> iholder = instanceMap_[typeid(T).name()];

        Holder<T> * holder = dynamic_cast<Holder<T>*>(iholder.get());
        return holder->instance_;
    }
    else
    {
        return std::shared_ptr<T>(static_cast<T*>(creatorMap_[typeid(T).name()]()));
    }
}

First we check to see if we have an instance registered and, if so, return that instance. We have to do a fairly safe cast here to convert the holder to the right type. If there is no registered instance, we create one using the creator registered for that type and use a slightly more hairy cast to get a pointer of the correct type to return. Note that there is no check that a creator has actually been registered for the type; a sketch of what that check might look like follows.
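
A minimal error-checked version of GetInstance (my own variation, assuming <stdexcept> and <string> are included) might look something like this:

template <class T>
std::shared_ptr<T> GetInstance()
{
    // return a registered instance or singleton if one exists
    auto instanceIt = instanceMap_.find(typeid(T).name());
    if(instanceIt != instanceMap_.end())
    {
        Holder<T> * holder = dynamic_cast<Holder<T>*>(instanceIt->second.get());
        return holder->instance_;
    }

    // otherwise fail loudly if no creator was registered
    auto creatorIt = creatorMap_.find(typeid(T).name());
    if(creatorIt == creatorMap_.end())
        throw std::runtime_error(std::string("Type not registered: ") + typeid(T).name());

    return std::shared_ptr<T>(static_cast<T*>(creatorIt->second()));
}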

Summary

Whilst this solution has some flaws, particularly around error checking, hopefully it shows how new features of C++ 11 can be used to provide a reasonably elegant solution to the problem of Dependency Injection using an IOC Container.

source code




Posted on: January 26th, 2013 by jsolutions

lvalues and rvalues and lvalue reference types and rvalue reference types are a fairly reliable path to insanity….. or are they?

This post is an attempt to cement a few things in my mind as well as explain to those who are interested what on earth is going on with lvalues, rvalues and references. If after reading this you are none the wiser, then I strongly advise you to go and check out Scott Meyers’ talk on Universal References on Channel 9. He can explain it a lot better than I can.

lvalue vs rvalue

Ok, so first: what is an lvalue and what is an rvalue? My understanding is this:

An lvalue is an assigned value and an rvalue is a non assigned value. In this example x is an lvalue and 10 is an rvalue:

int x = 10;

Also, in this case the result of a + b is an rvalue:

int x = a + b;

“So what?”, I hear you ask. Well, it turns out that move semantics make use of this terminology to provide a type that can be readily identified as one that can be moved rather than copied. An rvalue is by its nature transient, and it is this transience that provides the hint that perhaps we can use this feature to assist in move semantics.

Move Semantics

The best example to explain why move semantics are so important is a large collection of objects such as a vector or some other list. If we are passing vectors around by value, every time we assign an instance, we call the copy constructor on that vector and each element is allocated and assigned in the new vector. Sometimes, we are not interested in the state of the original vector and the overhead could prove unacceptable. It would be much more efficient to move the allocated memory from one vector to the other and performance would be improved significantly.
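
As a rough sketch of the difference (this uses std::move, which is introduced below, and is my own illustration rather than part of the original post):

#include <string>
#include <utility>
#include <vector>

int main()
{
    std::vector<std::string> source(1000000, "some text");

    std::vector<std::string> copied = source;             // allocates and copies every element
    std::vector<std::string> moved  = std::move(source);  // just takes over the internal buffer;
                                                           // source is left valid but unspecified
    return 0;
}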

We could do this ourselves, but C++ 11 now has a feature, rvalue references, that allows us to identify situations where we could do this.

The reason it is called Move Semantics, as I understand it, is because the moving described above can be done without rvalue references. Rvalue references are just a means to allow differentiation and therefore overloading, in order to provide both copy and move constructors, assignment operators and other methods.

So how about some examples.

An Example using Move Semantics

So let's put together a quick example class:

#include <string>
#include <utility>
#include <vector>

class Example
{
public:
    // default constructor so instances can be created directly
    Example(){}

    // copy constructor - copies every element of other.mList
    Example(const Example& other)
    {
        mList = other.mList;
    }

    // move constructor - takes over the storage of other.mList
    Example(Example&& other)
    {
        mList = std::move(other.mList);
    }

private:
    std::vector<std::string> mList;   
};

Alongside the default constructor, the class has a standard copy constructor and a move constructor using an RVALUE REFERENCE TYPE – Example&&. Note that the ‘&&’ is an individual token on its own rather than two & tokens as far as the compiler is concerned. We also use std::move when taking the list from other, because other, being a named variable, is itself an lvalue of TYPE = RVALUE REFERENCE TO AN Example OBJECT; without std::move its contents would be copied rather than moved. This is important to get your head round.

There is also nothing stopping you writing std::move in the copy constructor, but semantically that would not make sense in a COPY constructor. A MOVE constructor is made possible by the new rvalue reference type, permitting overloading and allowing move SEMANTICS.

So, if named variables are ALWAYS lvalues, how do we get an rvalue? As I mentioned earlier, an rvalue is a transient value. The call to std::move RETURNS AN RVALUE REFERENCE, which is treated as an rvalue. In fact, std::move is the standard way to convert an lvalue to an rvalue, and it hints at the possibility that the moved-from object may change as it can now be moved. A short usage sketch follows.
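
To see which constructor gets called, here is an illustrative sketch using the Example class above:

Example a;
Example b(a);             // a is a named lvalue, so the copy constructor is called
Example c(std::move(a));  // std::move yields an rvalue, so the move constructor is called
                          // and a should not be relied upon afterwards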

So, to summarise, moving can obviously be done without rvalue reference types, but rvalue reference types provide a distinction that allows for standardised move semantics that are used throughout the standard library and can be used to optimise parts of code where copying would naturally take place.

Further Insanity Inducing Rvalue References

Hopefully, some of what I have said makes some form of sense and it is at least clear that there is a new reference construct that can help provide a standard way of optimising via move semantics. Now we are going to throw templates and, more generally, type deduction into the mix. Type deduction happens in templates, with the auto keyword and in a few other places in C++. I’m going to look just at templates; hopefully that will indicate how the other features are affected.

Let's create a couple of template functions:

template<typename T>
void doSomething(const std::vector<T>& withThis);

template<typename T>
void doSomething(std::vector<T>&& withThis);

We now have two overloaded functions that can take advantage of scenarios that might require optimisation.

We can call these as follows:

std::vector<std::string> v;

doSomething(v);
doSomething(std::move(v));

The first call uses the first template function and the second call, as it is passing an rvalue via std::move, calls the second template function. Quite simple really – the std::move call tells the reader of the code that doSomething will probably also ‘do something’ to v and we should probably not expect to use v beyond the call to doSomething.

Now what if I had written the following template functions instead:

template<typename T>
void doSomething(const T& withThis);

template<typename T>
void doSomething(T&& withThis);

It will probably surprise you that, in the calling code above, BOTH calls will now resolve to the second template function.

If I were to call the function as follows, the first overload would be called:

const std::vector<std::string> v;

doSomething(v);

“Eh?” I hear you ask.

The important thing here is TYPE DEDUCTION. In the first pair of doSomething functions, the parameter types are built around std::vector<T>: the T is deduced, BUT the std::vector part is not.

In our new doSomething functions, we have a fully deduced type T. Deduced types can be const, non-const, rvalue references, lvalue references – in fact anything. So in our new cases, the compiler INTERNALLY creates the following instantiated function signatures from the template:

void doSomething(std::vector<std::string>& && withThis);
void doSomething(std::vector<std::string>&& && withThis);

These statements are illegal in written code, but internally that is how the compiler sorts out type deduction. So, you may ask, “What the hell is the type once it makes its way to the function body???”. This is obviously an important question as the semantics of the type used to call the function are blurred if not careful.

Well, it turns out that the references collapse, according to the following rules:

T& & -> T&
T&& & -> T&
T& && -> T&
T&& && -> T&&

Still sane?

So what we are effectively telling the compiler to instantiate in the way of function calls is:

void doSomething(std::vector<std::string>&  withThis);
void doSomething(std::vector<std::string>&& withThis);

This means that the body of the template function could be receiving an lvalue reference or an rvalue reference type, so we have to accept that, semantically, it should not assume that the caller expects move semantics to come into play. So if we actually moved withThis, using std::move for example, the state of the application might not be what the caller expected it to be after the call to doSomething.

Fortunately, the standard library comes to our aid with std::forward, which forwards the object on while preserving whether it was originally an lvalue or an rvalue; a tiny sketch is below. I think I will leave it there for now and perhaps come back to std::forward some other time. There is quite a good explanation in this thread of how std::forward works.
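
As a brief taster only (a sketch; doSomethingElse is a hypothetical overloaded function standing in for whatever the body actually calls):

#include <utility>

template<typename T>
void doSomething(T&& withThis)
{
    // std::forward<T> passes withThis on as an lvalue if the caller supplied an lvalue,
    // and as an rvalue if the caller supplied an rvalue
    doSomethingElse(std::forward<T>(withThis));
}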

 

If you are still none the wiser, watch what Scott Meyers has to say about it; he’s a bit more of an expert than me :)




Posted on: January 19th, 2013 by jsolutions

Many of you will know about the async features available with .NET 4.5 and WinRT, and how Microsoft are pushing their new asynchronous methods in their APIs as a means to improve UI responsiveness and the user experience generally.

Recently, I went to a talk given by Liam Westley, hosted by the GL.NET user group, on using these new features, mainly centered around task concurrency, throttling and various other features and patterns taken from a Microsoft white paper found here.

Liam’s blog, which contains source code, video presentations and PowerPoint slides, can be found here. If you are looking for a good introduction to using the new async features effectively, it is a great starting point.

Thanks Liam and the various people that make GL.NET happen.




Posted on: January 17th, 2013 by jsolutions

The Problem

In object oriented programming, it is very common to have a scenario where a large
number of classes share the same base class and the particular implementation needs
to be created at runtime based on some specific parameters, for example a class name
held in a string.

Standard C++ does not provide the same type of reflection mechanism that other languages
use to achieve this, such as C#:

System.Reflection.Assembly.GetExecutingAssembly()
      .CreateInstance(string className)

or in java:

Class.forName(className).getConstructor(String.class).newInstance(arg);

However, C++ does not allow us to do such things; we have to come up with another solution.
The basic pattern, or set of patterns, that help us achieve this are factory patterns.

A Simple Solution

The Base Class

Our base class is defined as an abstract class as follows:

#ifndef CPPFACTORY_MYBASECLASS_H
#define CPPFACTORY_MYBASECLASS_H

class MyBaseClass
{
public:
    // virtual destructor so derived instances can be deleted via a base class pointer
    virtual ~MyBaseClass() {}

    virtual void doSomething() = 0;
};

#endif // CPPFACTORY_MYBASECLASS_H

The Factory Class

A factory method can then be defined as a static method that can be used to create
instances of MyBaseClass. We could define this as a static method on MyBaseClass
itself, although it is generally good practice in object oriented development that a class
serves a single purpose. Therefore, let's create a factory class:

#ifndef CPPFACTORY_MYFACTORY_H
#define CPPFACTORY_MYFACTORY_H

#include "MyBaseClass.h"
#include <memory>
#include <string>

using namespace std;

class MyFactory
{
public:
    static shared_ptr<MyBaseClass> CreateInstance(string name);
};

#endif // CPPFACTORY_MYFACTORY_H

The factory method is expected to create an instance of a class named name that is derived
from MyBaseClass and return it as a shared pointer, as it will relinquish ownership
of the object to the caller.

We shall return to the implementation of the method shortly.

Some Derived Classes

So let's implement a couple of derived classes:

#ifndef CPPFACTORY_DERIVEDCLASSONE_H
#define CPPFACTORY_DERIVEDCLASSONE_H

#include "MyBaseClass.h"
#include <iostream>
using namespace std;

class DerivedClassOne : public MyBaseClass
{
public:
    DerivedClassOne(){};
    virtual ~DerivedClassOne(){};

    virtual void doSomething() { cout << "I am class one" << endl; }
};

#endif // CPPFACTORY_DERIVEDCLASSONE_H

and

#ifndef CPPFACTORY_DERIVEDCLASSTWO_H
#define CPPFACTORY_DERIVEDCLASSTWO_H

#include "MyBaseClass.h"
#include <iostream>
using namespace std;

class DerivedClassTwo : public MyBaseClass
{
public:
    DerivedClassTwo(){};
    virtual ~DerivedClassTwo(){};

    virtual void doSomething() { cout << "I am class two" << endl; }
};

#endif // CPPFACTORY_DERIVEDCLASSTWO_H

A First Attempt at the Factory Method

A simple solution to the implementation of the factory method would be something like
this:

#include "MyFactorySimple.h"

#include "DerivedClassOne.h"
#include "DerivedClassTwo.h"

shared_ptr<MyBaseClass> MyFactory::CreateInstance(string name)
{
    MyBaseClass * instance = nullptr;

    if(name == "one")
        instance = new DerivedClassOne();

    if(name == "two")
        instance = new DerivedClassTwo();

    if(instance != nullptr)
        return std::shared_ptr<MyBaseClass>(instance);
    else
        return nullptr;
}

The factory determines which concrete class to create and has knowledge of every class
via the class headers.

Running the application

A simple main function is now needed so that we can test our implementation:

#include "MyFactorySimple.h"

int main(int argc, char** argv)
{
    auto instanceOne = MyFactory::CreateInstance("one");
    auto instanceTwo = MyFactory::CreateInstance("two");

    instanceOne->doSomething();
    instanceTwo->doSomething();

    return 0;
}

A Visual Studio Project (SimpleFactory.vcxproj) is included with the source code accompanying
this article which can be built and run giving the following output:

I am class one
I am class two

Problems with the Simple Solution

On the surface this looks like a good solution, and it possibly is in some cases. However,
what happens if we have a lot of classes deriving from MyBaseClass? We keep having
to add the include and the compare-and-construct code for every new class.

The problem now is that the factory has an explicit dependency on all the derived
classes, which is not ideal. We need to come up with a better solution; one that removes
the need for constantly adding to the MyFactory::Create. This is where the idea of a
registry of factory methods can help us.

A Revised Factory Class

One of our main objectives is to remove the dependencies on the derived classes from
the factory. However, we still need to allow the factory to trigger the creation of instances.
One way to do this is for the main factory class to maintain a registry of factory
functions that can be defined elsewhere. When the factory class needs to create an instance
of a derived class, it can look up the factory function in this registry. The registry
is defined as follows:

map<string, function<MyBaseClass*(void)>> factoryFunctionRegistry;

It is a map, keyed on a string with values as functions that return a pointer to an instance
of a class based on MyBaseClass.

We can then have a method on MyFactory which can add a factory function to the registry:

void MyFactory::RegisterFactoryFunction(string name,
function<MyBaseClass*(void)> classFactoryFunction)
{
    // register the class factory function
    factoryFunctionRegistry[name] = classFactoryFunction;
}

The Create method can then be changed as follows:

shared_ptr<MyBaseClass> MyFactory::Create(string name)
{
    MyBaseClass * instance = nullptr;

    // find name in the registry and call factory method.
    auto it = factoryFunctionRegistry.find(name);
    if(it != factoryFunctionRegistry.end())
        instance = it->second();

    // wrap instance in a shared ptr and return
    if(instance != nullptr)
        return std::shared_ptr<MyBaseClass>(instance);
    else
        return nullptr;
}

So how do we go about registering the classes in a way that keeps dependencies to a
minimum? We cannot easily have instances of the derived classes register themselves
as we can’t create instances without the class being registered. The fact that we need the
class registered, not the object gives us a hint that we may need some static variables
or members to do this.

I stress that the way I am going to do this may not be the best in all scenarios. I am deeply
suspicious of static variables and members, as static initialisation can be a minefield.
However, I will press on, as the solution serves the purpose of this example and it is up
to the reader to determine whether a solution they use needs to follow different rules
and design.

Firstly we define a method on MyFactory to obtain the singleton instance:

MyFactory * MyFactory::Instance()
{
    static MyFactory factory;
    return &factory;
}

We cannot simply call the following at global scope, as a bare function call is not a valid statement outside of a function:

MyFactory::Instance()->RegisterFactoryFunction(name, classFactoryFunction);

I have therefore created a Registrar class that makes the call for us in its constructor:

class Registrar {
public:
    Registrar(string className, function<MyBaseClass*(void)> classFactoryFunction);
};
...
Registrar::Registrar(string name, function<MyBaseClass*(void)> classFactoryFunction)
{
    // register the class factory function 
    MyFactory::Instance()->RegisterFactoryFunction(name, classFactoryFunction);
}

Once we have this, we can create static instances of it in the source files of the derived
classes as follows (DerivedClassOne):

static Registrar registrar("one",
    [](void) -> MyBaseClass * { return new DerivedClassOne();});

As it turns out, this code is duplicated in every derived class, so a quick preprocessor
define helps:

#define REGISTER_CLASS(NAME, TYPE) \
    static Registrar registrar(NAME, \
        [](void) -> MyBaseClass * { return new TYPE();});

This uses the new C++ lambda support to declare anonymous functions. We then
only need to add the following to each derived class source file:

REGISTER_CLASS("one", DerivedClassOne);

Update 25th January 2013

We Can Do Better …

Although the #define solution provides a neat implementation, we could probably do this in a bit more of a C++ style by converting the Registrar class into a template class as follows:

template<class T>
class Registrar {
public:
    Registrar(string className)
    {
        // register the class factory function 
        MyFactory::Instance()->RegisterFactoryFunction(className,
                [](void) -> MyBaseClass * { return new T();});
    }
};

And now we can replace the use of the macro with:

static Registrar<DerivedClassOne> registrar("one");

We now have a function registry based factory class defined and the main function can
now be slightly modified as follows:

#include "MyFactory.h"

int main(int argc, char** argv)
{
    auto instanceOne = MyFactory::Instance()->Create("one");
    auto instanceTwo = MyFactory::Instance()->Create("two");

    instanceOne->doSomething();
    instanceTwo->doSomething();

    return 0;
}

We can now build and run the project and get the following output:

I am class one
I am class two

References:

Factory (software concept) – Wikipedia: http://en.wikipedia.org/wiki/Factory_(software_concept)

Factory Method Pattern – Wikipedia: http://en.wikipedia.org/wiki/Factory_method_pattern

Abstract Factory Pattern – Wikipedia: http://en.wikipedia.org/wiki/Abstract_factory_pattern

C++ 11 – Wikipedia: http://en.wikipedia.org/wiki/C%2B%2B11#Lambda_functions_and_expressions

Source Code




Posted on: October 31st, 2012 by jsolutions

I had a great session this morning with some sixth formers at Ribston High School in Gloucester, introducing them to agile methods of developing software and comparing them to linear “waterfall” methods. We used SputnikAir, a fictitious airline wanting to take over the air transport world with a fleet of 10000 paper aeroplanes, as a basis on which to explore the Scrum method, its artifacts and its roles.

From a backlog of 10000 planes, we started iterations with myself as the Product Owner, Mikki, their teacher, as the Scrummaster, and the students as the Team. After estimating the number of planes they could make in 2 minutes, they set about making them, before a Sprint Review was held to approve them and a retrospective was held to look at what was going well or badly and what could be improved.

Lots of principles were introduced and discussed, such as the role of the Scrummaster as a facilitator rather than a leader and the need to communicate with the Product Owner. We also looked at the impact of changing requirements and how it could impact a linear process.

I think everyone enjoyed it, I certainly did. As I left the classroom with Mikki, 100 or so paper aeroplanes were making their way to the recycling box ….. indirectly ….. actually it was debatable whether they were going anywhere near the recycling box!

SputnikAir is ready to take over the world! :)

 




Posted on: July 19th, 2012 by jsolutions

The new C++ 11 standard includes many new and interesting features, one of which is Lambda expressions, for anonymous functions. More details on Lambda Expressions can be found here.

The C++ lambda expression syntax is as follows:

[capture] (parameters) -> return-type {body}

We can capture variables from outside the scope of the function body, pass parameters, and declare a return type and a function body very simply.
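
For example (a trivial sketch of my own), capturing by value versus by reference:

int counter = 0;

auto byValue = [counter] () { return counter + 1; };  // copies counter at the point of capture
auto byRef   = [&counter] () { counter += 1; };       // refers to the original counter

byRef();            // counter is now 1
int x = byValue();  // x is 1, using the copy of counter captured when it was 0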

One simple use of a lambda expression is as a filter predicate for std::find_if. In this example we find the first instance of SomeClass with a size less than or equal to a maximum size:

std::vector<SomeClass> myList;
...
int maxSize = 100;
auto pred = [maxSize] (const SomeClass& instance) -> bool {
    return instance.size() <= maxSize;
};
auto result = std::find_if(myList.begin(), myList.end(), pred);

This is just a simple example, which I hope helps with getting started with the lambda syntax; there are many similar uses for this kind of construct, one of which is sketched below.
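
As a further illustrative sketch (not part of the original example), the same style of predicate works with other algorithms, such as std::count_if:

// count how many elements satisfy the predicate, rather than finding the first
auto count = std::count_if(myList.begin(), myList.end(),
    [maxSize] (const SomeClass& instance) -> bool {
        return instance.size() <= maxSize;
    });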

You will also notice the use of the new auto keyword, handy for when the type name could be a little verbose.

 

 

 




Posted on: June 26th, 2012 by jsolutions

Since writing this post yesterday, it has been brought to my attention that this approach may in fact be circumventing some of the security constraints imposed on loading local content. The MSDN link here states the following:

Application data and resources

Sometimes it is useful to refer to resources you have downloaded from the Internet to your app’s local ApplicationData storage (via Windows Runtime APIs). To refer to such content, you must use the scheme “ms-appdata:”, with the path to the file within your ApplicationData local storage. Note that, for security reasons, you cannot navigate to HTML you have downloaded to this location and you cannot run any executable or potentially executable code, such as script or CSS. It is intended for media such as images or videos and the like. Finally, you may also refer to resources that are included in your app, such as localized images, via the scheme “ms-resource:”.

The original article follows ….

Developing Metro apps for Windows 8 is great, until you come across a restriction that appears to serve no purpose and has little in the way of documentation. The WebView control for Windows 8 apps is one of the controls where I have experienced this.

The source code can be found here.

The Problem – Local HTML Resources

The problem is that the WebView has no way of loading local resources from a URL; instead it requires the developer to load the file and use NavigateToString to render the content, which of course loses all base URI context and hence cannot handle images, CSS, JavaScript or any other extra resources. I’m still a bit unclear as to why the situation is as it is, but rather than spend time investigating that, I needed to come up with a solution.

The Solution – In Process HTTP Server

In my scenario, I want to be able to view content whilst disconnected, downloading resources for viewing later. A simple solution is to embed a small HTTP server in my Metro app and use it to serve up the local content. Metro apps are permitted to communicate with themselves via sockets on the loopback address, so this was an ideal solution. It will also prove useful moving forward, as interactions with the content can also be cached whilst disconnected, which is another use case for the application I am working on.

Step 1 – Create a Metro Project with the correct Capabilities defined in the App Manifest.

The example included with this post was created using a new Blank C# Windows Metro App in Visual Studio 2012 RC, and the following Capabilities were added to the App Manifest:

  • Home or Work Networking
  • Internet (Client & Server)
  • Internet (Client) - this probably isn’t needed but was set by default I believe.

This allows the app to open sockets for accepting incoming connections, which is obviously vital for our HTTP server to work correctly.

Step 2 – Prepare Some Content for Serving via HTTP.

I added a simple html file, with a css, js and image file to serve up slightly differing content. These were added to the project as content to be deployed with the application. Equally this could be content that is downloaded prior to serving or documents in a shared part of the disk.

Step 3 – Add WebView control to the MainPage XAML.

As this is just an example app, I didn’t bother with anything beyond hard coding a URL into the WebView markup in the MainPage.xaml file:

<Grid Background="{StaticResource ...}">
    <WebView Source="http://localhost:8088/index.html"/>
</Grid>

The port was hardcoded for now.

Step 4 – Create the HTTP Server class.

The HTTP Server class listens on the socket using a StreamSocketListener:

/// <summary>
/// A simple HTTP Server class.
/// </summary>
class HttpService
{
    ...

    // The default port to listen on
    private const string DEFAULT_PORT = "8088";

    // a socket listener instance
    private readonly StreamSocketListener _listener;

    ...

    /// <summary>
    /// create an instance of a HttpService class./>
    /// </summary>
    public HttpService()
    {
        _listener = new StreamSocketListener();

        ...

        // start the service listening
        StartService();
    }

    /// <summary>
    /// Start the HTTP Server
    /// </summary>
    private async void StartService()
    {
        // when a connection is received, process
        // the request.
        _listener.ConnectionReceived += (s, e) =>
        {
            ProcessRequestAsync(e.Socket);
        };

        // Bind the service to the default port.
        await _listener.BindServiceNameAsync(DEFAULT_PORT);
    }

    ...
}

When a connection is received, the request is extracted. Only the GET verb is supported in this example code:

/// <summary>
/// When a connection is received, process the request.
/// </summary>
/// <param name="socket">the incoming socket connection.</param>
private async void ProcessRequestAsync(StreamSocket socket)
{
    StringBuilder inputRequestBuilder = new StringBuilder();

    // Read all the request data.
    // (This is assuming it is all text data of course)
    using(var input = socket.InputStream)
    {
        var data = new Windows.Storage.Streams.Buffer(BUFFER_SIZE);
        uint dataRead = BUFFER_SIZE;

        while (dataRead == BUFFER_SIZE)
        {
            await input.ReadAsync(data, BUFFER_SIZE,
                InputStreamOptions.Partial);

            var dataArray = data.ToArray();
            var dataString = Encoding.UTF8.GetString(dataArray, 0, dataArray.Length);

            inputRequestBuilder.Append(dataString);

            dataRead = data.Length;
        }
    }

    using(var output = socket.OutputStream)
    {
        // extract the request string.
        var request = inputRequestBuilder.ToString();
        var requestMethod = request.Split('\n')[0];
        var requestParts = requestMethod.Split(' ');

        if (requestParts[0].CompareTo("GET") == 0)
        {
            // process the request and write the response.
            await WriteResponseAsync(requestParts[1], socket.OutputStream);
        }
    }
}

The server then checks the file extension against a list of content types, loads the file and posts it back via the socket:

/// <summary>
/// Write the HTTP response to the request out to the output
/// stream on the socket.
/// </summary>
/// <param name="resourceName">The resource name to retrieve.</param>
/// <param name="outputStream">The output stream to write to.</param>
/// <returns>A task object.</returns>
private async Task WriteResponseAsync(string resourceName, IOutputStream outputStream)
{
    using(var writeStream = outputStream.AsStreamForWrite())
    {
        // check the extension is supported.
        var extension = Path.GetExtension(resourceName);

        if(_contentTypes.ContainsKey(extension))
        {
            string contentType = _contentTypes[extension];

            // read the local data.
            var localFolder = Windows.ApplicationModel.Package.Current.InstalledLocation;

            var requestedFile = await localFolder.GetFileAsync("Data" + resourceName.Replace('/', '\\'));
            var fileStream = await requestedFile.OpenReadAsync();
            var size = fileStream.Size;

            // write out the HTTP headers.
            var header = String.Format("HTTP/1.1 200 OK\n" +
                                     "Content-Type: {0}\n" +
                                     "Content-Length: {1}\n" +
                                     "Connection: close\n" +
                                     "\n",
                                     contentType,
                                     fileStream.Size);

            var headerArray = Encoding.UTF8.GetBytes(header);

            await writeStream.WriteAsync(headerArray, 0, headerArray.Length);

            // copy the requested file to the output stream.
            await fileStream.AsStreamForRead().CopyToAsync(writeStream);
        }
        else
        {
            // unrecognised file type, just handle as
            // a not found.

            var header = "HTTP/1.1 404 Not Found\n" +
                         "Connection: close\n\n";
            var headerArray = Encoding.UTF8.GetBytes(header);
            await writeStream.WriteAsync(headerArray, 0, headerArray.Length);
        }
        await writeStream.FlushAsync();
    }
}

The “Connection: close” header is important for correct operation, otherwise the WebView attempts to re-use the sockets which isn’t supported by this code.

Summary

Whether the WebView control will support local URLs when Windows 8 is finally released is doubtful. However, this solution not only solves the problem but also offers a lot more potential for disconnected scenarios.

The source code can be found here.




Posted on: May 1st, 2012 by jsolutions

Welcome to our new blog.

We have finally updated the jSolutions web site and migrated the blog from http://sputnikdev.wordpress.com. All the content is still here so, if you were following the blog on wordpress.com, please follow this one for updates on Software Engineering.

jSolutions is a small software engineering consultancy, based in the UK. We specialise in desktop application development, whether that be native/cross platform development, or .NET development. We also have Scrum Alliance accreditation and proven experience in agile principles and practices. Please contact us, if you want to know more or if you have software engineering problems that you believe we may be able to solve. Our contact details are on our main web page.

Most of our posts will be related to areas such as C++ and .NET development, Agile principles and practices and other software engineering subjects. As we learn about new technologies and approaches or just want to share our experiences we will post articles here.

 




Posted on: March 29th, 2012 by jsolutions

Recently a blog article was posted by William Edwards, outlining quite harshly why he felt that agile, or should I say Agile, was not only a waste of time but not conducive to producing good software. There are a significant number of responses to his post, so I thought I’d put mine here.

How agile can hinder

A lot of theory for agile process methodology comes from manufacturing, especially kanban. They are all generally geared towards maximising throughput. But when the workers are people rather than machines there is a danger that the process becomes all about grinding out features at any cost. If this becomes the focus it is almost certainly going to lead to demoralisation and burn out.

However, this is a symptom of a top down imposed agile process, where management hear the benefits of increased throughput and efficiency and presume it will solve problems of lack of productivity. Expecting this of any process is perhaps a bit shortsighted.

Scrum, in particular, also has a focus on all team members being equal in role, each able to design, develop and test with equal responsibility. There is an obvious danger in this, which is highlighted in the post: specific individual expertise may be lost amongst the levelling of skills in a team. Some even argue that Scrum is a process for mediocre teams as there is no room for experts.

How agile can help

Having read the article, I am a little unclear on the alternative that the author is suggesting, as he seems to imply that agile is always wrong and his way is always right; I am sure that is not exactly what he intended. However, as some of the replies suggest, any process has to be geared around the project/individual/customer circumstances. If the environment is one where priorities and requirements are changing and timescales are short, then adopting an iterative framework like Scrum may be more appropriate than a model that encourages a lot of up front requirements gathering and design. Not all projects are like that and no process can be globally appropriate.

I believe that Scrum can be said to have succeeded when one no longer needs a scrummaster and maybe even Scrum itself  is no longer followed. Just as deleting code is often more beneficial than writing code, removing process is often more beneficial than adding process. Scrum, like most other agile processes or frameworks encourages everyone, in particular the team to constantly inspect and adapt itself to do better, whatever ‘better’ may mean, which may not always be about just increasing throughput.

Summary

If Team A is in a situation where it is delivering good software reliably to the customer, the business is making good margins on the product, the team enjoys working on it and generally all parties are happy, it shouldn’t need to look for any new process to adopt; the one it currently follows, even if only by implication, obviously works.

However, this is very rare indeed. In Team B there are problems with delivering reliable software, profit margins and developer satisfaction; something needs to be done, and using a framework such as Scrum to encourage everyone to uncover problems, inspect processes and practices, and adapt to improve the situation is a good thing.

Ultimately Team B is aiming to be like the idealistic Team A and an agile approach may help this.




Posted on: March 8th, 2012 by jsolutions

Yesterday, I got an email stating that my Certified Scrum Professional application had been approved. Good news! But what does this mean and why did I do it?

Firstly, I’m not going into details about the certification as they can be found here. Hopefully this post will briefly highlight possible reasons for getting professional technical certification generally and not just Scrum Alliance certification.

Personal Recognition

Everyone likes to have their work recognised and certification can be a part of this. It is just as motivating getting the recognition and stamp of approval by a professional body as it is getting a pat on the back by your work colleagues or a bonus / promotion from your employer.

A Sign of Commitment to Your Profession

Although it is unlikely that a professional certification will improve your chances of progressing in your career as a measure of experience, what it does show is that you are committed to developing yourself professionally and committed to your profession. You shouldn’t need to use it as a sign of your experience, but rather as a sign of your commitment.

A step towards other certifications

Often, certification with a particular professional body follows a path that leads to a certification that can play a part in determining your suitability for a role. With the Scrum Alliance this could be a Trainer or Coach certification.

So, those are some of the reasons why I chose to go for certification; I hope they prove useful to people.