Page 1: TSM_20_2014_en

TSM
TODAY SOFTWARE MAGAZINE

Nr. 20 • February 2014 • www.todaysoftmag.ro • www.todaysoftmag.com

Interview with Philipp Kandal

Metrics in Visual Studio 2013

Cluj IT Cluster on Entrepreneurship

New business development analysis

Multithreading in C++11 standard (II)

Startups: Evolso

Interview with Radu Georgescu

Interview with Alexandru Tulai

Thinking in Kanban

Business pitching or how to sell in 4 simple steps

An overview of Performance Testing on Desktop solutions

Craftsmanship and Lean Startup

How (NOT TO) measure latency

Page 2: TSM_20_2014_en
Page 3: TSM_20_2014_en

6 · Cluj Innovation Days 2014 · Ovidiu Mățan

7 · Interview with Philipp Kandal · Ovidiu Mățan

9 · Evolso · Alin Stănescu

11 · An overview of Performance Testing on Desktop solutions · Sorin Lungoci

15 · Software Craftsmanship and Lean Startup · Alexandru Bolboacă and Adrian Bolboacă

17 · Getting started with Vagrant · Carmen Frățilă

22 · Startup marketing: challenges and practical ideas · Sorina Mone

24 · Cluj IT Cluster on Entrepreneurship · Daniel Homorodean

26 · Interview with Radu Georgescu · Ovidiu Mățan

28 · Restricted Boltzmann Machines · Roland Szabo

31 · Multithreading in C++11 standard (II) · Dumitrița Munteanu

34 · Metrics in Visual Studio 2013 · Radu Vunvulea

36 · How (NOT TO) measure latency · Attila-Mihaly Balazs

39 · Thinking in Kanban · Püsök Orsolya

40 · New business development analysis · Ioana Matei

42 · Business pitching or how to sell in 4 simple steps · Ana-Loredana Pascaru

44 · Gogu and the Ship · Simona Bonghez, Ph.D.

Page 4: TSM_20_2014_en

4 no. 20/February | www.todaysoftmag.com

The word innovation on the cover of the magazine announces the theme of this issue: startups and innovation. But the word was also chosen to symbolically mark the arrival of a new stage of the local IT industry, that of innovation. If, a few years ago, execution and performance were among the most widely employed words, we are now witnessing an unprecedented rise of the concept of innovation on the Romanian market. This new trend points to an evolution in the mentality of the Romanian entrepreneur, who is becoming more and more aware of his capacity to metamorphose from a mere executant into a product creator. Assuming the new status must take into account a reality synthesized suggestively by Radu Georgescu in an interview during How to Web 2013: outsourcing means selling working hours, whereas producing means selling the same product a thousand times. The advice he gives to outsourcing companies is to create small teams that develop products of their own.

The innovation concept may take several forms. A simple search for this word on the site of the magazine, www.todaysoftmag.ro, gives us a few of its possible facets: innovation in big projects, innovation in IT, public-private projects, connecting innovating technologies to the global market, Lean Six Sigma and the management of innovation. These are but a few of the approaches that reveal the endless range of this concept. As further proof of the above-mentioned materialization of the innovative spirit, we review the events that have innovation as their main theme: Startup Weekend Cluj – the most important local event dedicated to the creation of new startups; the team designated as winner in 2013 was Omnipaste, aka Cloud Copy, which is currently in the Deutsche Telekom hub:raum accelerator. Startup Pirates – a new event which offers mentorship to those who wish to create a startup. Cluj Innovation Days – organized by Cluj IT Cluster, which aims to offer its participants, over two days, three parallel sessions of presentations on this theme. Innovation Labs 2.0 – a hackathon organized in Bucharest and Cluj. We invite you to take part in these events, where we hope to see revolutionary ideas that can bring benefits to the Romanian market.

The present edition of the magazine puts forth a series of interviews with Radu Georgescu, Philipp Kandal and Alexandru Tulai, as well as details on the above-mentioned events. The first technical article of the magazine is on the subject of testing, namely An Overview of Performance Testing on Desktop Solutions, followed by Craftsmanship and Lean Startup, which proposes two possible development stages of an application from the perspective of an entrepreneur: discovery and implementation. Vagrant is the title of another technical article, as well as the name of a powerful tool for working with virtual machines; this issue's article is both an introduction and an overview of its possibilities. Next, there is a new series of technical articles: Restricted Boltzmann Machines, the review of the book Maven: The Definitive Guide and How (not to) measure latency. Startup marketing: challenges and guiding marks offers some advice to young entrepreneurs on how marketing should be done with limited resources. Another article dedicated to startups is the one entitled Cluj IT Cluster and Entrepreneurship, which presents the involvement of the Cluster in supporting entrepreneurship. The articles dedicated to management continue with Thinking in Kanban, New business development analysis and Business Pitching – the advertising of today's entrepreneurs.

Finally, I would like to mention that the 20th issue of Today Software Magazine marks two years of existence of the magazine. We thank you for being with us and we promise to carry on with many more editions, at least as interesting as the ones so far!

Enjoy your reading!

Ovidiu Măţan
Founder of Today Software Magazine

Ovidiu Măţan, [email protected]

Editor-in-chief Today Software Magazine

editorial

Page 5: TSM_20_2014_en


Editorial Staff

Editor-in-chief: Ovidiu Mățan [email protected]

Editor (startups & interviews): Marius Mornea [email protected]

Graphic designer: Dan Hădărău [email protected]

Copyright/Proofreader: Emilia Toma [email protected]

Translator: Roxana [email protected]

Reviewer: Tavi Bolog [email protected]

Reviewer: Adrian Lupei [email protected]

Made by Today Software Solutions SRL
str. Plopilor, nr. 75/77, Cluj-Napoca, Cluj, Romania
[email protected]

www.todaysoftmag.com
www.facebook.com/todaysoftmag
twitter.com/todaysoftmag

ISSN 2285 – 3502
ISSN-L 2284 – 8207

Copyright Today Software Magazine

Any total or partial reproduction of these trademarks or logos, alone or integrated with other elements, without the express permission of the publisher, is prohibited and engages the responsibility of the user as defined by the Intellectual Property Code.

www.todaysoftmag.rowww.todaysoftmag.com

Authors list

Alexandru Bolboacă · [email protected]
Agile Coach and Trainer, with a focus on technical practices @ Mozaic Works

Adrian Bolboacă · [email protected]
Programmer. Organizational and Technical Trainer and Coach @ Mozaic Works

Radu Vunvulea · [email protected]
Senior Software Engineer @ iQuest

Attila-Mihaly Balazs · [email protected]
Code Wrangler @ Udacity, Trainer @ Tora Trading

Sorina Mone · [email protected]
Marketing manager @ Fortech

Daniel Homorodean · [email protected]
Member in the Board of Directors @ Cluj IT Cluster

Roland Szabo · [email protected]
Junior Python Developer @ 3Pillar Global

Silviu · [email protected]
Java Consultant @ msg systems Romania

Dumitrița Munteanu · [email protected]
Software engineer @ Arobs

Püsök Orsolya · [email protected]
Functional Architect @ Evoline

Ioana Matei · [email protected]
Project Manager @ Ogradamea

Ana-Loredana Pascaru · [email protected]
Training Manager @ Genpact

Simona Bonghez, Ph.D. · [email protected]
Speaker, trainer and consultant in project management, owner of Colors in Projects

Ovidiu Măţan · [email protected]
Editor-in-chief Today Software Magazine

Alin Stănescu · [email protected]
Project manager @ Evolso

Sorin Lungoci · [email protected]
Tester @ ISDC

Carmen Frățilă · [email protected]
Software engineer @ 3Pillar Global

Page 6: TSM_20_2014_en

TODAY SOFTWARE MAGAZINE


interview

Ovidiu Mățan: Cluj IT Cluster organizes, on the 20th and 21st of March, the second edition of Cluj Innovation Days, an event hosted this time by the University of Agricultural Sciences and Veterinary Medicine in Cluj-Napoca. Why Cluj Innovation Days and not Cluj IT Innovation Days, as last year's conference was called?

Alexandru Tulai: Through its topic, guests and structure, this year's edition of Cluj Innovation Days aims at bringing together the main national and international stakeholders in the innovation process: decision-makers, individuals and organizations interested in changing the paradigm of how we do business and how we educate ourselves, so that we become more oriented towards the generation of innovative ideas and products with high added value. This is also the reason why our event, which we would like in time to become one of the major events of its kind, is no longer entitled Cluj IT Innovation Days. The location where it will take place is not without meaning either: we wish to emphasize the importance of collaboration among researchers, businesses and public authorities. Our vision of the IT industry is one in which IT&C becomes an indispensable infrastructure for development, present in all the verticals of the economy and of society. Cluj Innovation Days is, in my opinion, the type of event through which we can contribute to a long-term consolidation of the IT community and facilitate the development of connections with national and international business partners.

What do you intend to achieve with this year's edition of Cluj Innovation Days?

Cluj Innovation Days is structured around three thematic sessions, Mastering Innovation, Fostering Entrepreneurship and Showcasing Innovation, organized so as to cover the three main constitutive aspects of innovation. The first thematic session regards innovation in a more general context and is concerned with issues related to handling innovation through the management of the innovating product, intellectual property, support obtained through European policies and strategies, as well as other aspects concerning the capacity to implement and sustain innovation processes. The second session is oriented towards the entrepreneurship area and aims at presenting to the public the main mechanisms and steps in the consolidation and capitalization of an innovating idea, by direct reference to the ingredients of startups and spinoffs, as well as ways of attracting investments and other available sources of financing. The last thematic session, Showcasing Innovation, aims at illustrating the means and mechanisms that sustain innovation through successful stories from the business area, meant to motivate and inspire young people, but also small companies in an incipient stage of development, to pursue entrepreneurial initiatives.

Who are the participants you are expecting at this event?

During the two days, the event will reunite over 400 people, from our country and from abroad, among which I would like to mention representatives of the European Commission, the Government of Romania and local public authorities, business people, representatives of the academic environment, ambassadors, members of other national and international clusters, but also representatives of business associations, of the financial-banking sector and investors. As guests, we also expect representatives of academia, such as universities, the Romanian Academy and research institutes. Due to the great number of participants, but especially to the nature of their pursuits, we can state that, for two days, Cluj will become the regional capital of innovation.

More details on the Cluj Innovation Days 2014 event are available on the web site: www.clujinnovationdays.com

The second edition of Cluj Innovation Days is scheduled to take place on March 20th and 21st and is the main yearly event organized by Cluj IT Cluster. The President of the Cluster, Mr. Alexandru Tulai, answered a few questions about the event, exclusively for us.

Cluj Innovation Days 2014

Ovidiu Măţan, [email protected]

Editor-in-chief Today Software Magazine

Page 7: TSM_20_2014_en


Interview with Philipp Kandal

Ovidiu Mățan: Philipp, congratulations on the sale of Skobbler to Telenav. Please tell us three product strengths that made this deal possible.

Philipp Kandal: We have focused consistently on OpenStreetMap, which is comparable to a Wikipedia of maps. It is growing at a very fast pace and is on the way to becoming the most important map in the world. As we have been the most successful company in that area, that was the key reason why Telenav wanted to acquire skobbler. Apart from our strong OpenStreetMap technology, the main assets we built were a strong installed user base (over 4 million users) and our unique offline capabilities, which allow people to use our products fully offline, without a connection to the internet. So, in the end, it was the mix of a great product and an outstanding technology built by a world-class team that made this deal possible.

Probably the most important aspect for the local IT community: how will the Skobbler team be affected by this transaction, in the short and longer term?

This is a deal about doubling down on our efforts and growing the teams and products we've built here. So we clearly expect to grow our team in Cluj and, with more capital, build even more awesome products. We are going to be very aggressive moving forward with the products we are creating and pushing them into the market, i.e. we'll be expanding into more new regions. Short term, you can expect that we're looking to unify the brands between skobbler and Telenav, but we'll definitely continue to focus on building outstanding consumer products for the future.

We know that you are also leading the company from a technical perspective. How will this change? Will you continue to collaborate with Telenav for a while?

I am very committed to what we've built, so I'll definitely stay on as General Manager of our Cluj offices. Long term, I'll have to see, but everybody who knows me is aware that I am a great supporter of entrepreneurship, so that's definitely a path I would explore again; in the meantime, I definitely now have the ability to make some more angel investments in Cluj ;).

Now, the question everyone is waiting for: what will you do next? Will you stay in the maps/navigation area or are you planning to do something totally different?

As I said, I definitely want to see our products grow to tens of millions of users, so for the foreseeable future I'll stay at Telenav. If I were to start something new at some point, it would most likely be in a new area outside of maps/navigation, as I am definitely keen to explore a few areas that need to be rethought from the ground up. I definitely wouldn't do a me-too product, but something that could be revolutionary.

You have been involved in local events like Startup Weekend and have supported many others locally, including our own IT Days. You have also supported and sponsored local startups like Squirrly. What are you planning next? Are you planning to remain in Cluj and build another business?

I truly believe that Cluj is a fantastic place for entrepreneurs, so I'll definitely stay in Cluj for a long time. I am actually considering buying an apartment here, so you can expect me here for the long run. I hope we can all make this a truly competitive place for entrepreneurs and create some internationally respected companies out of Cluj, and I am very keen to help in this process in any way that I can.

Lately, we have witnessed a series of acquisitions of local companies: EBS, purchased by NTT Data, Kno by Intel, and Evoline by Accenture. The latest transaction of this kind was the purchase of Skobbler by Telenav. Philipp Kandal, co-founder of Skobbler, was kind enough to answer a few questions for us.

Ovidiu Măţan, [email protected]

Editor-in-chief Today Software Magazine

Page 8: TSM_20_2014_en


Startup Weekend Global
115+ countries · 556+ cities · 1,200+ events
An average of 93 attendees per event (with a maximum of 150) and an average of 10 new ventures created per event.

Startup Weekend Cluj – history, 2012 + 2013
Attendees: 53.6% non-tech, 41.6% developers, 4.8% designers · 80.4% men, 19.6% women · and more juicy facts:
Ideas pitched: 59 · articles about the event: 95 · partners & media: 40 · mentors: 28 · jury members: 19 · prizes value: 15,800 € · teams formed: 27 · connected devices: 580 · chocolate: 24 kg · energy drinks: 558

How about you write some history right now? Come, pitch, form a team and win – but don't forget to register first!

Page 9: TSM_20_2014_en


Evolso is built from three main technical parts: the mobile app, the website and the REST services. The web platform can be accessed at http://evolso.com. On this site you can find out more information about what evolso stands for, how it can help you in your daily life and how to get the most out of it, as well as info about team members and events. At the same time, there is a part visible only to our partners, where they can log in with an account to create and design events and see statistics of what is happening inside their locations (e.g. how many users checked in, how many will attend the events, etc.).

The mobile app is created for our users and is the main part of the system. We started on Android, given that the market share of Android smartphones in Romania is bigger than that of other operating systems, but we have an iOS version too, which will be launched at the end of this month. The mobile app connects to the server using REST services; the responses are in JSON and some of them are sent as Push Notifications to the devices. Behind the „magic" there is a MongoDB database. We picked MongoDB because it is a NoSQL database that is easy to scale, uses JSON-style documents, is open-source and offers geolocation support out of the box. MongoDB helps us run various location-related queries against the database, an important aspect of the evolso project.

I would like to add that all the matching percentages between users are computed in real time on the server, which means that MongoDB offers incredible response speed.
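The location-related queries mentioned above map naturally onto MongoDB's 2dsphere index and `$near` operator. As an illustrative sketch only (the collection and field names below are invented, not taken from the Evolso code base), this is roughly what such a query document looks like when built from Python:

```python
# Hypothetical field name "location" on a 2dsphere-indexed collection;
# this only constructs the query document, it does not need a server.

def near_query(lon, lat, max_meters):
    """Build a MongoDB $near filter: documents within max_meters of a point."""
    return {
        "location": {
            "$near": {
                "$geometry": {"type": "Point", "coordinates": [lon, lat]},
                "$maxDistance": max_meters,
            }
        }
    }

# With a live server and pymongo, this would be used roughly as:
#   db.venues.create_index([("location", "2dsphere")])
#   db.venues.find(near_query(23.59, 46.77, 500))
query = near_query(23.59, 46.77, 500)
```

Note that GeoJSON coordinates are ordered longitude first, then latitude, which is a common source of bugs in geospatial code.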

We encountered technical difficulties when trying to connect two different individuals. The connectivity could have been done through messages, using different frameworks or our own methods (which we already did), or using voice. We wanted to implement VoIP from the first version of the mobile app, to offer our users a bonus. For this service we used a framework created by Rebtel (http://www.rebtel.com/); this framework is free and can be used on Android and iOS. Other problems arose from the fragmentation of the OS: different errors appeared in the mobile app on different OS versions; the majority were fixed and we will continue to fix them.

startups

Evolso

Alin Stănescu · [email protected]
Project manager @ Evolso

Page 10: TSM_20_2014_en


Transylvania Java User Group
Community dedicated to Java technology
Website: www.transylvania-jug.org
Since: 15.05.2008 / Members: 563 / Events: 43

TSM community
Community built around Today Software Magazine.
Website: www.facebook.com/todaysoftmag
Since: 06.02.2012 / Members: 1171 / Events: 16

Romanian Testing Community
Community dedicated to testers
Website: www.romaniatesting.ro
Since: 10.05.2011 / Members: 702 / Events: 2

GeekMeet România
Community dedicated to web technology
Website: geekmeet.ro
Since: 10.06.2006 / Members: 572 / Events: 17

Cluj.rb
Community dedicated to Ruby technology
Website: www.meetup.com/cluj-rb
Since: 25.08.2010 / Members: 170 / Events: 40

The Cluj Napoca Agile Software Meetup Group
Community dedicated to Agile methodology.
Website: www.agileworks.ro
Since: 04.10.2010 / Members: 396 / Events: 55

Cluj Semantic WEB Meetup
Community dedicated to semantic technology.
Website: www.meetup.com/Cluj-Semantic-WEB
Since: 08.05.2010 / Members: 152 / Events: 23

Romanian Association for Better Software
Community dedicated to senior IT people
Website: www.rabs.ro
Since: 10.02.2011 / Members: 235 / Events: 14

Testing camp
Project which wants to bring together as many testers and QA people as possible.
Website: tabaradetestare.ro
Since: 15.01.2012 / Members: 1025 / Events: 27

February and March are generally marked by events dedicated to startups and to innovation. Now is the time to plan what we are going to do for the rest of the year and to get our concepts validated by the public and the mentors attending these events.

Calendar

February 13 (Cluj) · Launch of issue 20 of Today Software Magazine
www.facebook.com/todaysoftmag

February 15 (Cluj) · Hackathon
ceata.org/evenimente/aniversarea-fundației-ceata-în-cluj

February 17 (Cluj) · Question-Answering Systems
meetup.com/Cluj-Semantic-WEB/events/150657862/

February 20 (Cluj) · Machine Learning in Python
meetup.com/Cluj-py/events/165522292/

February 22 (Timișoara) · Meet the Vloggers
it-events.ro/events/meet-vloggers

February 22 (București) · Electronic Arts CodeWars
it-events.ro/events/electronic-arts-codewars/

February 22 (Iași) · ISTC February 2014 Edition
it-events.ro/events/istc-february-2014-edition/

February 24 (Cluj) · Mobile Monday Cluj #5
meetup.com/Cluj-Mobile-Developers/events/153087052/

February 27 (București) · Gemini Foundry Conf – TSM recommendation
www.gemsfoundry.com

February 28 (Cluj) · Startup Weekend Cluj – TSM recommendation
cluj.startupweekend.org

March 2 (Cluj) · UBBots 2014
it-events.ro/events/ubbots-2014

March 20-21 (Cluj) · Cluj Innovation Days – TSM recommendation
www.clujinnovationdays.com

IT Communities

communities

Page 11: TSM_20_2014_en


I have chosen to talk about performance testing on desktop solutions because the information available on performance is pretty limited, but nevertheless critical to success. I am considering for my story not only my own experience, acquired on different applications coming from industries like finance or e-learning, but also the wisdom of others, expressed as ideas, guidelines, concerns or risk warnings. I hope that my story can make your life somewhat easier and, definitely, more enjoyable when you are asked to perform such a task.

If we speak about the performance of a system, it is important to start from the same definition of what „performance" is. Could it be responsiveness? Resource consumption? Something else? In the context of desktop applications, performance can have different meanings; I will explain them below.

Architecture

From an architectural point of view, there are several types of desktop applications. The layers used are quite the same, but their placement and the interaction between them lead to different architecture types. Among the most used layers we can recall: UI (User Interface), BL (Business Layer), TL (Transfer Layer) and DB (Database Layer).

Please remember that the following types don't cover all combinations of architecture styles:

1. 100% pure desktop – the installed application has the user interface, business layer and database on the same machine, without the need of a network connection. As an example, think of Microsoft Money (personal finance management software). This is a single-tier application that runs on a single system, driven by a single user.

2. Another Client/Server solution is to have a thin client used for little more than taking input from the user and displaying the information, the application being

testing

Application performance can make or break a business, given its direct impact on revenue, customer satisfaction or brand reputation. Praised or criticized, the performance of business-critical applications falls into the top three issues that impact a business. Currently, the pressure on performance is skyrocketing in a marketplace where the demands of application users are getting more varied, more complex and, definitely, real-time.

An overview of Performance Testing on

Desktop solutions

Sorin [email protected]

Tester@ ISDC

Page 12: TSM_20_2014_en


downloaded, installed and run on the server side. It is widely used in universities or factories, or by staff within an intranet infrastructure. An example can be a student having a CITRIX client installed on a local PC and running an internal application on one of the university's servers.

3. The Client/Server application style using a Rich Client and a Server is the most used solution on a desktop platform. This type of architecture can be found mostly on intranet computer networks. This is a 2-tier application that runs on two or more systems and has a limited number of users. The connection exists until logout and the application is menu-driven. Think of Microsoft Outlook or any other desktop e-mail program: the program resides on a local PC and connects momentarily to the mail server to send and receive mail. Of course, Outlook also works offline, but you cannot complete the job without connecting to the server.

Approach

Testing client/server applications requires some additional techniques to handle the effects introduced by the client/server architecture. An example: the separation of computations might improve reliability, but it can increase the network traffic and also increase the vulnerability to certain types of security attacks.

Testing client/server systems is definitely different – but it's not the „from another planet" type of different.

The key to understanding how to test these systems is to understand exactly how and why each type of potential performance problem might arise. With this insight, the testing solution will usually be obvious. We should take a look at how things work and, from that, we'll develop the testing techniques we need.

Testing at the client side is often perceived more like functional testing. This happens because the application is designed to handle the requests coming from a single user. It isn't appropriate to load a desktop application with, let's say, 100 users in order to test the server response; if you did that, you would be testing the local machine's hardware and software (which would doubtlessly be a bottleneck) and not the server or the application's overall speed. Client performance testing should be done considering the following risks:

• Impact on user actions – how fast the application handles requests from that user;

• Impact on the user's system – how light the application is for the user's system (from how fast the application opens, to the memory used while running and other consumed resources).
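Both client-side risks can be quantified without a commercial profiler. The following is a minimal sketch using only Python's standard library, with a toy workload standing in for a real user action; the numbers it produces illustrate the kind of metrics (elapsed time, peak memory) a client performance test would track:

```python
import time
import tracemalloc

def ui_action():
    # Toy workload standing in for a real user action (e.g. opening a view).
    return [str(i) for i in range(100_000)]

tracemalloc.start()                     # begin tracking allocations
t0 = time.perf_counter()
result = ui_action()
elapsed = time.perf_counter() - t0
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

report = {
    "elapsed_s": elapsed,               # impact on user actions
    "peak_bytes": peak,                 # impact on the user's system
}
print(report)
```

In a real test you would run the action many times and compare the measurements against agreed thresholds, rather than inspecting a single run.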

The server part of the client-server system is usually designed to be performance-tested using roughly the same approach as web testing: record the transactions/requests sent to the server, then create multiple virtual users to repeat those flows/requests. The server must be able to process concurrent requests from different clients.
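The record-and-replay idea can be sketched in a few lines: spawn N virtual users, have each replay its recorded requests, and collect the latencies. This is a toy model, not a real load driver; the „request" is simulated with a short sleep where a real script would issue the recorded server call:

```python
import statistics
import threading
import time

def send_request():
    """Stand-in for replaying one recorded transaction against the server."""
    t0 = time.perf_counter()
    time.sleep(0.01)                    # simulated server round-trip
    return time.perf_counter() - t0

def virtual_user(n_requests, out, lock):
    timings = [send_request() for _ in range(n_requests)]
    with lock:                          # collect results thread-safely
        out.extend(timings)

latencies, lock = [], threading.Lock()
users = [threading.Thread(target=virtual_user, args=(5, latencies, lock))
         for _ in range(10)]            # 10 virtual users x 5 requests each
for u in users:
    u.start()
for u in users:
    u.join()

print(f"requests sent: {len(latencies)}")
print(f"median latency: {statistics.median(latencies) * 1000:.1f} ms")
```

Dedicated tools add ramp-up, think times and percentile reporting on top of exactly this pattern.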

In various examples of setting up the test environment for client/server applications, many teams decided to set up multiple workstations (from 2 to 5) as clients, for both functional and performance testing. Each workstation was set to simulate a specific load profile.

Tools

If we compare the tools available for testing the performance of a desktop application with those for the web, the truth is that there is an imbalance, meaning fewer tools in the first category, and even fewer tools able to cover multiple platforms: desktop, web or mobile.

During my investigation on this topic, I have come across some interesting tools described in different articles, forums or presentations.

• Apache JMeter1 is most commonly used to test backend applications (e.g. servers, databases and services). JMeter does not control the GUI elements of a desktop application (e.g. simulate pressing a button or scrolling a page), therefore it is not a good option for testing desktop applications from the UI layer (e.g. MS Word). JMeter is meant to test the load on systems using multiple threads, or users. Since you've got a client application, it will likely only have one user at a time; it makes more sense to test the database response independently of the Windows application.

1 http://jmeter.apache.org/

• Telerik Test Studio2 runs functional tests as performance tests and offers in-depth result analysis, a historical view and test comparison. It is designed for Web and Windows WPF only; no Windows Forms applications are supported.

• Infragistics TestAdvantage for Windows Forms3 supports testing Windows Forms – or WPF-powered application user interface controls.

• WCFStorm4 is a simple, easy-to-use test workbench for WCF services. It supports all bindings (except webHttp), including netTcpBinding, wsHttpBinding and namedPipesBinding, to name a few. It also lets you create functional and performance test cases.

Due to time constraints, the following tools were not investigated, but they might help you test desktop applications: Microsoft Visual Studio, Borland Silk Performer, Seapine Resource Thief, Quotium Qtest Windows Robot (WR), LoginVSI, TestComplete with AQtime, WCF Load Test, Grinder or LoadRunner.

In classic client-server systems, the client part is an application that must be installed and used by a human user. That means that, in most cases, the client application is not expected to execute a lot of concurrent jobs, but it must respond promptly to the user's actions for the current task and provide the user with visual information without big delays. Client application performance is usually measured using a profiling tool.

Profilers, combined with performance counters even at the SQL level, can be a powerful method to find out what happens on the local or the server side.

You might consider using the built-in profiler of Visual Studio. It allows you to measure how long a method takes and how many times it is called. For memory profiling, CLR Profiler allows us to see how much memory the application takes and which objects are being created by which methods.

Most UI test tools can be used to record a script that you then play back on a few machines.
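The measure-how-long-and-how-often idea that the Visual Studio profiler provides for .NET is the same across ecosystems; as a purely illustrative sketch (the profiled function is a toy, not part of any application discussed here), Python's built-in cProfile produces the same kind of report:

```python
import cProfile
import io
import pstats

def expensive(n):
    # Toy method whose call count and duration we want to profile.
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
for _ in range(3):                      # call the method several times
    expensive(50_000)
profiler.disable()

# Render the report: call counts and cumulative time per function.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

The report lists each function with its number of calls and time spent, which is exactly the data needed to decide where the client application loses its responsiveness.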

Collective findings around performance tests

2 http://www.telerik.com/automated-testing-tools/
3 http://www.infragistics.com/products/testautomation
4 http://www.wcfstorm.com/wcf/home.aspx


Page 13: TSM_20_2014_en


Below is an overview of useful findings on desktop application performance, as experienced by myself or some other testers:

• Many instances of badly designed SQL were subsequently optimized

• Several statements taking minutes were improved to sub-second

• Severa l incorrect v iews were identified

• Some table indexes that were not set up were also identified and corrected

• Too much system memory consumed by the desktop application

• Program crashes often occur when repeated use of specific features within the application causes counters or inter-nal array bounds to be exceeded.

• Reduced performance due to excessive late binding and inefficient object creation and destruction

• Memory leaks identified when the application was opened and left running for a longer period of time (a few hours).

Risks

The most encountered problems relate to software and the environment. The predominant issue that concerns the performance tester is stability, because there are many situations when the tester has to work with imperfect or unfinished software.

I will present here some of the risks directly related to performance testing Desktop applications:

• A quite frequent problem during scripting and running the tests for the first time is related to resource usage on the client side leading to a failure (usually because of running out of memory). Some applications often crash when repeated use of specific features causes counters or internal array bounds to be exceeded. Of course, those problems will be fixed, but there is an impact on the time spent, because these scripts have to be postponed until the fix is done.

• Building a performance test database involves generating a lot of rows in selected tables. There are two risks involved in this activity:

• The first one is that, when creating invented data in the database tables, the referential integrity of the database may not be maintained.

• The second risk is that the business rules (for example, reconciliation of financial fields in different tables) are not adhered to. In both cases, the load simulation may not be compromised, but the application may not be able to handle such inconsistencies and therefore fails. It is helpful for the person preparing the test database to understand the database design, the business rules and the application.

• Underestimation of the effort required to prepare and conduct a performance test can lead to problems. Performance testing a client/server system is a complex activity, mostly because of the environment and the infrastructure simulation.

• Overambition, at least early in the project, is common. People involved often assume that databases have to be populated with valid data, every transaction must be incorporated into the load and every response time measured. As usual, the 80/20 rule applies: 80% of the database volume will be taken up by 20% of the system tables; 80% of the system load will be generated by 20% of the system transactions; only 20% of system transactions need to be measured. Experienced testers would probably assume a 90/10 rule. Inexperienced managers seem to mix up the 90 and the 10.

• Tools to execute automated tests do not require highly specialized skills but, as with most software development and testing activities, there are principles which, if complied with, should allow reasonably competent testers to build a performance test. It is common for managers or testers with no test automation experience to assume that the test process consists of two stages: test scripting and test running. On top of this, the testers may have to build or customize the tools they use.

• When software developers who have designed, coded and functionally tested an application are asked to build an automated test suite for a performance test, their main difficulty is their lack of testing experience. Experienced testers who have no experience with the SUT, however, usually need a period to familiarize themselves with the system to be tested.

Conclusions

In summary, some practical conclusions can be drawn and applied in your own work:

• Tools are great and essential, but the problem isn't only about tools. The real challenge is to identify the scope of the performance test. What are the business, infrastructure or end-user concerns?


Among the contractually-bound usage scenarios, also identify the most common, business-critical and performance-intensive usage scenarios from a technical, stakeholder or application point of view.

• Usually the risks are traced to infrastructure and architecture, not the user interface. For this reason, during the planning and design phase you have to have a clear overview of the relationship between concerns and tests. Don't waste time designing ineffective tests; each test should address a specific problem or concern.

• Desktop application performance testing is very close to test automation, as well as to writing code. There is a slight trend on the internet: more and more people develop their own automation/performance tool using .NET, Java, Python, Perl or other languages.

• It's difficult to find a tool that can record the majority of the application at UI level and then play it back with multiple users or threads. It seems that the performance focus for Desktop solutions has moved more to the API / Service Layer.

• For some performance testing (like testing the client side) you don’t need a specific tool, only a well-designed test case set, a group of test suites with some variables and that’s it!

• Factors such as firewalls, anti-virus software, networks, other running programs and so on all affect the client performance, as do operating systems and service packs. It's a complicated set of variables and must be taken into consideration.

• Database, system and network administrators cannot create their own tests, so they should be intimately involved in the staging of all tests to maximize the value of the testing.

• There are logistical, organizational and technical problems with performance testing - many issues can be avoided if principles like the ones shared below are recognized and followed.

• The approach to testing 2-tier and 3-tier systems is similar, although the architectures differ in their complexity.

• Proprietary test tools help, but improvisation and innovation are often required to make a test ‘happen’.

• Please consider the technology when choosing a tool, because some tools can only record applications using WPF (Windows Presentation Foundation) technology, like Telerik Test Studio, while others provide support only for Windows Forms.

• Other limitations observed during testing or investigation:

• Many tools require the application to be already running

• Some of the tools stop the playback if a pop-up is involved (like QA Wizard Pro)
• Others cannot select „Browse” for a file on the local hard drive or even select a value from a menu (Open or Save)

• Other challenges related to performance testing on desktop solutions could be the multitude of operating systems and versions, the hardware configurations or simulating the real environments.
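For the tool-free, client-side measurements mentioned in the conclusions, a plain standard-library timer is often enough. The sketch below is a hypothetical Ruby harness; the operation name and the 0.5-second budget are invented for illustration and are not taken from any specific product or project:

```ruby
require "benchmark"

# Sketch: time an operation against a response-time budget without any
# dedicated performance tool. Operation and budget are hypothetical.
def measure(label, budget_seconds)
  elapsed = Benchmark.realtime { yield }
  verdict = elapsed <= budget_seconds ? "PASS" : "FAIL"
  puts format("%s: %.3fs (budget %.3fs) %s", label, elapsed, budget_seconds, verdict)
  elapsed
end

elapsed = measure("load customer list", 0.5) do
  sleep 0.05 # stands in for the real UI action under test
end
```

The same shape scales to a whole test suite: each test case calls the harness with its own budget, which is exactly the "well-designed test case set with some variables" idea described above.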

Enjoy testing!



Software Craftsmanship and Lean Startup

The Lean Startup movement was a revolution in the startup world, although it started from a simple observation. The standard way to create a startup was until recently “Build, Measure, Learn”:

• An entrepreneur comes up with a product vision
• After obtaining the necessary financing, he/she builds the product according to the vision
• The product is tested on the market
• If it is successful, awesome! If not, it is modified to be closer to the needs of real users.

The problem with this model is that, by the time the product is tested on the market, it has already been built, meaning that money and effort were invested in it. From a technical point of view, the product was often rushed, the programmers worked under pressure and the code ends up difficult to change and full of bugs.

Steve Blank and Eric Ries came up with the idea to reverse this cycle and use “Learn, Measure, Build” instead:

• An entrepreneur comes up with a product vision. The vision is turned into a set of hypotheses.

• Each hypothesis is tested through an experiment. The result of the experiment is measurable and compared with the expectations defined in the hypothesis.

• Once a hypothesis is validated, the solution can be developed and packaged into a product.

While the experiment can include the development of a working prototype for an application, it is not really necessary. The art is in defining the minimum investment needed to validate the hypothesis. The most common experiments, especially for online products, are "A/B tests": two possible implementations compete and are "voted on" by potential users. The most popular one is usually chosen as the winner.
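To make the A/B mechanics concrete, here is a toy sketch of how two competing variants might be compared and a winner chosen. All figures are invented for illustration, and real experiments would also consider sample size and statistical significance:

```ruby
# Toy A/B comparison: two variants, each with a number of visitors and
# conversions; the variant with the higher conversion rate "wins".
# All numbers are invented for illustration.
variants = {
  "A" => { visitors: 1000, conversions: 57 },
  "B" => { visitors: 1000, conversions: 84 },
}

rates = variants.each_with_object({}) do |(name, v), acc|
  acc[name] = v[:conversions].to_f / v[:visitors]
end

winner, best_rate = rates.max_by { |_, rate| rate }
puts format("winner: %s (%.1f%% conversion)", winner, best_rate * 100)
# → winner: B (8.4% conversion)
```

The point is the Learn–Measure loop: define the hypothesis, expose both variants, measure, and only then build the winner.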

On a technical level, Lean Startup requires high agility, due to working in an unknown environment, under discovery. While the standard product development model assumes that the vision, the current and the future features are all known, in a lean startup the vision can change depending on the market (through pivoting), features are discovered through experiments and the next requirements are often surprising.

Flickr is a classic example. Flickr is well known as the image storage service acquired by Yahoo. The initial vision was to build a game called "Game Neverending" that had one feature allowing the player to add and share photos. The team realized after a while that the game was not well received, but the photo sharing feature was. They decided to focus on flickr.com and stop developing the game.

The Lean Startup literature barely mentions the technical aspects. The authors had two reasons: the revolution had to start from the business side, and they assumed programmers would know which technical practices to use.

Consider, though, the context of many online startups today. Many of them deploy a new feature hundreds of times a day. Each deployment is either an A/B test or a validated feature.

Alexandru Bolboacă
[email protected]
Agile Coach and Trainer, with a focus on technical practices @Mozaic Works

Adrian Bolboacă
[email protected]
Programmer, Organizational and Technical Trainer and Coach @Mozaic Works

This development rhythm requires

programming skills such as:

• Incremental thinking – splitting a large feature into very small ones that can be rapidly implemented in order to get fast feedback from the market
• Automated testing – a combination of unit tests, acceptance tests, performance tests and security tests is necessary to rapidly validate any change in the code
• Design open to change – with an inflexible design, any change takes too long
• Code easy to modify – following common coding guidelines, optimizing for readability and writing code that is easy to understand are essential for keeping a high development speed
• Refactoring – the design will inevitably be closed to change in certain points. Fast refactoring is an essential skill to turn it into a design easy to change.

In addition to the technical practices, proficient teams often use clear workflows that commonly include: monitoring, rolling back to a previous version when major bugs appear, validation gates before deployment, etc.

Not all startups need to deploy hundreds of times a day. Sometimes it is enough to have a feedback cycle of under a week. The necessary programming skills are the same. The only difference is that the work can be split into larger increments.

If the link with Software Craftsmanship is not yet clear, let's explore it some more. As we showed in the article on Software Craftsmanship [http://www.todaysoftmag.com/article/en/11/Software_Craftsmanship__404], the movement appeared in order to reduce the cost of change by applying known technical practices and by discovering new ones. Some of the practices we know and use today are:

• Test Driven Development, to incrementally design a solution adapted to the problem and open to change

• Automated testing, to avoid regressions
• SOLID principles, design patterns and design knowledge, to avoid closing the design to change

• Clean Code to write code easy to understand

• Refactoring to bring design back to the necessary level when it is unfit

When should a team working in a startup invest in such practices? The answer should be easy for anyone following the lean startup model: as long as you are still discovering the customers through experiments, there is no need to invest more time than necessary. The best experiments do not require implementation. If code is required after all, the most important thing is to write it as fast as possible, even if mistakes are made.

Once a feature is validated, the code needs to be written, or rewritten, so that it is easy to modify. Otherwise, the risk is to benefit too late from what the experiments taught.

It is important to mention that developers who have practiced these techniques for long enough can implement faster when using them. This is the Software Craftsmanship ideal: develop the skills so well that they become the implicit and fastest way to build software, especially when development speed is very important.

Conclusion

Lean Startup needs Software Craftsmen. The best experiments require zero code. Sometimes, code is needed. A Software Craftsman is capable of implementing it fast and without adding problems.

If, in the discovery phase, practices such as TDD, automated testing or refactoring can be skipped, the implementation phase needs them badly to allow fast deployment of validated features. The faster they are deployed, the higher the chance to have paying customers, ensuring the survival of the company and increasing its chances of success.


Getting started with Vagrant

How many times have you heard „But it works on my machine” or „But it works on my local”? How long does it take to set up an environment? How many times have you encountered differences between the production and development environments? Imagine an ideal world where all developers work on the same pre-built platform and the development and production platforms share the same specs. This world exists and it's called virtualization. Vagrant is a virtualization tool which has an answer to all these questions, making this ideal world a reality. It can be used to create and configure lightweight, reproducible and portable development environments.

Vagrant is written in Ruby by Mitchell Hashimoto (https://github.com/mitchellh). The project started in 2010 as a side-project, in Mitchell Hashimoto's free hours. In the next two years Vagrant grew and came to be trusted and used by everyone from individuals to teams in large companies. In 2012, Mitchell formed his own company, called HashiCorp, in order to develop Vagrant and to provide professional training and support for it. Currently, Vagrant is an open source project, the result of hundreds of individuals' contributions (https://github.com/mitchellh/vagrant).

To achieve its magic, Vagrant stands on the shoulders of giants, acting as a layer on top of VirtualBox, VMware, AWS or another provider. Industry-standard provisioning tools such as shell scripts, Chef or Puppet can be used to automatically set up a new environment. Vagrant is usable in projects written in programming languages such as PHP, Python, Java, C# or JavaScript, and it can be installed on Linux, Mac OS X or Windows systems.

Vagrant offers transient boxes that are portable and can move around, with no permanent residence, just like a vagrant. If you're a developer, you can use Vagrant to isolate dependencies and their configuration in a single disposable, consistent environment. Once the Vagrantfile is created, you just need to run the vagrant up command and everything is up and running on your machine. As an operations engineer, Vagrant gives you a disposable environment and a consistent workflow for developing and testing infrastructure management scripts. You can quickly test things like shell scripts, Chef cookbooks, Puppet modules and more, using local virtualization such as VirtualBox or VMware. Then, with the same configuration, you can test these scripts on remote clouds such as AWS or RackSpace with the same workflow. As a designer, by using Vagrant you can simply set up your environment based on the already configured Vagrantfile, without worrying about how to get the app running again.

By using Vagrant you can achieve the following:

• environment per project – you can have different configuration files for each project
• the same configuration file for development, pre-staging, staging and production environments
• a configuration file (Vagrantfile) that is easy to define and transport
• environments that are easy to tear down; provisionable: infrastructure as code
• versionable configuration files – you can commit all your cookbooks and the Vagrantfile
• configuration shared across the team – by using the same configuration file (Vagrantfile)

Before diving into the first vagrant project, you need to install VirtualBox or any other supported provider. For this example I used VirtualBox. The next step is to install Vagrant. For this step you have to download and install the appropriate package or installer from Vagrant’s download page (http://www.vagrantup.com/downloads). The installer will automatically add vagrant to the system path, so it will be available in terminals as shown below.

$ vagrant
Usage: vagrant [-v] [-h] command [<args>]
    -v, --version    Print the version and exit.
    -h, --help       Print this help.

Available subcommands:
    box          manages boxes: installation, removal, etc.
    destroy      stops and deletes all traces of the vagrant machine
    halt         stops the vagrant machine
    help         shows the help for a subcommand
    init         initializes a new Vagrant environment by creating a Vagrantfile
    package      packages a running vagrant environment into a box
    plugin       manages plugins: install, uninstall, update, etc.
    provision    provisions the vagrant machine
    reload       restarts vagrant machine, loads new Vagrantfile configuration
    resume       resume a suspended vagrant machine
    ssh          connects to machine via SSH
    ssh-config   outputs OpenSSH valid configuration to connect to the machine
    status       outputs status of the vagrant machine
    suspend      suspends the machine
    up           starts and provisions the vagrant environment

The next step is to initialize Vagrant in your project by running the vagrant init command:

$ vagrant init

A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.

After running this command, a new file called Vagrantfile is generated in your project folder. The Vagrantfile is written in Ruby, but knowledge of the Ruby programming language is not necessary to make modifications, since it is mostly simple variable assignment. The Vagrantfile has the following roles:

• Select base box
• Choose virtualization provider
• Configure VM parameters
• Configure networking
• Tweak SSH settings
• Mount local folders
• Provision machine
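Because the Vagrantfile is plain Ruby, the familiar Vagrant.configure("2") do |config| ... end shape is just an ordinary block passed to a method. The toy stand-in below (FakeVagrant and its classes are invented for illustration; this is not Vagrant's real implementation) shows why nothing beyond simple variable assignment is needed:

```ruby
# Hypothetical miniature of a configure-style DSL. This is NOT Vagrant's
# internals; it only illustrates that a Vagrantfile body is plain Ruby.
class FakeVMSettings
  attr_accessor :box, :box_url
end

class FakeConfig
  attr_reader :vm
  def initialize
    @vm = FakeVMSettings.new
  end
end

module FakeVagrant
  # Mirrors the shape of Vagrant.configure: yields a config object to the block.
  def self.configure(_version)
    config = FakeConfig.new
    yield config # the "Vagrantfile" body runs here
    config
  end
end

# The same shape you would write in a real Vagrantfile:
config = FakeVagrant.configure("2") do |c|
  c.vm.box = "precise32"
  c.vm.box_url = "http://files.vagrantup.com/precise32.box"
end

puts config.vm.box # → precise32
```

Each directive in a real Vagrantfile (config.vm.box, config.vm.network, and so on) is simply a method call or assignment on the yielded config object.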

Select base box

The automatically generated Vagrantfile contains the following lines:

# Every Vagrant virtual environment requires a box to build off of.
config.vm.box = "precise32"

# The url from where the 'config.vm.box' box will be fetched if it
# doesn't already exist on the user's system.
config.vm.box_url = "http://files.vagrantup.com/precise32.box"

The config.vm.box line describes the machine type required for the project. The box is actually a skeleton from which Vagrant machines are constructed. Boxes are portable files which can be used by anyone on any platform that runs Vagrant. Boxes are related to providers, so when choosing a base box you have to be aware of the supported provider. In order to choose a base box, you can access the http://www.vagrantbox.es/ web site, where you can find a list of available boxes. Vagrant also offers the possibility of creating custom boxes. A nice tool which can be used to create custom boxes can be found here: https://github.com/jedi4ever/veewee.

There are two ways to add a base box. One option is to define the base box in the Vagrantfile's config.vm.box directive; when you run the vagrant up command, the box will be added. The second option is to execute the command below:

$ vagrant box add <name> <url>

The <name> parameter can be anything you want; just make sure it is the same value defined in the config.vm.box directive from the Vagrantfile.

<url> is the location of the box. This can be a path on your local file system or an HTTP URL to a remote box.

$ vagrant box add precise32 http://files.vagrantup.com/precise32.box

Available commands for boxes are described below:

$ vagrant box -h
Usage: vagrant box <command> [<args>]

Available subcommands:
    add
    list
    remove
    repackage

For help on any individual command run `vagrant box COMMAND -h`

Choose virtualization provider

There are two ways to specify the provider, similar to those described for the base box. The first option is to specify the provider from the command line as a parameter. If you choose this solution, you have to make sure that the argument from the command line matches the Vagrantfile's config.vm.provider directive.

$ vagrant up --provider=virtualbox




Configure VM parameters

The Vagrantfile offers the possibility to configure the providers by adding the vb.customize directive. For example, if you want to increase the memory, you can do as shown below.

vb.customize ["modifyvm", :id, "--memory", "1024"]

# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
config.vm.provider :virtualbox do |vb|
  # Don't boot with headless mode
  # vb.gui = true

  # Use VBoxManage to customize the VM. For example to change memory:
  vb.customize ["modifyvm", :id, "--memory", "2048"]
end

Configure networking

Accessing the web pages from the guest machine is not such a good idea, so Vagrant offers networking features in order to access the guest machine from our host machine. The Vagrantfile has three directives which can be used to configure the network.

• config.vm.network :forwarded_port, guest: 80, host: 8080

Translated, this directive means that the Apache server on our guest machine, created by Vagrant, can be accessed on our host machine using the URL http://127.0.0.1:8080. In other words, all traffic sent to port 8080 on the host machine is forwarded to port 80 on the guest machine.

# Create a forwarded port mapping which allows access to a specific port
# within the machine from a port on the host machine. In the example below,
# accessing "localhost:8080" will access port 80 on the guest machine.
config.vm.network :forwarded_port, guest: 80, host: 8080

• config.vm.network :public_network

# Create a public network, which generally matched to bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
config.vm.network :public_network

• config.vm.network :private_network, ip: "192.168.33.10"

# Create a private network, which allows host-only access to the machine
# using a specific IP.
config.vm.network :private_network, ip: "192.168.33.10"

Tweak SSH settings

The Vagrantfile also offers the possibility to configure the config.ssh namespace, in order to specify username, host, port, guest_port, private_key_path, forward_agent, forward_x11 and shell.

Vagrant.configure("2") do |config|
  config.ssh.private_key_path = "~/.ssh/id_rsa"
  config.ssh.forward_agent = true
end

Mount local folders

While many people edit files from virtual machines just using plain terminal-based editors over SSH, Vagrant offers the possibility to automatically sync files on both guest and host machines, by using synced folders. By default, Vagrant shares your project folder to the /vagrant directory on the guest machine. So the /vagrant directory that can be seen on the guest machine is actually the same directory that you have on your host machine, and you no longer have to use the Upload and Download options from your IDE in order to sync files between the host and the guest machines. If you want to change the synced directory on the guest machine, you can add the directive config.vm.synced_folder "../data", "/vagrant_data" in the Vagrantfile.

Vagrant.configure("2") do |config|
  config.vm.synced_folder "../data", "/vagrant_data"
end

Provision machine

Provisioning isn't a matter that developers usually care about, as sysadmins typically handle it. The idea is to somehow record the software and the configurations made on the server, in order to be able to replicate them on other servers. In the old days, sysadmins kept a wiki of the executed commands, but this was a terrible idea. Another option was to create .box or .iso backups, so new servers could be configured based on those files. But maintaining these backups up to date requires a lot of work and, as time goes by, it's quite hard to keep all the machines synced. Provisioning nowadays offers the possibility to add specific software, create configuration files, execute commands, manage services or create users, by using modern provisioning systems. Vagrant can integrate the following provisioning systems: Shell, Ansible, Chef Solo, Chef Client, Puppet, Salt Stack. The two most popular provisioning systems are Chef and Puppet, both supported by large communities. Both are written in Ruby and have similar features, like modularized components, packages for software installs or templates for custom files. As a note, both systems are open source projects with an enterprise revenue model.

Provisioning with Shell

Provisioning with Shell in Vagrant is quite easy. There are three ways to do it: you can write an inline command, or you can specify the path to a shell script. The path can point to either an internal or an external location.

Inline command
config.vm.provision :shell, :inline => "curl -L https://get.rvm.io | bash -s stable"

Internal path
config.vm.provision :shell, :path => "install-rvm.sh", :args => "stable"

External path
config.vm.provision :shell, :path => "https://example.com/install-rvm.sh", :args => "stable"

Provisioning with Puppet

Puppet modules can be downloaded from https://github.com/puppetlabs. In order to configure Vagrant with Puppet, you have to set up the Puppet directives as follows:


config.vm.provision :puppet do |puppet|
  puppet.manifests_path = "./tools/puppet/manifests/"
  puppet.module_path = "./tools/puppet/modules"
  puppet.manifest_file = "init.pp"
  puppet.options = ['--verbose']
end

init.pp

include mysql::server

class { '::mysql::server':
  root_password => 'strongpassword',
}

class mysql::server (
  $config_file        = $mysql::params::config_file,
  $manage_config_file = $mysql::params::manage_config_file,
  $package_ensure     = $mysql::params::server_package_ensure,
)

Mysql params.pp:

class mysql::params {
  $manage_config_file = true
  $old_root_password  = ''
  $root_password      = 'strongpassword'
}

Mysql template:

[client]
password=<%= scope.lookupvar('mysql::root_password') %>

Provisioning with Chef Solo

Chef Solo cookbooks can be downloaded from https://github.com/opscode-cookbooks. For Chef Solo, the Vagrantfile has to be edited as shown below, in order to configure the path to the cookbooks.

config.vm.provision :chef_solo do |chef|
  chef.cookbooks_path = "cookbooks"
  chef.add_recipe "vagrant_main"
  # chef.roles_path = "../my-recipes/roles"
  # chef.data_bags_path = "../my-recipes/data_bags"
  # chef.add_role "web"
  #
  # You may also specify custom JSON attributes:
  # chef.json = { :mysql_password => "foo" }
end

vagrant_main/recipes/default.rb

include_recipe "apache2"
include_recipe "apache2::mod_rewrite"

package "mysql-server" do
  package_name value_for_platform("default" => "mysql-server")
  action :install
end

vagrant_main/templates/default.rb

NameVirtualHost *:80
<VirtualHost *:80>
  ServerAdmin webmaster@localhost
  ServerName cfratila.tsm.com
  ServerAlias www.cfratila.tsm.com
  DocumentRoot /var/www

  <Directory "/var/www/sites/all/cfratila.tsm.com">
    Options Indexes FollowSymLinks MultiViews
    AllowOverride All
    Order allow,deny
    Allow from all
  </Directory>

  ErrorLog <%= node[:apache][:log_dir] %>/tsm-error.log
  CustomLog <%= node[:apache][:log_dir] %>/tsm-access.log combined
  LogLevel warn
</VirtualHost>

If you're still undecided about which provisioner to choose, have a look at Table 1 for a helping hand.

Once you are done with the Vagrantfile configuration, you are ready to create the virtual machine. For this step, open your command line interface and navigate to the project's folder, where the Vagrantfile should also be placed, in order to sync the folders. Then just type vagrant up and your guest machine will be created. The first time you run vagrant up it will take a while, because Vagrant will download the configured box. In our case, I didn't add the box with the vagrant box add <name> <url> command, so the box will be added by vagrant up, as shown below.

D:\projects\tsm> vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
[default] Box 'precise32' was not found. Fetching box from specified URL for
the provider 'virtualbox'. Note that if the URL does not have a box for this
provider, you should interrupt Vagrant now and add the box yourself. Otherwise
Vagrant will attempt to download the full box prior to discovering this error.
Downloading box from URL: http://files.vagrantup.com/precise32.box
Progress: 1% <Rate: 41933/s, Estimated time remaining: 2:15:30>

Table 1. Chef vs. Puppet
Fig. 1 Puppet Module's Structure
Fig 2. Cookbook files structure

After the guest machine is created, you can simply type vagrant ssh in order to access it. On Windows systems you can install the PuTTY SSH client, using the authentication information as shown below:

D:\projects\tsm> vagrant ssh
'ssh' executable not found in any directories in the %PATH% variable. Is an
SSH client installed? Try installing Cygwin, MinGW or Git, all of which
contain an SSH client. Or use the PuTTY SSH client with the following
authentication information shown below:

Host: 127.0.0.1
Port: 2222
Username: vagrant
Private key: C:/Users/Carmen/.vagrant.d/insecure_private_key

If you have modified only the provisioning scripts and want to test quickly, you can just run vagrant provision or vagrant --provision-with x,y,z, where x,y,z represent the provisioners, such as :shell or :chef_solo.

If you want to save the state of the machine rather than doing a full boot every time, you can run vagrant suspend. With the vagrant resume command you can resume a Vagrant machine that was suspended.

If you want to shut down the machine you should use the vagrant halt command. By using this command you save space, but it will take longer to restart the machine because of booting. With vagrant up, you'll have your machine running again. If you need to run halt and then up, for example because you just modified the Vagrantfile, you can simply run vagrant reload.

If you want to stop the machine and remove all the allocated resources, you can do so by typing vagrant destroy.

Vagrant vs. Docker

Docker is an open source project to pack, ship and run any application as a lightweight container. The main idea is to create components per application; the component is actually a snapshot of the application. After making changes in the component, you can commit the new state of the snapshot, so rolling back to a previous state is quite easy. This project is remarkable because it doesn't involve virtual machines as Vagrant does, which means that startup time and resource usage are better. Also, you can forget about cookbooks if you don't want to use Chef anymore. The interactive command line tutorial (http://www.docker.io/getting-started/) is also very intuitive and in less than 20 minutes you can get an overview of what Docker is.

By comparing the workflows we can make the following points:

1. Docker is better on the provisioning side.
2. Rolling back is easier with Docker because of the snapshot system.
3. Docker introduced a new deployment model.
4. Docker is supported only on Ubuntu Linux machines.
5. Docker is not recommended in production since it is still in the development phase.
6. Vagrant is better because it keeps source code and deployment information in the same place.
7. Vagrant is better because it is stable and can be used in production.
8. Vagrant is better because it can be integrated with Linux, Windows and Mac OS X systems.

Fig 4. Vagrant workflow

Carmen Frățilă[email protected]

Software engineer@ 3Pillar Global


Sorina [email protected]

Marketing manager@ Fortech

management

Startup marketing: challenges and practical ideas

Budget, team and limited resources in general, anonymity and the need to create awareness, sometimes the need to educate the market and the need to generate sales opportunities: these are just some of the challenges faced by startup businesses. In this context, the process and the marketing approach applied by startups have at least some particularities, often addressed in the marketing literature and certainly „lived” by many organizations in the early stages of their existence.

First of all, perhaps more than other processes, marketing in startups is innovative. Limited resources put managers and marketing specialists (if there are dedicated people!) in a position where they need to find solutions to these shortcomings, often unconventional ones. It is often said that desperation leads to innovation. Exactly for this reason, the term „growth hacking”, introduced by Sean Ellis and centered on creativity and the use of unconventional methods to achieve rapid and spectacular growth, is very popular in the context of startups in the technology area. Sure, it does not involve reinventing the wheel, but rather the use of already popular concepts and practices (such as those in the area of content marketing or community marketing), in a unique way that attracts attention and facilitates rapid dissemination. Products such as Dropbox, LinkedIn and YouTube are often given as examples of growth hacking.

Furthermore, the approach is short term. This is rather natural: the business has no history yet, estimates are hard to define and, in addition, the aim is, to a certain extent, survival. Unfortunately, sometimes, and especially in less mature economies such as our country's, factors from the external environment, especially the macro-environment (unstable legislation, the socio-economic situation etc.), also contribute to this.

Finally, marketing is strongly influenced by the personality of the manager, who usually is also the business owner (the entrepreneur). Often, the organizational culture of a young or small company revolves around the entrepreneur and adopts defining characteristics of this person's personality. Essentially, this is not a bad thing, but situations may occur where too much reliance on the entrepreneur causes shortcomings. A simple example is blockage at the decision-making level; there may be situations where no decisions are taken in the absence of the manager-entrepreneur and nothing gets done, while agility should be one of a startup's strengths.

Given these challenges, the marketing strategy (the segmentation - targeting - positioning process and the entire marketing mix: product, price, promotion, distribution) in a startup consists most often of identifying a set of priorities, taking decisions and executing them. Some references are essential in this context:

First of all, it’s necessary to provide answers to a series of questions aimed to


define the positioning of the product: What consumer segments does it address? What needs or problems does it solve for these consumers? What makes it different from other similar products that are already on the market? Knowing the future customer in as much detail as possible is essential, since it may provide the basis for important decisions, including decisions in the product conception and development phase (with respect to functionalities, characteristics and so on). In this context, marketers usually resort to creating a very comprehensive profile of the user (i.e. a user persona) which includes demographic, social and economic characteristics such as age, occupation, income, habits, interests and so on.

Once the target consumer, the need and the competitive advantage of the product are identified, this advantage has to be communicated. This is when branding comes in, a process which, at a basic level, includes choosing a name and articulating a mission and vision statement, as well as some key value propositions. Especially when dealing with B2C software products addressed to individual consumers, it's important for the name to be easily identifiable and memorable. The key messages, in order to actually reach the consumers and make them aware of the product, have to articulate the benefits, the solutions to problems they are confronted with, rather than objective characteristics of the product. Also, as early as possible, intellectual property protection issues have to be considered: brand registration, web domain acquisition, "parking" of the pages on social networking platforms and so on.

Then follows the most interesting and challenging part for a marketing professional in a startup: to generate demand for the product and create opportunities for sales. The activities that define this stage can be grouped into the following two categories:

Outbound marketing, which includes activities such as advertising, email marketing and telemarketing, attending fairs and exhibitions etc.; generally, activities that involve reaching out to the target consumers.

Inbound marketing includes marketing efforts which, on the contrary, contribute to the consumer finding the product or the company and not the other way around. In the area of digital marketing, inbound marketing activities may involve developing a website that attracts visitors naturally through search engine optimization, social media, blogging, press etc. Inbound marketing efforts require time, but are generally less costly than advertising (as they may only require creativity and editorial capabilities) and, very importantly, once built, they produce results and long-term impact.

All these aspects may not only make the difference between survival and failure for a startup, but can also offer premises for growth and expansion. And this is where growth hacking comes in, especially when the startup aims to attract financing for further development. Investors are interested in success stories, not just ideas with potential and, most importantly, they want healthy businesses. This is why growth hacking has to be approached with caution and care: figures are important, but they must have a solid basis, meaning a quality product which delights and does not disappoint the users. Having the right experience with a product may turn users into the main engine for business growth, an ideal situation for a startup with great plans.



Daniel [email protected]

Member in Board of Directors@ Cluj IT Cluster

management

Cluj IT Cluster on Entrepreneurship

From the very first day the Cluj IT Cluster was created, we were confronted with the question of how we plan to support local entrepreneurship. The question has constantly accompanied us in the more than one year since the Cluster was founded, at all conferences and events attended by our representatives.

The question was, of course, always answered, though we always felt that to a certain extent the given answer was not enough, neither for the ones who asked, nor for us, the members of the Cluster.

This comes from the fact that we have assumed an ambitious role in supporting the evolution of the Romanian IT industry, we have created expectations that we have to fulfil, and we are aware that we will be evaluated by the objective results we deliver.

Cluj IT Cluster came into being following a direct need, from the understanding that the way the Romanian IT industry has grown and worked for many years may not be sustainable in the long term. There are over 8,000 IT professionals in Cluj, an impressive concentration compared to the population of the city. We got to this number by organic and relatively fast growth that relied primarily on the low cost of the labour force, and afterwards on a cost that is competitive for the qualification and quality of our programmers. Still, relying on the cost of the labour force as the main growth factor is not just a dangerous strategy, but a suicidal one, as the continuously growing cost of production in Romania tends to diminish the advantage we have on our traditional markets (Western Europe and the U.S.).

It is not a secret for anyone that the only way to ensure growth in the long run is to create additional and sustainable value, and the healthiest way to achieve this is by developing and capitalizing on innovative products.

How can we develop and capitalize on innovative products in Romania? Without an answer to this key question, without outlined solutions, the answers to all other questions are fragile. Innovation is not a one-time act, it is a process. This process should be developed and carried out with a lot of maturity and wisdom.

Indeed, the preparation for this takes time, more time than any of us would want, because in itself it is a learning process, and changing direction from outsourcing to innovation means, and requires, a change of culture.

During the first year in the life of the Cluster we have looked for different ways to lay the cornerstone of this change.

We say that it is a change of culture because we envision moving from a fragmented IT environment, with a lot of mistrust, to an environment based on cooperation between companies, universities and public institutions. This is about changing the way fundamental research gets to serve pragmatic and lucrative purposes, through technological transfer, instead of being confined to the narrow circle of researchers. We are talking about the change in our corporate culture,


as our companies do not consist of “resources” hired by the hour, but are made of valuable people who can and want, through their initiatives, to contribute to the success of their companies; they are made of people who deserve to benefit from this success. We are talking about a cultural change from rigid systems, reluctant to risk, where each aspect is strictly regulated, towards dynamic and adaptive systems where courageous initiatives are supported and can flourish organically. All these changes required, and still require, time. But now, after over a year of working together in the Cluster, even if it is less visible from the outside, we know that we are on the right track and that we can take the next steps with great boldness.

Currently, in Romania and especially in Cluj, we are in a period of great effervescence of the start-up culture in IT. There is access to information; there are lots of inspirational examples in the more developed markets; we begin to have people and organisations that coagulate this movement through events, meeting centres and co-working centres, and even accelerators. We also know that there is money that can be reached; it is not impossible to get it once we are able to support a value proposition and can convince investors that we have the ability and the maturity to put our ideas into execution. We know that innovative products are built and supported by entrepreneurs; therefore it is essential to have them and to support them.

Now we have enough organizational maturity to be able to support such a process in a relevant way.

Cluj IT aims to be a catalyst for the entrepreneurial environment in Cluj and Romania.

We aim to be a coagulation factor and to ease the communication processes within the ecosystem of start-ups, thus creating an environment where they can grow in an organic, collaborative way, not fragmented or circumstantial. We intend to foster the cooperation between start-ups and mature businesses, academia and the public administration. We aim to develop a durable framework for entrepreneurial education, addressed not just to start-ups but also to mature organizations dealing with the challenge of reinventing themselves, of discovering the potential of their employees and of supporting internal entrepreneurship (the term intrapreneurship was already coined for this).

Between the mature business environment and the start-ups there is still a gap that we need to bridge in order to create a healthy ecosystem. The mature companies benefit from a good level of resources, both human and financial; they also have operational experience and relevant partnerships, they understand the business verticals and have a good knowledge of the mechanisms of the external markets where they operate. However, many of them have rigid structures and processes, there is a great reluctance to risk and intrapreneurial initiative is not encouraged. On the other hand, the young entrepreneurs don't have enough resources and operational experience, they have poor knowledge of the business segments they would like to serve, and they don't know enough about the specific geographic markets in which their ideas could turn into success. Enthusiasm is essential, but it does not compensate for the lack of experience, and the unavoidable difficulties in the process of entrepreneurial self-education may become discouraging. The two sides need each other; without bridging this gap the Romanian IT environment might evolve through struggle rather than attain its full potential. Therefore it is very important for us to address this issue, to create a relevant dialogue between the two sides and to encourage the development of partnerships.

Also, the exchange of information and knowledge between the IT companies is essential, especially concerning methodologies and best practices, as well as specific geographic markets.

We aim to facilitate access to relevant consultancy from mature markets, access to operational or financial partners, and direct funding, especially venture capital.

Concerning the relationship with the academic environment, a permanent brokerage program is being prepared in order to facilitate cooperation in research, to align research with the market needs and to foster the technological transfer needed to build innovative products.

So, we do have a strategy. You will of course ask if we also have a plan. Indeed we have one, but it matters less to declare it and much more to prove it. Because, as many investors and entrepreneurs say, ideas are valuable, but execution is what matters most.


interview

Interview with Radu Georgescu

Ovidiu Mățan: Hello, Radu! You are one of the most famous entrepreneurs from Romania! Can you tell us, for the readers of Today Software Magazine, how it all started? Where did you start from?

Radu Georgescu: It is history; I started in 1992, I graduated from the faculty, sold my first programs, after which I started others; some of them failed.

The first applications were antiviruses?

I started by writing an application over AutoCAD/Autodesk; after that, I continued with three other products, all three of which failed; the antivirus was sold to Microsoft. After that, I set up other companies, some of them failed; Gecad ePayments was sold to Naspers.

If we are talking about Gecad and the famous sale to Microsoft, can you tell us whether Microsoft was attracted by the technical aspect of the application, or was there also a marketing effort towards them?

Microsoft was interested only in the technical aspect. Microsoft purchased the technology and the technical staff that came along with the technology.

Is the technical staff still working for Microsoft?

Absolutely, only not from Bucharest, but from Redmond. Practically, the company built the technology for Microsoft, and Microsoft made an offer to the respective people to move to Redmond; they have all moved there and are still there. They are the main part of the team that is developing security for Microsoft today.

Related to the last Avangate success, can you tell us a few words about it? How long did it take to develop it?

The company started in 2006, seven years ago, growing by 70% each year.

Were the clients Romanian?

The company was multinational, with headquarters in the United States and offices in Romania, Holland, China, Russia and Cambodia.

Startups are becoming fashionable in Romania; how do you see their evolution in the future? Can we expect them to outgrow outsourcing at a certain point?

I am also a vice-president of ANIS, Andrei Pitis is the president, and together with him we have set ourselves the goal of persuading the outsourcing companies to also build a small amount of product in-house. I think outsourcing has the following problem: it is

We are beginning to publish a series of interviews from How To Web 2013, the most important event dedicated to innovation and entrepreneurship in Eastern Europe. Radu Georgescu is a well known IT entrepreneur from Romania, as he built several products: RAV Antivirus, sold to Microsoft, Gecad ePayment, sold to Naspers, and the latest big transaction was the sale of Avangate.

Ovidiu Măţan, [email protected]

Editor-in-chief Today Software Magazine


generally dependent on one or two clients; it is a cheap sale of person-minutes, unscalable, while a product is exponentially scalable and independent of the provider. We are trying to develop this trend of migration from outsourcing towards products and I hope it will happen soon. We can already see it happening slowly. ANIS has even done a study on this, released two weeks ago.

Which domains do you think will be of interest and have potential in the future?

I attended Robin's presentation this morning, which was exceptional, and there is a lot of truth in it. I don't know; I am not a visionary, I cannot answer this question.

Radu Georgescu: Now let me ask you a question. Why don't you write your magazine in English?

Ovidiu Mățan: But it does come in English, too. Usually, the one in English comes out a month later.

Radu Georgescu: Excellent! Congratulations! A software magazine written in Romania! And do you also have readers abroad?

OM: Yes, but most of them are readers from Romania, a few thousand, and a few hundred from abroad. The last part is technical and the first part is about events.

RG: It would be nice for you to turn it into an international magazine, such as TechCrunch or The Next Web. It would be awesome for you to do something like this!

OM: What is your opinion on Google Glass and how will the technology progress in the future?

RG: I don't know if Google Glass will be the winner, but it is obvious that what the Americans call wearables will be part of our lives. It may be the glasses from Google or from elsewhere, it may be watches, or shoes, or phones or headbands, I have no idea, but I strongly believe that something of the kind will come.

What do you think of the Romanian startups? There are hardly any successful startups.

Oh, yes, there are! There is Oxygen XML, which is the coolest XML editing tool in the world, an extraordinary business. Avangate didn't seem spectacular, either… Why is it that all the good companies are the ones established by Radu Georgescu and Florin Talpes?! There are so many thousands of extraordinary entrepreneurs. Let's judge a company by what it is and not by the person behind it. Oxygen XML, without my having any connection to it, is selling because it is very good; isn't that spectacular enough? Could they have done an even better job and promoted it? Yes, but that product is an extraordinary one.

What ingredient are the Romanian startups perhaps lacking?

It is being built. Time is what's lacking. Let's find successful examples. People are building, people are failing; and we need to learn from the experience of my failures, of your failures, or those of other people, and start to try more and more. The angel investment infrastructure is being built, the VC infrastructure is being built, there are events; everything is being built and in time it will be there. Think of MavenHut, Ubervu, Softpedia, think of Emi Gal. They are all successful examples.

Is there a recipe for success?

What I am trying to do together with Andrei is to persuade the outsourcing companies, which are the ones capable of doing this, to build their own small products to take over.

A piece of advice for the young people who wish to start a startup?

I do not give advice, I am not in the position to give advice. One cannot give general advice like some wise man.


programming

Deep learning had its first major success in 2006, when Geoffrey Hinton and Ruslan Salakhutdinov published the paper “Reducing the Dimensionality of Data with Neural Networks”, which was the first efficient and fast application of Restricted Boltzmann Machines (or RBMs).

As the name suggests, RBMs are a type of Boltzmann machine, with some constraints. Boltzmann machines were proposed by Geoffrey Hinton and Terry Sejnowski in 1985 and they were the first neural networks that could learn internal representations (models) of the input data and then use this representation to solve different problems (such as completing images with missing parts). They weren't used for a long time because, without any constraints, the learning algorithm for the internal representation was very inefficient.

According to the definition, Boltzmann machines are generative stochastic recurrent neural networks. The stochastic part means that they have a probabilistic element to them and that the neurons that make up the network are not fired deterministically, but with a certain probability, determined by their inputs. The fact that they are generative means that they learn the joint probability of the input data, which can then be used to generate new data, similar to the original.

But there is an alternative way to interpret Boltzmann machines: as energy based graphical models. This means that for each possible input we associate a number, called the energy of the model, and for the combinations that we have in our data we want this energy to be as low as possible, while for other, unlikely data, it should be high.

The graphical model for an RBM with 4 input units and 3 hidden units

The constraint imposed by RBMs is that the neurons must form a bipartite graph, which in practice is done by organizing them into two separate layers, a visible one and a hidden one; the neurons from each layer have connections to the neurons in the other layer and none to neurons in the same layer. In the figure above, you can see that there are no connections between any of the h's, nor any of the v's, only between every v and every h.

The hidden layer of the RBM can be thought of as being made of latent factors that determine the input layer. If, for example, we analyze the grades users give to some movies, the input data will be the grades given by a certain user to the movies, and the hidden layer will correspond to categories of movies. These categories are not predefined; the RBM determines them while building its internal model, grouping the movies in such a way that the total energy is minimized. If the input data are pixels, then the hidden layer can be seen as features of objects that could generate those pixels (such as edges, corners, straight lines and other differentiating traits).

If we regard RBMs as energy based models, we can use the mathematical apparatus of statistical physics to estimate the probability distributions and then to make predictions. In fact, the Boltzmann distribution used to model the atoms in a gas gave these neural networks their name.

The energy of such a model, given the vector v (the input layer), the vector h (the hidden layer), the matrix W (the weights associated with the connections between each neuron from the input layer and the hidden one) and the vectors a and b (which represent the activation thresholds for each neuron in the input layer and the hidden layer, respectively), can be computed using the following formula:
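In the notation above, the standard form of this energy is:

```latex
E(v, h) = -\sum_i a_i v_i - \sum_j b_j h_j - \sum_{i,j} v_i \, w_{ij} \, h_j
        = -a^{\top} v - b^{\top} h - v^{\top} W h
```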

The formula is nothing to be scared of, it’s just a couple of matrix additions and multiplications.

Once we have the energy for a state, its probability is given by:
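For an energy based model, this probability takes the standard Boltzmann form:

```latex
p(v, h) = \frac{e^{-E(v, h)}}{Z}, \qquad Z = \sum_{v', h'} e^{-E(v', h')}
```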

where Z is a normalization factor. And this is where the constraints from the RBM help us.

Because the neurons in the visible layer are not connected to each other, for a given value of the hidden layer the visible neurons are conditionally independent of each other. Using this we can easily get the probability for some input data, given the hidden layer:
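In its standard form, this probability factorizes over the visible units:

```latex
P(v \mid h) = \prod_i P(v_i \mid h)
```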

Restricted Boltzmann Machines

In the last article I presented a short history of deep learning and I listed some of the main techniques that are used. Now I’m going to present the components of a deep learning system.



where $P(v_i = 1 \mid h) = \sigma\left(a_i + \sum_j w_{ij} h_j\right)$ is the activation probability for a single neuron and $\sigma(x) = \frac{1}{1 + e^{-x}}$ is the logistic function.

In a similar way we can define the probability for the hidden layer, having the visible layer fixed.

How does it help us if we know these probabilities?

Let’s presume that we know the correct values for the weights and the thresholds of an RBM and that we want to determine what items are in an image. We set the pixels of the image as the input of the RBM and we calculate the activation probabilities of the hidden layer. We can interpret these probabilities as filters learned by the RBM about the possible objects in the images.

We take the values of those probabilities and we enter them into another RBM as input data. This RBM will also give out some other probabilities for its hidden layer, and these probabilities are also filters for its own inputs. These filters will be of a higher level and more complex. We repeat this a couple of times, we stack the resulting RBMs and, on top of the last one, we add a classification layer (such as logistic regression) and we get ourselves a Deep Belief Network.

Greedy layerwise training of a DBN
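The stacking step can be sketched in a few lines of NumPy; here the weight matrices are random stand-ins for what two trained RBMs would have learned, and the shapes and names are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
X = rng.random((10, 6))            # toy "pixel" data scaled to [0, 1]

# Stand-ins for the weights and hidden biases learned by two stacked RBMs
W1, b1 = rng.normal(size=(6, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)

h1 = sigmoid(X @ W1 + b1)          # activation probabilities of the first hidden layer
h2 = sigmoid(h1 @ W2 + b2)         # those probabilities become the input of the next RBM
```

A classification layer trained on h2 would complete the Deep Belief Network.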

The idea that started the deep learning revolution was this: you can learn, layer by layer, filters that get more and more complex, and at the end you don't work directly with pixels, but with high level features that are much better indicators of what objects are in an image.

The learning of the parameters of an RBM is done using an algorithm called “contrastive divergence”. This starts with an example from the input data, calculates the values for the hidden layer and then uses these values to simulate what input data they would produce. The weights are then adjusted with the difference between the original input data and the “dreamed” input data (via a couple of outer products). This process is repeated for each example of the input data, several times, until either the error is small enough or a predetermined number of iterations has passed.
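As a rough sketch (not the exact implementation used by any particular library), one CD-1 update for a single training example can be written with NumPy as:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, a, b, lr=0.1, rng=None):
    """One contrastive divergence (CD-1) update for a single example.
    v0: visible vector; W: weight matrix; a, b: visible/hidden biases."""
    if rng is None:
        rng = np.random.default_rng(0)
    ph0 = sigmoid(b + v0 @ W)                  # hidden probabilities from the data
    h0 = (rng.random(ph0.shape) < ph0) * 1.0   # sample the hidden layer
    pv1 = sigmoid(a + h0 @ W.T)                # "dreamed" reconstruction of the input
    ph1 = sigmoid(b + pv1 @ W)                 # hidden probabilities from the dream
    # Adjust parameters with the difference between data and dream statistics
    W += lr * (np.outer(v0, ph0) - np.outer(pv1, ph1))
    a += lr * (v0 - pv1)
    b += lr * (ph0 - ph1)
    return W, a, b
```

Looping this over the whole dataset, several times, gives the training procedure described above.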

There are many implementations of RBMs in machine learning libraries. One such library is scikit-learn, a Python library used by companies such as Evernote and Spotify for their note classification and music recommendation engines. The following code shows how easy it is to train an RBM on images that each contain one digit or one letter and then to visualize the learned filters.

from sklearn.neural_network import BernoulliRBM as RBM
import numpy as np
import matplotlib.pyplot as plt
import cPickle

X, y = cPickle.load(open("letters.pkl"))
X = (X - np.min(X, 0)) / (np.max(X, 0) + 0.0001)  # 0-1 scaling

rbm = RBM(n_components=900, learning_rate=0.05,
          batch_size=100, n_iter=50)
print("Init rbm")
rbm.fit(X)

plt.figure(figsize=(10.2, 10))
for i, comp in enumerate(rbm.components_):
    plt.subplot(30, 30, i + 1)
    plt.imshow(comp.reshape((20, 20)), cmap=plt.cm.gray_r,
               interpolation="nearest")
    plt.xticks(())
    plt.yticks(())
plt.suptitle("900 components extracted by RBM", fontsize=16)
plt.show()


Some of the filters learned by the RBM: you can notice filters for the letters B, R, S, for the digits 0, 8, 7 and some others

RBMs are an essential component from which deep learning started and one of the few models that allow us to efficiently learn an internal representation of the problem we want to solve. In the next article, we will see another approach to learning representations, using autoencoders.


Roland [email protected]

Junior Python Developer@ 3 Pillar Global


programming

Multithreading in C++11 standard (II)

In the previous examples we discussed ways to protect data shared between multiple threads. Sometimes it is not enough just to protect shared data; it is also necessary to synchronize the operations executed by different threads. As a rule, one wants a thread to wait until an event occurs or until a condition becomes true. To this end, the C++ Standard Library provides primitives such as condition variables and futures.

In the C++11 standard, condition variables have not one but two implementations: std::condition_variable and std::condition_variable_any. Both can be used by including the header <condition_variable>. To facilitate communication between threads, condition variables are usually associated with a mutex (for std::condition_variable) or with any other mechanism that provides mutual exclusion (for std::condition_variable_any).

The thread waiting for a condition to become true should first lock a mutex using the std::unique_lock primitive, the necessity of which we shall see later. The mutex is atomically unlocked when the thread starts to wait on the condition variable. When a notification is received on the condition variable the thread is waiting for, the thread is woken up and locks the mutex again.

A practical example may be a buffer that is used to transmit data between two threads:

std::mutex mutex;
std::queue<buffer_data> buffer;
std::condition_variable buffer_cond;

void data_preparation_thread()
{
  while(has_data_to_prepare())                  //-- (1)
  {
    buffer_data data = prepare_data();
    std::lock_guard<std::mutex> lock(mutex);    //-- (2)
    buffer.push(data);
    buffer_cond.notify_one();                   //-- (3)
  }
}

void data_processing_thread()
{
  while(true)
  {
    std::unique_lock<std::mutex> lock(mutex);   //-- (4)
    buffer_cond.wait(lock,
      []{ return !buffer.empty(); });           //-- (5)
    buffer_data data = buffer.front();
    buffer.pop();
    lock.unlock();                              //-- (6)
    process(data);
    if(is_last_data_entry(data))
      break;
  }
}

Dumitrița [email protected]

Software engineer@ Arobs


When data is ready for processing (1), the thread preparing the data locks the mutex (2) in order to protect the buffer when it adds the new values. Then it calls the notify_one() method on the buffer_cond condition variable (3) to notify the thread waiting for data (if any) that the buffer contains data that can be processed.

The thread that processes the data from the buffer first locks the mutex, but this time using a std::unique_lock (4). The thread then calls the wait() method on the buffer_cond condition variable, passing it the lock object and a lambda function that expresses the condition the thread waits for. Lambda functions are another feature specific to the C++11 standard, enabling anonymous functions to be part of other expressions. In this case the lambda function []{ return !buffer.empty(); } is written inline in the source code and verifies whether there is data that can be processed in the buffer. The wait() method checks if the condition is true (by calling the lambda function that was passed) and, if so, returns. If the condition is not fulfilled (the lambda function returns false), the wait function unlocks the mutex and blocks the thread. When the condition variable is notified by the call to notify_one() from data_preparation_thread(), the processing thread is unblocked, locks the mutex again and re-checks the condition, leaving the wait() method with the mutex still locked if the condition is fulfilled. If the condition is not met, the thread unlocks the mutex and waits again. This is why std::unique_lock is used: the thread that processes the data must unlock the mutex while waiting and then lock it again, and std::lock_guard does not provide this flexibility. If the mutex remained locked while the waiting thread is blocked, the thread that prepares the data could not lock the mutex in order to insert the new values into the buffer, and the processing thread would never have its condition met.

The flexibility to unlock a std::unique_lock object is not only exploited inside the wait() call; it is also used when the data is ready for processing, but before it is actually processed (6). This is because the buffer is only used to transfer data from one thread to another, and one should not keep the mutex locked during data processing, which could be a time-consuming operation.

Futures

Another synchronization mechanism is a future, i.e. an asynchronous return object (an object that reads the result of a shared state set by another thread), implemented in the C++11 Standard Library through two template classes declared in the header <future>: unique futures (std::future<>) and shared futures (std::shared_future<>), modeled after the std::unique_ptr and std::shared_ptr mechanisms.

For example, suppose we have an operation that performs a very time consuming calculation and the result of the operation is not needed immediately. In this case we can start a new thread to perform the operation in the background, but this implies that the result needs to be transferred back to the method from which the thread was launched, and the std::thread object does not include a mechanism for this. Here comes the template function std::async, also included in the <future> header.

std::async is used to launch an asynchronous operation whose result is not immediately necessary. Instead of waiting for a std::thread object to complete its execution and provide the result of the operation, the std::async function returns a std::future that encapsulates the operation result. When the result is necessary, one can call the get() method on the std::future object and the calling thread is blocked until the future object is ready, meaning it can provide the result of the operation. For example:

#include <future>
#include <iostream>

int long_time_computation();
void do_other_stuff();

int main()
{
  std::future<int> the_result =
    std::async(long_time_computation);
  do_other_stuff();
  std::cout << "The result is " << the_result.get()
            << std::endl;
}

std::async is a high-level utility which provides an asynchronous result and which internally deals with creating an asynchronous provider and preparing the shared state when the operation ends. This can be emulated by a std::packaged_task object (or std::bind and std::promise) and a std::thread, but using std::async is safer and easier.

Packaged tasks

A std::packaged_task<> object connects a future to a function or callable object. When the std::packaged_task<> object is called, it calls in turn the associated function or callable object and sets the future to the ready state, with the value returned by the performed operation as the associated value. This mechanism can be used, for example, when each operation should be executed by a separate thread or run sequentially on a background thread. If a large operation can be divided into several sub-operations, each of these can be mapped into a std::packaged_task<> instance which is handed to an operations manager. Thus the details of the operations are abstracted away and the manager deals only with std::packaged_task<> instances, instead of individual functions. For example:

#include <cmath>
#include <functional>
#include <future>
#include <iostream>

int execute(int x, int y) { return std::pow(x, y); }

int main()
{
  std::packaged_task<int()> task(
    std::bind(execute, 2, 10));
  std::future<int> result = task.get_future();  //-- (1)
  task();                                       //-- (2)
  std::cout << "task_bind:\t" << result.get()
            << '\n';                            //-- (3)
}

When the std::packaged_task object is called (2), the execute function associated with it is called with the arguments 2 and 10, and the result of the operation is asynchronously saved in the std::future object (1). Thus, it is possible to encapsulate an operation in a std::packaged_task and obtain the std::future object which will contain the result of the operation before the std::packaged_task object is even called. When the result of the operation is necessary, it can be obtained once the std::future object is in the ready state (3).

Promises

As we could see in the Futures section, data can be sent to a thread as parameters of the thread function, and the result can be obtained through arguments passed by reference or by using the std::async function.

Another mechanism for transmitting the data resulting from operations performed by different threads is the std::promise/std::future pair. A std::promise<T> object provides a mechanism to set a value of type T, which can then be read by a std::future<T> object. While the std::future object allows access to the result data (using the get() method), the promise object is responsible for providing the data (using one of the set_...() methods). For example:

#include <future>
#include <iostream>
#include <string>
#include <thread>
#include <utility>

void execute(std::promise<std::string>& promise)
{
  std::string str("processed data");
  promise.set_value(std::move(str));               //-- (3)
}

int main()
{
  std::promise<std::string> promise;               //-- (1)
  std::thread thread(execute, std::ref(promise));  //-- (2)
  std::future<std::string> result(
    promise.get_future());                         //-- (4)
  std::cout << "result: " << result.get()
            << std::endl;                          //-- (5)
  thread.join();
}

After including the header <future>, where the std::promise objects are declared, a promise object specialized for the value it must hold, std::string, is declared (1). The std::promise object internally creates a shared state, which is used to save the value of type std::string and which is used by the std::future object to obtain this value as the result of the thread's operation.

This promise is then passed as a parameter to the function of a separate thread (2). The moment the value of the promise object is set inside the thread (3), the shared state becomes ready. In order to get the value set in the execute function, it is necessary to use a std::future object that shares the same state with the std::promise object (4). Once the future object is created, its value can be obtained by calling the get() method (5). It is important to note that the current thread (the main thread) remains blocked until the shared state is ready (when the set_value method is executed (3)), meaning the data is available.

The usage of objects such as std::promise is not exclusive to multithreaded programming. They can also be used in applications with a single thread, in order to keep a value or an exception to be processed later through a std::future.

Atomics

In addition to the mutual exclusion mechanisms above, the C++11 Standard also introduces atomic types. An atomic type std::atomic<T> can be used with any trivially copyable type T and ensures that any operation involving the std::atomic<T> object will be atomic, that is, it will be executed in its entirety or not at all.

One of the advantages of using atomic types for mutual exclusion is performance, because in this case a lock-free technique is used, which is much more economical than using a mutex, which can be relatively expensive in terms of resources and latency due to mutual exclusion.

The main operations provided by the std::atomic class are the store and load functions, which set and return the atomic value stored in the std::atomic object. Another method specific to these objects is the exchange function, which sets a new value for the atomic object while returning the previously set value. There are also two more methods, compare_exchange_weak and compare_exchange_strong, performing atomic changes but only if the current value is equal to the expected value. These last two functions can be used to implement lock-free algorithms. For example:

#include <atomic>

std::atomic<int> counter{0};  //-- (1)

void increment()
{
  ++counter;  //-- (2)
}

int query()
{
  return counter.load();
}

In this example the <atomic> header is included first, where the template class std::atomic<> is declared. Then an atomic counter object is declared (1). Basically one can use any trivial, integral or pointer type as a parameter for the template. Note, however, the initialization of the std::atomic<int> object: it must always be initialized explicitly, because the default constructor does not initialize it completely. Unlike the example presented in the Mutex section, in this case the counter variable can be incremented directly, without the need of a mutex (2), because both the member functions of the std::atomic object and trivial operations such as assignments, automatic conversions, increment and decrement are guaranteed to run atomically.

It is advisable to use atomic types when one wants to use atomic operations, especially on integral types.

Conclusions

In the previous sections we have outlined how threads in the C++11 Standard can be used, covering both thread management and the mechanisms used to synchronize data and operations: mutexes, condition variables, futures, promises, packaged tasks and atomic types. As can be seen, using threads from the C++ Standard Library is not difficult, and it relies on basically the same mechanisms as the threads from the Boost library. However, the complexity increases with the complexity of the code design, which must behave as expected. For a better grasp of the topics above and to expand your knowledge of the new concepts available in the C++11 Standard, I highly recommend the book by Anthony Williams, C++ Concurrency in Action, and the latest edition of the classic The C++ Standard Library, by Nicolai Josuttis. You will find there not only a breakdown of the topics presented above, but also other new features specific to the C++11 Standard, including techniques for using them in order to perform multithreaded programming at an advanced level.


programming

In last issue's article, we talked about how we can measure software metrics by using Sonar. This is a tool that can be very useful not only to the technical lead but also to the rest of the team. Any team member can very easily check the value of different metrics on Sonar's web interface.

If we use Visual Studio 2013 as a development environment, we will find that we can calculate some of the metrics right from Visual Studio, without having to use other applications or tools. In this article we will see which metrics we can calculate by using directly what Visual Studio provides us with.

Why should we run such a tool?

Such a tool can help us not only detect the possible problems that our application might have, but it can also reveal the quality of the code we have written. As we will see further on, all the rules and recommendations that Microsoft has in relation to the code can be found in this tool.

Some of the defects discovered by such a tool are sometimes difficult to find by using unit tests. That is why a tool of this kind can reinforce our trust that the application we are writing is of good quality.

What metrics can we obtain?

Starting with Visual Studio 2013, all the Visual Studio versions (except for Visual Studio Test Professional) offer us the possibility to calculate metrics directly in it. Right from the start we should know that the number of metrics we can calculate by using Visual Studio is limited. Unfortunately, we do not have the possibility to calculate all the metrics available in Sonar, but there are a few extensions for Visual Studio which help us calculate other metrics too, beyond those built into Visual Studio.

Visual Studio allows us to calculate some of the metrics by using Static Code Analysis. It analyses the code, trying to give the developers data on the project and the code they have written, even before pushing it to source control. Based on this analysis, we can identify possible problems related to:

• Design
• Performance
• Security
• Globalization
• Interoperability
• Duplicated code
• Code that is not being used

And many other problems. It all depends on the developer's ability to interpret these metrics. A rather interesting thing about this analyzer is the fact that all the rules and recommendations that Microsoft has in relation to the code, the style of the code, and the manner in which we should use different classes and methods can be found within this analyzer. All these rules are grouped into different categories.

This way it can be extremely easy to identify areas of our application that do not use an API as they should. In case you wish to create a specific rule, you will need Visual Studio 2013 Premium or Ultimate. These two versions of Visual Studio allow us to add new rules that are specific to the project or the company we are working for. Once these rules added, the code analyzer will check whether they are obeyed, and if they are not obeyed, it will be able to warn us. Unfortunately, at the moment we can only analyze code written in C#, F#, VB and C/C++. I would have liked it very much to be able to analyze code written in JavaScript in order to be able to see what its quality is.

Some of our readers might say that this could also be done in older versions of Visual Studio. This is true. What the new version (2013) brings is the possibility to analyze the code without having to run it. This could also be done, more or less, in Visual Studio 2012.

How do we run this tool?

These tools can be run in different ways: manually, from the "Analyze" menu, as well as automatically. In order to run them automatically, we need to select the option "Enable Code Analysis on Build" for each project that we wish to analyze.

Another quite interesting option is to activate from TFS a policy through which, before being able to check-in on TFS, the developer has to run this analyzer. This option can be activated from the “Check-in Policy” area, where we have to add a new “Code Analysis” type rule.

We must be aware that enforcing such a rule does not guarantee that the developer will also read the report that is being generated and take it into account. All that it guarantees is that this report is generated. That is why each team should be educated to read these reports and analyze them when we decide to use such tools.

The moment we enforce this rule, we have the possibility to select which rules must not be breached when there is a check-in on TFS. For instance, one will not be able to perform a check-in on TFS for code that uses an instance of an object implementing IDisposable without also calling the Dispose method.

When a developer attempts a check-in for code that does not obey one of the rules, he will get an error which won't allow him to push the modification to TFS without solving the problem first.

In addition, we have the possibility to also run this tool as part of the build. In order to do this, we have to activate this option from Build Definition.

Metrics in Visual Studio 2013


What does the Code Analysis tell us?

The result of running this tool is a set of warnings. The most important information that a warning contains is:

• Title: the type of warning
• Description: a short description of the warning
• Category: the category it belongs to
• Action: what we can do in order to solve the problem

Each warning allows us to navigate exactly to the code line where the problem is. Not only that, but for each and every warning there is a link to MSDN which explains in detail the cause of the warning and what we can do to eliminate it.

How can we create custom rules?

As I have already said, this can only be done with Visual Studio Premium or Ultimate. In order to do this, we have to go to "New > File > General > Installed Templates > Code Analysis Rule Set".

Once we have a blank rule, we can specify the different properties that we want it to have.

Besides this tool, in Visual Studio there are also two other extremely interesting tools available.

Code Clones

This tool allows us to automatically detect code that is duplicated. The most interesting thing about it is that there are several types of duplicated (cloned) code which it can detect:

• Exact match: when the code is exactly the same, with no difference
• Strong match: the code is similar, but not 100% (for example, it differs in the value of a string or in the action that is executed in a given case)
• Medium match: the code is pretty similar, but there are a few differences
• Weak match: the code resembles a little; the chances that this code is duplicated are the smallest

Besides this information, we can also find out, for each duplicated code fragment, in how many locations it is duplicated, and we can navigate to the code line where it appears. Another metric which I like quite a lot is the total number of duplicated (cloned) lines. Through this metric, we can quite easily realize how many code lines we could get rid of.

Code Metrics

By means of this tool, we can analyze each project in our solution and extract different metrics. Being a tool integrated with Visual Studio, we can navigate in each project and see the value of each metric from the level of the project down to the level of namespace, class and method.

There are 5 metrics that can be analyzed by using Code Metrics:

• Lines of Code: this metric tells us the number of code lines that we have at the level of method, class, namespace, project. It is good to know that, when at project level, this metric indicates to us the total number of code lines that the project has.

• Class Coupling: we could say that this metric indicates how many classes a class is using – the smaller the value, the better.

• Depth of Inheritance: it indicates the inheritance level of a class – just like in the case of class coupling, the smaller the value, the better.

• Cyclomatic Complexity: indicates the complexity level of a class or of a project. We must be careful, because if we implement a complex algorithm, then we will always have a rather high value for this metric.

• Maintainability Index: a value between 0 and 100 which indicates how easily the respective code can be maintained - the higher the value, the better. A value above 20 means we are in a good area; anything between 10 and 20 is of a medium level - not serious, but we have to be careful; any value below 10 shows that we have big problems. This metric is calculated based on other metrics.

Conclusion

In this article we have discovered that Visual Studio provides us with different methods to assess the quality of the code. Some of these tools are available in the normal versions of Visual Studio, and others only in the Premium and Ultimate versions. Compared to Sonar, Visual Studio does not allow us to share these metrics through a portal. Instead, it allows us to export them to Excel in order to send them to the team. The Visual Studio tools are a good start for any team or developer who wishes to see the quality of the code written by himself or by the team.

Radu [email protected]

Senior Software Engineer@iQuest


The days of counting cycles for assembly instructions are long gone (unless you work on embedded systems) - there are just too many additional factors to consider (the operating system - mainly the task scheduler -, other running processes, the JIT, the GC, etc.). The remaining alternative is empirical (hands-on) testing.

Use percentiles

So we whip out JMeter, configure a load test, take the mean (average) value ± 3 × standard deviation and proudly declare that 99.73% of the users will experience latency in this interval. We are especially proud because (a) we considered a realistic set of calls (URLs, if we are testing a website) and (b) we allowed for JIT warm-up.

But we are still very wrong! (which can be sad if our company writes SLAs based on our numbers - we can bankrupt the company single-handedly!)

Let’s see where the problem is and how we can fix it before we cause damage. Consider the dataset depicted below (you can get the actual values here to do your own calculations).

For simplicity there are exactly 100 values used in this example. Let's say that they represent the latency of fetching a particular URL. You can immediately tell that the values can be grouped in three distinct categories: very small (perhaps the data was already in the cache?), medium (this is what most users will see) and poor (probably there are some corner-cases). This is typical for systems of medium-to-large complexity (i.e. "real life" systems) composed of many moving parts, and it is called a multimodal distribution. More on this shortly.

If we quickly drop these values into LibreOffice Calc and do the number crunching, we'll come to the conclusion that the average (mean) of the values is 40 and, according to the six sigma rule, 99.73% of the users should experience latencies of less than 137. If you look at the chart carefully, you'll see that the average (marked with red) is slightly left of the middle. You can also do a simple calculation (because there are exactly 100 values represented) and see that the maximum value in the 99th percentile is 148, not 137. Now this might not seem like a big difference, but it can be the difference between profit and bankruptcy (if you've written an SLA based on this value, for example).

Where did we go wrong? Let’s look again carefully at the three sigma rule (emphasis added): nearly all values lie within three standard deviations of the mean in a normal distribution.

Our problem is that we don’t have a normal distribution. We probably have a multimodal distribution (as mentioned earlier), but to be safe we should use ways of interpreting the results which are independent of the nature of the distribution.

From this example we can derive a couple of recommendations:

1. Make sure that your test framework / load generator / benchmark isn't the bottleneck - run it against a "null endpoint" (one which doesn't do anything) and ensure that you can get an order of magnitude better numbers.

2. Take into account things like JITing (warm-up periods) and GC if you're testing a JVM based system (or other systems which are based on the same principles - .NET, luajit, etc.).

3. Use percentiles. Saying things like "the median (50th percentile) response time of our system is...", "the 99.99th percentile latency is...", "the maximum (100th percentile) latency is..." is ok.

4. Don't calculate the average (mean). Don't use standard deviation. In fact if you see that value in a test report you can assume that the people who put together the report (a) don't know what they're talking about or (b) are intentionally trying to mislead you (I would bet on the first, but that's just my optimism speaking).

Look out for coordinated omission

Coordinated omission (a phrase coined by Gil Tene of Azul fame) is a problem which can occur if the test loop looks something like:

start:
    t = time()
    do_request()
    record_time(time() - t)
    wait_until_next_second()
    jump start

Latency is defined as the time interval between stimulus and response, and it is a value of importance in many computer systems (financial systems, games, websites, etc.). Hence we - as computer engineers - want to specify some upper bounds / worst case scenarios for the systems we build. How can we do this?

How (NOT TO) measure latency

programming


That is, we’re trying to do one request every second (perhaps every 100ms would be more realistic, but the point stands). Many test systems (including JMeter and YCSB) have inner loops like this.

We run the test and (learning from the previous discussion) report: 85% of the requests will be served in under 0.5 seconds if there is 1 request per second. And we can still be wrong! Let us look at the diagram below to see why:

On the first line we have our test run (the horizontal axis being time). Let's say that between seconds 3 and 6 the system (and hence all requests to it) is blocked (maybe we have a long GC pause). If you calculate the 85th percentile, you'll get 0.5 (hence the claim in the previous paragraph). However, you can see 10 independent clients below, each doing a request in a different second (so our criterion of one request per second is fulfilled). But if we crunch the numbers, we'll see that the actual 85th percentile in this case is 1.5 (three times worse than the original calculation).

Where did we go wrong? The problem is that the test loop and the system under test worked together ("coordinated" - hence the name) to hide (omit) the additional requests which happen during the time the server is blocked. This leads to underestimating the delays (as shown in the example).

Make sure every request takes less than the sampling interval, use a better benchmarking tool (I don't know of any which can correct this), or post-process the data with Gil's HdrHistogram library, which contains built-in facilities to account for coordinated omission.

This post is part of the Java Advent Calendar and is licensed under the Creative Commons 3.0 Attribution license. If you like it, please spread the word by sharing, tweeting, FB, G+ and so on!

Attila-Mihaly [email protected]

Code Wrangler @ UdacityTrainer @ Tora Trading


management

Thinking in Kanban

The first Kanban system was created more than 60 years ago at Toyota, in their effort to excel on the automobile market. At that moment Toyota couldn't compete on technology, marketplace or volume of cars produced, so they chose to compete by redefining their method of organizing and scheduling the process of production. The Toyota Production System laid the foundation of Kanban, with the following directions:

• Reducing costs by eliminating waste.
• Creating a work environment that responds to change quickly.
• Facilitating the methods of achieving and assuring quality control.
• Creating a work environment based on mutual trust and support, where employees can reach their maximum potential.

Even if, along the years, IT has reshaped Kanban by assigning new values to it and completing it with metrics and rules, the overall effect still comes down to the ideas expressed back then.

Kanban Principles

Kanban is based on two fundamental principles:
1. Visualize the workflow
2. Limit work in progress

The visual effect is obtained through a Kanban board, where all the tasks are mapped depending on their current state. The states of the board are defined according to the complexity of the project and the number of existing steps in the process. Tasks are written on colored cards (or sticky notes) and this way they are understood and processed more quickly and easily. As a result, the current state of the project becomes visible at any time for all team members, and this global view of the state of the project facilitates the rapid discovery of problems or bottlenecks.

The basic structure of a Kanban board comes down to three states (columns): To-Do, Ongoing (or In Progress), Done. However, states can be defined according to project specific needs. A classic example of a Kanban board for software development can be found in the attached figure.

The WIP (Work-In-Progress) limit ensures focused and thus more effective work. The columns of type "In progress" are each assigned such a limit, and the number of ongoing tasks shouldn't exceed the specified WIP limit. By reducing multitasking, the time needed for switching between tasks is eliminated. By performing tasks sequentially, results appear faster and the overall time consumed becomes shorter.

Metrics in Kanban

In order to estimate as accurately as possible the dimensions of time and workload, we use Kanban metrics that help us define them. Like the basic principles, the calculation of the metrics is also simple.

Lead Time is the time measured from the moment a task is introduced into the system until it is delivered. It is important to remember that Lead-time measures time, not the effort needed to execute a task.

Lead-time is the metric most relevant to the client and he will evaluate the team’s work-performance according to it.

Cycle Time: the time measured from the moment work begins on a task until it is delivered. Compared to Lead time, this is a rather mechanical measure of the process capabilities and reflects the efficiency of the team.

To increase a team's performance, Lead time and Cycle time should be reduced.

Little's Law relates these quantities:

    Cycle Time = Work in Progress / Throughput

To improve Cycle time there are two possible options:

• reduce the number of tasks in progress

• improve the completion rate (throughput) of tasks
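The two metrics and Little's Law can be sketched with a couple of hypothetical task records; the timestamps and the WIP and throughput figures below are invented for the example.

```python
from datetime import datetime

# Hypothetical task records: when the task entered the system,
# when work started on it, and when it was delivered.
tasks = [
    {"created": datetime(2014, 2, 1), "started": datetime(2014, 2, 3), "done": datetime(2014, 2, 6)},
    {"created": datetime(2014, 2, 2), "started": datetime(2014, 2, 5), "done": datetime(2014, 2, 9)},
]

def lead_time_days(task):
    # Lead time: from entering the system until delivery.
    return (task["done"] - task["created"]).days

def cycle_time_days(task):
    # Cycle time: from the start of actual work until delivery.
    return (task["done"] - task["started"]).days

avg_lead = sum(map(lead_time_days, tasks)) / len(tasks)    # (5 + 7) / 2 = 6.0
avg_cycle = sum(map(cycle_time_days, tasks)) / len(tasks)  # (3 + 4) / 2 = 3.5

# Little's Law, in its Kanban form:
#   average Cycle Time = average Work in Progress / average Throughput
wip = 4            # tasks in progress, on average (assumed)
throughput = 2.0   # tasks completed per day (assumed)
predicted_cycle = wip / throughput  # 2.0 days
```

The law makes the two levers explicit: at constant throughput, halving WIP halves the predicted cycle time, which is why limiting work in progress is a principle and not just a convention.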

By reducing Lead and Cycle time metrics, the development team can assure on-time delivery of the product.

Visual card - this would be the exact meaning of the Japanese word "Kanban", a term widely used nowadays in the world of IT. The meaning we recognize today refers to the software development methodology famous for its simplicity as well as for its efficiency.

www.todaysoftmag.com | no. 20/February, 2014

TODAY SOFTWARE MAGAZINE

Flexibility in Kanban

A Kanban board is completely configurable according to the purpose it serves, a quality that enables the methodology to be used in a variety of areas. With roots in production and manufacturing, it fits naturally into any non-IT business process.

A Kanban board can be configured according to the domain and according to the stages needed to get to the final product/service.

Apart from IT, here are some areas where a Kanban system can easily be integrated:

• Marketing and PR
• Human Resources
• Logistics and Supply Chain
• Financial
• Legal

The short and long term benefits are similar to those mentioned in the IT field: better visibility of the workflow, increased productivity and improved team collaboration.

Types of Kanban

Physical board. Online board

Initially, the notion of a Kanban board was quite simple: a board or an improvised space on the wall where cards or sticky-notes with tasks written on them were pinned. The board was in the room where the team members worked and represented their focal point. The concept of having a physical board is still very popular and considered an excellent opportunity to improve collaboration and communication among team members.

However, the increased interest in the use of the Kanban methodology has inspired a number of programs and online tools offering functions similar to those of a physical board. Moreover, they have a number of additional possibilities and advantages: easy configuration, archiving tasks, editing, classification, timers, remote access, collaboration between teams split across several locations, etc.

Time-driven & Event-driven Kanban

I have talked before about flexibility and about the way we can structure Kanban according to the specific needs of the project. Over the years, a few general structures have proved fundamental in the use of the methodology.

Time-driven Kanban Board: used for time-based planning of activities.

Event-driven Kanban Board: useful when an external intervention (e.g. approval) is needed to continue the execution of tasks in the process.

Personal Kanban

In 2011, Jim Benson first introduced the idea that Kanban is perfect for organizing personal time, tasks and activities. The concept became increasingly popular and Personal Kanban users say that the method really works.

In these cases we encounter boards with a simplified process and many visual effects. Ultimately, the purpose of Personal Kanban is to improve personal productivity and facilitate the achievement of long-term goals.

Scrumban. PomodoroBan

The suggestive names already indicate combinations of Kanban with other methods and techniques.

Scrumban is the combination of Scrum and Kanban. Briefly, it means applying the principles and rules of the two methodologies according to the preference of the team: "making the most of both".

PomodoroBan combines the Pomodoro technique and Kanban. According to the Pomodoro technique, efficiency can be achieved by alternating the following two cycles: 25 minutes of focused and uninterrupted work, then a 5-minute break. PomodoroBan keeps and applies the principles of Kanban but focuses on this additional efficiency.
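The 25/5 cadence is simple enough to sketch as a small scheduling helper; the function name and the two-hour time budget below are invented for the example, and no real timer is involved.

```python
WORK_MIN, BREAK_MIN = 25, 5  # the classic Pomodoro cadence, in minutes

def pomodoro_schedule(total_minutes):
    """Split an available time budget into alternating work/break slots."""
    slots, elapsed = [], 0
    while elapsed + WORK_MIN <= total_minutes:
        slots.append(("work", WORK_MIN))
        elapsed += WORK_MIN
        if elapsed + BREAK_MIN <= total_minutes:
            slots.append(("break", BREAK_MIN))
            elapsed += BREAK_MIN
    return slots

slots = pomodoro_schedule(120)  # a two-hour block
work_minutes = sum(minutes for kind, minutes in slots if kind == "work")  # 100
```

On a PomodoroBan board, each "work" slot would correspond to uninterrupted time spent on the single card currently in progress.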

Conclusions

Whether we apply Kanban in IT, in related domains or even in personal life, it seems that "less is more" holds true every time. Kanban is distinguished by its ease of application to any process, the simplicity of its basic principles and the rapid improvements it brings to quality and work processes.

References
- http://www.kanbanblog.com
- http://kanbantool.com
- Jim Benson: Personal Kanban (2011)

Püsök [email protected]

Functional Architect@ Evoline


Today we know for a fact that, before starting a new project, no matter the industry, there is a need for a plan. In most fields, the business plan is the most widely used. More recently, the term POC appeared in the software industry.

According to Wikipedia, a POC (proof of concept) is defined as the realization of a method or idea in order to demonstrate its feasibility. In some cases it takes the form of a prototype which provides information about the future of the project, identifies needs, outlines the main characteristics and presents risks.

The POC concept quickly gained trust in the engineering and software industries and is appreciated by both client and contractor, as it allows the stakeholders to roughly estimate the effort, what needs to be done and what resources are needed. A POC involves a mix of strategies covering characteristics, price, market, branding and business model.

As reported by Marius Ghenea in an interview during Business Days, the businesses he invests his money in must be validated in the market: "It is essential to have a proof-of-concept, which means a certain validation that can be done in the beginning, before the product or service is put on the market. This validation can be made with focus groups, beta-testers, pilot projects, limited commercial tests or other market tests. If the market does not validate the business concept, we do not have good premises for the future business, even if the business idea seems spectacular and unique."

In a software project there is a frequent need to demonstrate the functionality before starting the development itself. Moreover, it is no surprise that more and more investors and business angels prefer the idea of a POC to a classic business plan.

Wayne Sutton, blogger at The Wall Street Journal, claims the same in his article "Don't Need No Stinking Business Plan". His argument is that starting a new business is much easier than 15 years ago; he compares the business plan with the waterfall method of software development, which is quite obsolete.

Everything must be fast, "lean", learning all the time with the clients you serve. A true entrepreneur should always involve his clients and make use of his everyday-life experience as much as possible. The author refers exclusively to start-ups in the IT industry; classic businesses are put aside.

Since the business environment in Romania is still developing and we are moving forward in baby steps, the business plan remains an important point of reference for young entrepreneurs.

This being said, I would mention some important documents for a tech start-up:

• Use Cases - who the customers are and how they use the product/service

• Sales Plan - what, how, where and who will sell your product/service

• Human Resources - ensuring business continuity even if people leave the firm

• Cash Flow - how much money is needed and when

It's a pleasure to watch, as an observer, how businesses grow. Still, only a few think about the roadmap to success, where it all began and which first steps led to it.

New business development analysis

management


Another issue worth mentioning is something I noticed about young entrepreneurs. At first, it's hard for them to distinguish between the core business (main activity) and the additional or support activities they need to perform.

In a start-up it is hard to determine the main functions a system must satisfy versus the additional ones. However, it is vital for the success of the business to make a list of all the possible features of the future system, then prioritize and divide them into primary and secondary features.

All features encapsulated as use cases help the entrepreneur to stick to his initial plan.

I will continue speaking about use cases, as I consider them relevant to tech start-ups.

Use cases are a way of using a system to achieve a specific goal for a specific user. For example, a user logged on Amazon wants to be able to pay with a credit card whenever he buys something. This can be seen as a goal that needs to be reached: As a logged-in user on Amazon, I want to pay with my credit card so that I can buy whatever I want.

The sum of these goals makes up a set of use cases that shows the ways in which a system can be used and also the business value for the customers.

The book "Use-Case 2.0: The Guide to Succeeding with Use Cases", written by Ivar Jacobson, Ian Spence and Kurt Bittner, presents use-case-driven development in a very accessible and practical way.

Use cases can be used in the development of new businesses, in which case the whole business is associated with the system. The system implements the requirements and is the subject of the use case model. The quality and level of completion of the system are verified by a set of tests. The tests are designed to verify the implementation of the slices, whether the use case was a success or not.

Going into further detail about how to use the use cases, I think everyone is already familiar with the concept of user stories. These "stories" are the link between stakeholders, use cases and parts of use cases. This way of communicating the requirements for the new system is widely used because it helps identify the basic parts that have to be implemented for a basic functional system. When describing a business idea, the best way to tell others what it's about begins with "I want to make an application which allows users to order a taxi from their smartphone, an application that can be installed on Android and iOS".

I conclude my short look into new business development analysis by mentioning that, without a clear idea and a structured plan, the odds of having a successful project are limited.

Whether you start from a classic business plan, make use of a POC or arrive at a prioritized list of use cases for the new system, every step must be planned carefully.

Every entrepreneur has to act responsibly so as to avoid all known risks.

References
1. http://blogs.wsj.com/accelerators/2012/11/29/embrace-the-executive-summary/
2. Use-Case 2.0: The Guide to Succeeding with Use Cases, Ivar Jacobson, Ian Spence, Kurt Bittner
3. http://www.avocatnet.ro/content/articles/id_30930/In/ce/investesc/antreprenorii/care/au/adoptat/o/cariera/de/business/angel/in/Romania.html#ixzz2pwAZCe5

Ioana [email protected]

Project Manager@ Ogradamea


Most of the time, you've got only 60 seconds to talk about what you do best in your business or in your job, in front of your partners or co-workers.

A pitch is a speech of, generally, 30 seconds to 3 minutes, delivered in an apparently spontaneous manner, which allows a person to promote his/her competencies and services, as well as the added value he/she brings through them.

Although the pitching concept has started to be promoted only recently in the business environment, e.g. in the startup ecosystem, as we can see it happening in the USA, in Europe and even in Cluj, Romania, over the last few years, it describes a reality much older than that. Throughout all times, since prehistory, the leaders of peregrine peoples conquered new lands and took ownership of them through short proclaiming speeches. Actually, the very simple decision of choosing the mother tongue for daily interactions over the newly conquered lands was the basis of today's business pitches. The peace messengers and the religious missionaries were also predecessors of today's pitchers, through the messages they brought to new peoples, spreading the word or even persuading through it.

Why would anyone deliver a pitch? Well, this ability is inborn; they say that the most gifted salesmen are... children. It would be very difficult for us to give up this instinctive tool of attaining our objectives, which is the pitch, be it in personal or professional environments. From another point of view, we have the need to socialize and get connected, and for this we deeply need persuasion skills, just like we need money. On top of everything else, beyond any other reason for pitching, it's a conversation starter. "A pitch is not used only to convince the other to adopt your ideas, but to offer something attractive enough that it determines the start of a conversation." (Daniel H. Pink, To Sell is Human)

If you take a look around you, at the social circles you attend, or even at your past experiences, personal or professional, you will surely notice that in numerous contexts you have already used a few introductory words with an explanatory, descriptive or motivating role. What I am trying to say is that we pitch in every area and at any moment of our lives, be it for earning more pocket money in childhood, for a salary raise, or to win an angel investor or an investment for our startup business.

Now that we are aligned on why one would use pitching in daily life, the next step is to go through the structure of a pitch and learn how to create one yourself. Firstly, a pitch should not have more than four high-level objectives, thus a maximum of four long phrases.

To start with, it's recommended to say your name and a key role or characteristic that you have in your business, project or team. The name is unique and it offers identity and authenticity to your role, profession or to the business you are developing. The key role is also critical, because it lends authority to you and your presentation and it will make people believe in you and in what you have to say next. Whichever characteristic you choose to present as the key descriptor at the beginning of the speech, be it CEO of your own company, expert developer for 7 years, or a great lover of medieval literature, it needs to be in line with your project and your area of expertise, and it will determine the audience to fully listen to you.

Afterwards, in the second part of your pitch, or its second phrase, you will have to say what it is that you actually do in your project and how you bring added value through it. There is a very good example from Phil Libin, which I would recommend to you too: "[I am Phil Libin, the CEO of Evernote.] Evernote is your external brain." Phil creates an excellent way of delivering in a pitch the crisp idea of added value through a very powerful metaphor. The purely advertising objective of any pitch can be found here, in its second part, and also at the end, in the call to action. Therefore, the preparation and the creative effort on your side should be mostly focused on these two critical parts of the speech.

1. Who I am and a key characteristic.
2. What I do and how I bring added value through my product.
3. How I differentiate myself from the competition.
4. What my objectives are on short or medium term. Why do I deliver this pitch? Call to action.

Business pitching or how to sell in 4 simple steps

Has it ever happened to you to attend a great event or dinner party with great people you wanted to talk to, but you had no clue how to break the ice? Or you just wanted to exchange some ideas and connect with some of them, but it seemed like the words weren't there to make you feel at ease?

management


Furthermore, in the third part of the speech, you need to say how you are different from your competition. There are high chances that what you offer on the market through your product or service, or, as a professional, through your competencies, is already being offered by someone else or by some other organization. Therefore, you need to highlight in a subtle manner, shortly, how you are different. Depending on the duration of your speech, this part can be more concise than the others, in order to adapt to the rhythm and conciseness of your presentation. Many times it is not even needed to name the organizations, the startups or the experts you share your market with; you only need to express powerfully what it is that you do best: we are not only offering an online "cloud" data storage platform, it's also "your external brain".

Finally, I recommend you to name the objectives you have on short term (no more than 1-2 of them), which will actually give the flavour to your pitch, directing the audience into its specifics and into the reason for which you have decided to interact with them. For example, at the startup competition you will be attending, there will be pre-selection and selection stages that will determine you to pitch your idea over and over again. The goals of the repeated pitches might differ: to get new members in your team, depending on their abilities and expertise (online marketing, programming, copywriting etc.), to get new investments for your business, to promote the team or the business idea, or to get your team included in an accelerator or business incubator.

Actually, through the conclusion of your pitch, you will show others how they can help you, what you want to get from them and also what stage you are at with your business. Among the unwritten rules of persuasive speaking, which you can excellently practice in any Toastmasters Club, I can recommend the use of powerful words, like verbs, that call to action: "This is who we are, come join our team!", "If you are an iOS developer and you want to become famous, come join us!", "Let's make the world a better place through an online platform meant to improve the quality of your relationships, so please support our crowdfunding campaign."

If I have already convinced you that absolutely everyone needs pitching, if you now see the reason for which you would need it yourself, if you realized that you have already used a pitch, formally or informally, several times without knowing that this is what it's called, the only step left is to expect and prepare for pitching opportunities. This type of business speech should be spontaneous, but practiced nevertheless. If you are looking for partners to implement your business idea, or for a new job and you are getting ready for interviews, it's definitely worth investing one hour of your time to lay the basis of a great customized pitch for yourself.

You can write it down on paper, record yourself with a video camera or a smartphone, or ask a friend to watch you, in order to identify the best words, the tone and the content that best promote you. Afterwards, you only need to look for advertising opportunities, networking events, job fairs or celebrations in your life or job, enjoy the atmosphere and answer a very simple question: "What is it that you do?"

Every one of us may need several pitches, one for each project we are involved in, for each role we play. This way, the speech can be easily adapted to the audience through a change in perspective and language. Technically speaking, what changes is the key role, the name of the activity and added value, the way you differentiate yourself and the call to action. If you speak of the same business or project, instead, and the only thing different is the stage you are in at the moment, then you will only change the call to action. At Startup Weekend you might be looking for teammates, while at Le Web you might already be looking for angel investors or new investment rounds, after having launched the minimum viable product of your business.

It was a real challenge for me to create my first pitch. In the beginning I did not feel like myself, writing down and rehearsing, just like for a school contest, who I am and what I want from others. It was a psychological and motivational challenge. Nevertheless, the benefits of such an effort have been higher than the costs. So, I learned to sell my ideas with enthusiasm and determination. Set your own goal to answer in an excellent manner, one that would make you proud, the question "What is it that you do?" within the following 3 months, and pitching will become your second nature. It will become a habit that will bring you money, friends, results, partners or, at least, plenty of exciting conversations.

References
- http://www.elevatorpitchexamples.com/
- www.businessweek.com
- http://mindyourpitch.com/
- www.forbes.com

Ana-Loredana [email protected]

Training Manager@ Genpact


others

“Tell me again, dad, what do you do at work?”

I muck about, was the thought that crossed Gogu’s mind, but he refrained from saying it aloud. He was annoyed with himself for not having managed, so far, to offer an answer the child could understand, nor did he have any idea what explanation to give. Oh, brother… It isn’t easy to manage a project, but it looks like it’s even harder to explain how you do it. He grumbled:

“I am in charge of projects; that’s what I do. But what are you about again?”

“Our teacher told us to invite our parents to tell us what they do at work, to give us an example. And she also told us to ask them to come dressed in their uniform… if they have one”, he quickly added upon seeing Gogu’s grimace. “Mircea’s dad is a firefighter, so he can come in his uniform, and Maria’s mom is a doctor and she has an overall, and Danut’s dad is a cop…”

“I see…” said Gogu, “and what would you like me to wear?”

The child looked at him with his big eyes; one could see he was thinking hard about whether he had ever seen his dad in some sort of special attire. He probably didn’t find anything in his memory, ‘cause he said nothing more; he just kept staring at his father, waiting for a verdict.

And so, the plot thickens, thought Gogu. Instead of explaining to a single child, now I have to explain to an entire class. And I don’t even have a uniform… He reviewed the child’s list: fireman, doctor, policeman. How can you compete with them? They surely have stories to tell, spectacular situations, each one more impressive than the last. Plus the uniform… The child’s voice broke the daydream:

“And what do they call you?”

Superman. Or Wonderwoman, respectively. No, he didn’t say this aloud. Instead, he asked:

“When did you say this meeting of yours with the parents is?”

“Anytime next week, provided we let the teacher know in time. That’s what she said.”

That’s what the teacher said, Gogu mocked the child to himself. There goes my chance to escape. What excuse can you find for an entire week?! Yes, this was serious.

“Let me think about it and I’ll tell you when I can come so that you can let your teacher know. And I will also think about what I’m going to say, so as not to embarrass you. Ok?”

The child’s eyes glittered; he smiled with satisfaction, kissed his dad and, clearly relieved, said over his shoulder while exiting the room:

“Yes, dad, very well. I’m counting on you.”

Firefighter, doctor, policeman. For the first time, Gogu thought he had a dull job. No, I don’t, he came back to his senses. ‘Cause I like what I do. It’s just that I don’t have a uniform, he added with a bit of sorrow. But it is not the coat that makes the man. And then, I really am a kind of superman sometimes… He found himself smiling at the thought of some tights and a red cape hung over his shoulders, but he banished the hilarious image to concentrate on the speech.

***

Gogu was sweating and his hands were cold. He, who used to deliver presentations for the management or for the client without blinking, was fearful at the thought of talking to some kids, his son’s classmates. He realized he wanted to make an impression on them more than on any potential client.

All eyes were fastened on Gogu. He looked about for his son, saw him and smiled at him. Then, he focused on him and began:

“My son told me that until now you have heard about the jobs of a mechanic, a firefighter, a doctor and a policeman. Is that true?”

“And the pharmacist!” added a little girl from the back of the classroom.

Another one with a uniform, Gogu couldn’t help thinking; you couldn’t find an engineer… But he knew exactly what he was going to tell the kids, so he no longer hesitated and went on, taking out of his bag a thick rubbery raincoat, quite shabby, which smelled of salt water:

“I am a project manager.” He put the raincoat on, made a theatrical pause, laughed to himself and continued: “And I’ve got the raincoat of a sea captain. Because this is our function, that of a captain, except on land. We coordinate the team that brings the ship safely to its destination. From the moment we take charge of the ship, we decide whether we are going to use the engine or the sails and we coordinate the team to keep the compass course, to schedule the trip, to use the navigation tools, to hoist or to lower the sails. Everyone in the team knows their role, but the captain is the one who helps them synchronize, who interprets the weather forecast, monitors the position on GPS, decides if they anchor for the night or continue on their way. In nice weather or under a storm, my job is to keep the team safe and to make sure their work is not affected by external factors.”

A little fair-haired girl with cornrows, sitting in the first row, suddenly put her hand up, but didn’t wait for Gogu’s approval:

“Sir, but who is in your team?”

“Anyone can be in my team: mechanics, firemen, doctors, policemen… even pharmacists.”

“And the raincoat, what do you need it for?”

Gogu and the Ship

Simona Bonghez, [email protected]

Speaker, trainer and consultant in project management,

Owner of Colors in Projects
