Review of JavaScript Promises Essentials

Promises are an old pattern in concurrent programming. In the computer science literature, promises date back to papers by Friedman and Wise from the mid-1970s. Many programming languages have promises, and futures (as in Java) are a similar idea. A promise is a variable or an object whose value is initially unknown and will be the result of another task. Modern JavaScript development is highly asynchronous by design: the UI (in the web browser) is updated by calls to the backend using HTTP requests.

JavaScript (ECMAScript, to be precise) will soon have promises built into the core of the runtime environment, and promises already exist in many libraries and frameworks. If you haven't worked with promises in a JavaScript context before, now is a good time to begin. In order to ease the use of promises in JavaScript development, the Promises/A+ standard has emerged.
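To make the idea concrete, here is a minimal sketch in the style of the standardized Promise API: a value that is initially unknown and is fulfilled (or rejected) later. The `fetchAnswer` name and the simulated delay are my own illustration, not from the book.

```javascript
// A promise wraps an asynchronous task; .then registers a callback
// for the eventual value, .catch handles an eventual failure.
function fetchAnswer() {
  return new Promise(function (resolve, reject) {
    // Simulate an asynchronous backend call with a short delay.
    setTimeout(function () {
      resolve(42);
    }, 10);
  });
}

fetchAnswer()
  .then(function (value) {
    console.log("The answer is " + value); // prints "The answer is 42"
  })
  .catch(function (err) {
    console.error("Request failed:", err);
  });
```

In a real web application, the resolve call would typically come from the completion handler of an HTTP request rather than a timer.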

The book is short and divided into six chapters. The first chapter is an introduction to the state of front-end JavaScript development in general and the asynchronous model in particular. The chapter sets the scene for the rest of the book and discusses how promises fit well into JavaScript.

The second chapter is written in a very dense format using many bullet lists. You are probably going to read the chapter more than once to get all the information. The good news is that it contains a lot of information. Chapters 2-4 are a presentation of how to use promises, including error handling.

Chapter 5 is devoted to WinJS - the open source library for developing Windows 8 and Windows Mobile applications in JavaScript. Personally, I don't develop for any of the Microsoft stacks, but the chapter is still useful as it shows how a widely used library applies the promise concepts.

The final chapter explains how to implement your own promise library. I guess that there exist many in-house JavaScript libraries. If you have such a library, you will find this chapter useful.
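As a taste of what implementing your own promise library involves, here is my own deliberately tiny (and not Promises/A+-conformant) sketch of the core mechanism: callbacks are stored until a value arrives. The `Deferred` name is my invention, not the book's.

```javascript
// Core idea of a promise library: queue callbacks until resolved,
// then call them; callbacks added after resolution fire immediately.
function Deferred() {
  var callbacks = [];
  var value;
  var resolved = false;
  return {
    resolve: function (v) {
      if (resolved) return;        // a promise settles only once
      value = v;
      resolved = true;
      callbacks.forEach(function (cb) { cb(value); });
      callbacks = [];
    },
    then: function (cb) {
      if (resolved) cb(value);     // already settled: call immediately
      else callbacks.push(cb);     // otherwise queue the callback
    }
  };
}

var d = Deferred();
d.then(function (v) { console.log("got " + v); });
d.resolve("result");               // prints "got result"
```

A real implementation must also handle rejection, chaining of `then` calls, and asynchronous callback delivery, which is where most of the subtlety in the Promises/A+ specification lies.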

In general, the book is well-written. The usage of bullet lists in many places (and in chapter 2 in particular) can make the book hard to read. Still, any JavaScript developer should read the book and get ready for the great promise of promises. You can find more information about the book here.


Happy birthday, Grace

Yesterday, Grace Hopper would have turned 107 years old. She was one of the first computer scientists. She worked on compilers, and she is probably best known for her role in the development of COBOL. COBOL is one of the first high-level programming languages (FORTRAN is the other one). Back in the 1950s, computers were used as calculators, but COBOL and Grace Hopper showed us that computers can be used for much more than simple calculations. Today, COBOL is still widely used.

As a tribute to Grace Hopper, I have played a bit with the OpenCOBOL compiler. OpenCOBOL can translate COBOL source code to C and use a C compiler to generate a real executable program. The compiler is licensed under GNU General Public License (version 3), and the runtime is licensed under GNU Lesser General Public License (version 3). That is, OpenCOBOL is free software!

Ubuntu Linux and Linux Mint have packages for OpenCOBOL. Probably other Linux distributions and FreeBSD have packages as well, but I haven't checked. The installation (using Linux Mint) is simple:

sudo apt-get install open-cobol

Until now, I had never tried to write software in COBOL. The closest I ever came was working on converting between 4th generation languages (4GL) a few years ago; 4GL programs are often converted to COBOL. But now I have written a short program in COBOL:

      * Greeting to Grace Hopper
       IDENTIFICATION DIVISION.
       PROGRAM-ID.    greeting.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01 WS-TODAY PIC X(21).
       01 WS-YEAR  PIC 9(4).
       PROCEDURE DIVISION.
      * FUNCTION CURRENT-DATE returns a 21-character timestamp;
      * the first four characters are the year.
           MOVE FUNCTION CURRENT-DATE TO WS-TODAY.
           MOVE WS-TODAY (1:4) TO WS-YEAR.
      * Grace Hopper was born in 1906.
           SUBTRACT 1906 FROM WS-YEAR.
           DISPLAY "Happy birthday".
           DISPLAY WS-YEAR.
           STOP RUN.

Compiling the program using OpenCOBOL is easy:

cobc -x greeting.cob

and running is easy:

./greeting

I am not going to do a lot of COBOL programming in the future, but it seems possible to replace old legacy operating systems with a modern platform like Linux.


Review: Storm - real-time processing cookbook

Quinton Anderson has written a book on the analysis platform Storm and published it at Packt. I have worked a little with Hadoop in the last couple of years, and it is only natural to take a look at the other big data processing platform. Hadoop is a batch processing platform, and Storm is for real-time processing and analysis of data. That means the two projects are not direct competitors, and they might complement each other.

When reviewing a technical book for the general public, it is important not to review the technology but the book. You can easily write a crappy book on excellent technologies while the opposite is very difficult. This review should be read in this context.

The author starts out by explaining how to set up Storm. Storm seems to be quite a complex beast, but Mr. Anderson gets nicely through it. I would have preferred to get an introduction to some of the concepts before jumping into the many tasks. But that is my personal preference, and this is a cookbook and not a textbook for a university course.

The first couple of chapters are about processing real-time data. Twitter and log files are the canonical examples in this area, and the book utilizes these as well.

One chapter is on how to use C++ and Qt in your real-time data processing. If you think that Qt is only about graphical user interfaces, the book will show you that Qt is a lot more. The author uses Qt's non-GUI parts in his processing.

Another chapter is about machine learning. As part of the big data revolution, machine learning has become popular again. Machine learning is a topic that Mr. Anderson is passionate about, and he analyzes the problem in great detail before showing the recipe.

One of the major disadvantages of the book is that the assumptions about the reader are pretty demanding. The reader is assumed to know at least about:
  • Java development using Maven and Eclipse
  • Some functional programming
  • The Ubuntu or Debian (or any UNIX) command line
  • Web development (HTML, CSS, JavaScript, JSON)
  • Data modelling
  • A little about NoSQL
This problem does not really stem from the author but from Storm. But the author might have chosen other examples. The book is not a university textbook, but I would still have liked many more references to textbooks and papers.

There is a lot of source code in the book. You should really download the examples, as some of them are longer than a page. It would be great if Packt would do some kind of syntax highlighting. Most programmers find it easier to read syntax-highlighted code. In particular, electronic books (I have read the book as a PDF) can easily be colorized!

One of the things I really like about the book is that the author has taken time to craft understandable diagrams. A well-composed diagram is worth many words, and he often sums up the key points in a diagram. In general, the author writes in a rather dry, fact-based language. But on page 111, you find that the author cannot suppress his humor: "... for automating Clojure projects without setting your hair on fire."

I'm not in a position to say whether Storm is a crappy technology, but Quinton Anderson has done his job well by writing a good cookbook.

If you are serious about getting into data science and data processing, I wouldn't hesitate to recommend the book. You can find the book at http://www.packtpub.com/storm-realtime-processing-cookbook/book.


Review: Instant ExtJS Starter

Recently, Packt Publishing published a book by N. Bhava on ExtJS. It's a fairly short book, about 60 pages long. The idea is to get you started with ExtJS. If you don't know what ExtJS is, it is a framework for developing the front-end of web applications. Today, users expect web applications to behave much like desktop applications, and a typical web developer today is much more like the GUI programmer of just a few years ago. As a way to get started with ExtJS, the book is an excellent introduction.

By reading the book, it quickly becomes obvious that today's web developers must have a great deal of knowledge of JavaScript, object-oriented programming, HTML, and CSS. JavaScript is a class-less object-oriented programming language, but ExtJS is based on the notion of classes, and they are somehow emulated in plain JavaScript. The author does not go into details - probably due to the limited length of the book. A book of less than 60 pages can do no more than scratch the surface, no matter how great the author is. Indeed, the book is quite well written, and the flow of the book is fine. But I must be honest: I had never tried ExtJS before, and the book is too short for my taste.

I do like the cookbook-like steps for how to install ExtJS. And I like the usage of the browsers' debuggers to inspect code and the DOM. The proof-reading of the book could be better. On page 9, the different editions of ExtJS are mentioned twice, and the example on page 35 has an extra </tpl> tag.

If you have used another web framework (jQuery UI, YUI, etc.), you will find the book useful. It will give you a clear idea of what ExtJS is all about in a short time. The author emphasizes early that the strength of ExtJS is its components. The major components (layout, containers, data, templates, and forms) are discussed. In particular, the layout and data components are nicely explained. You can find the book here.
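As an illustration of the kind of class emulation mentioned above, here is a hedged sketch of how class-like inheritance can be built on prototypes in class-less JavaScript. This is not ExtJS's actual implementation, and the `extend`, `Component`, and `Panel` names are mine.

```javascript
// Emulate "class Child extends Parent" with constructor functions
// and a prototype chain.
function extend(Parent, methods) {
  function Child() {
    Parent.apply(this, arguments);          // call the parent constructor
  }
  Child.prototype = Object.create(Parent.prototype);
  Child.prototype.constructor = Child;
  for (var name in methods) {
    Child.prototype[name] = methods[name];  // add/override methods
  }
  return Child;
}

function Component(id) { this.id = id; }
Component.prototype.render = function () { return "component " + this.id; };

var Panel = extend(Component, {
  render: function () { return "panel " + this.id; }
});

var p = new Panel("main");
console.log(p.render());             // "panel main" - overridden method
console.log(p instanceof Component); // true - inheritance chain intact
```

Class systems like the one in ExtJS layer more machinery (mixins, configuration objects, statics) on top of essentially this mechanism.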


Review: Learning JavaScriptMVC

I haven't been developing front-end code in JavaScript for some time. Well, to be more precise: for years. Most of my JavaScript code these days is unit tests for a node.js extension I'm maintaining at work. Recently, a book on JavaScriptMVC by Wojceich Bednanski was published by Packt. I was curious to see how contemporary web applications are written, so I picked up the book. It is a short book - only 124 pages.

In the preface, the author sets the requirements for the reader, but I think he underestimates what it takes to read the book. In my opinion, the reader should be an experienced software developer with a sound knowledge of JavaScript, jQuery, and HTML. If the reader hasn't read "the good parts", I would recommend her to do so before reading this book. Furthermore, the reader should not be afraid of the Linux command line. Today, most software developers are trained in object-oriented modeling and design, and the author assumes as much.

Throughout most of the book, the reader sees how to develop a simple TODO manager. As TODO managers generally work with dates, the book has a number of amusing dates. On page 21, one sees a task due 1st December 1012.

The approach of the book is to begin with the basic elements and gradually move to more advanced components. If the book had been much longer, I believe that many readers would be lost by this approach. To be fair, a complete example is introduced in the beginning of the book, but as the explanations only come in later chapters, the reader is left a bit frustrated.

Chapters 2 and 3 are about topics which are invisible to the user of an application: documentation and testing. I agree with the author that these topics are important, but I would have preferred them later in the book. Chapter 4 is about how to organize an application. It seems strange that a large portion of the example code is commented out on page 51. Moreover, some of the plugins are discussed so briefly that the reader has no clue whether they are useful or when to use them.
One of the toughest parts of a JavaScript application is to load the required libraries - and in the right order. In my dark past as a front-end developer, I wrote a small library for loading libraries. Actually, it was just an excuse to write a graph class in JavaScript. Chapter 5 shows how to load dependent libraries using JavaScriptMVC. Unfortunately, the author does not explain in detail how it works, and the examples are so simple that they have limited value.

I like that the author introduces a complex framework like JavaScriptMVC in a very short book. But I was at times a little confused: is the author only showing me the simple solution, or best practices? If the publisher had asked for more pages, the author might have had a chance to go deeper into the subject. You can find the book here.
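The graph idea behind ordered library loading can be sketched as a topological sort: model "A depends on B" as edges and emit dependencies before their dependents. This is my own illustration of the general technique, not JavaScriptMVC's loader; the `loadOrder` name and the example libraries are hypothetical.

```javascript
// Given a map of { library: [libraries it depends on] }, return the
// libraries in an order where every dependency precedes its dependents.
function loadOrder(deps) {
  var order = [];
  var visiting = {};  // nodes on the current DFS path (cycle detection)
  var done = {};      // nodes already emitted
  function visit(lib) {
    if (done[lib]) return;
    if (visiting[lib]) throw new Error("circular dependency at " + lib);
    visiting[lib] = true;
    (deps[lib] || []).forEach(visit);  // emit dependencies first
    visiting[lib] = false;
    done[lib] = true;
    order.push(lib);
  }
  Object.keys(deps).forEach(visit);
  return order;
}

console.log(loadOrder({
  app:      ["jquery", "jquerymx"],
  jquerymx: ["jquery"],
  jquery:   []
})); // jquery before jquerymx before app
```

A real loader additionally has to fetch each script asynchronously and wait for it to execute before starting its dependents, but the ordering problem is exactly this graph traversal.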


Map/Reduce and GNU Parallel

This week I attended a meeting organized by DKUUG. The topic was GNU Parallel, and the speaker was Ole Tange - the developer behind GNU Parallel.

To be honest, I have not used GNU Parallel before. Of course, I have heard about it as Ole always talks about the program when I meet him. His introduction to the program was great - enjoy it when DKUUG releases the video.

Simply put, GNU Parallel is able to run tasks in parallel, either locally or remotely. In other words, GNU Parallel can help you transform your command line into a parallel computational engine.

Lately, I have been studying Apache Hadoop. Currently, Hadoop is probably the most popular implementation of the programming paradigm Map/Reduce. GNU Parallel offers a way of specifying the Map component. It is activated by using the --pipe option. On my way home I was thinking about how to implement a simple Map/Reduce-based analysis using GNU Parallel.

I have used the On Time Performance data set more than once. It is a good data set, as it is highly regular and it is large (500,000-600,000 rows every month). The data set records every flight within the USA, and you can find information about destination airport, delays, and 120 other data points. Six months of data results in a 1.3 GB comma-separated values (CSV) file.

A simple analysis of the data set is to generate a table of the number of times an airport is used by a flight. The three-letter airport codes are unique, e.g., LAX is Los Angeles International Airport. It is possible to do the analysis in parallel by breaking the data file into smaller parts. This is the map task. Each map task will produce a table, and the reduce task will combine the outputs of the map tasks into the final table.

In order to use GNU Parallel as a driver for Map/Reduce, I have implemented the mapper and reducer in Perl. The mapper is:

#!/usr/bin/perl -w

use strict;

my %dests;
while (<>) {
    my @data = split /,/;
    my $airport = $data[14];
    $dests{$airport} = 0 if (not exists $dests{$airport});
    $dests{$airport}++;
}

foreach my $airport (keys %dests) {
    print "$airport $dests{$airport}\n";
}

The reducer is also simple:

#!/usr/bin/perl -w

use strict;

my %dests;
while (<>) {
    chomp;
    my ($airport, $count) = split / /;
    $dests{$airport} = 0 if (not exists $dests{$airport});
    $dests{$airport} += $count;
}

my $total = 0;
foreach my $airport (sort keys %dests) {
    $total += $dests{$airport};
    print "$airport $dests{$airport}\n";
}
print "Total: $total\n";

It is possible to run the Map/Reduce analysis by the command-line:

cat On_Time_Performance_1H2012.csv | parallel --pipe --blocksize 64M ./map.pl | ./reduce.pl

The input file is broken down into 64 MB chunks. GNU Parallel is line-oriented, so a chunk will not be exactly 64 MB, but close. My laptop has four cores, and they are fully utilized.

It seems to me that GNU Parallel offers a simple approach to Map/Reduce for people living much of their life on a command-line.


GotoCon 2012 in Århus

I attended GotoCon 2012 in Århus last week. To be more precise, I attended the Big Data track on Monday. My educational background is somewhat related to supercomputing (yeah, I did my share of Fortran-77 programming as a graduate student), and I have over the years worked on various "Big Data" projects: bioinformatics, credit card fraud detection, building Linux clusters, etc. Currently, I work for a small NoSQL database vendor, and our solution could fit the Big Data space quite nicely. Given all this, going to Århus and spending a day listening to the open source Big Data vendors sounded like a treat.

Neo4j - Jim Webber

The first speaker was Jim Webber from Neo4j. Neo4j is a graph database, and I like graph theory a lot. Jim is not just a speaker - he is an entertainer. He began his talk by looking back on database history. As you might be aware, relational databases were developed in the 1970s, and through the 1980s, Oracle became The Database. In the last decade, a number of companies have gone "google scale"; that is, they have vast amounts of data. Handling big data requires different data models than relational databases.
Jim pointed out that the complexity of an application is a function of data size, connectedness, and uniformity. Of course, his graph database can help you if your data is highly connected. He demonstrated a number of data sets and how to query them using Neo4j.

Couchbase - Chris Andersen

Chris noted that the database industry is changing rapidly these years. Companies are required to shrink and grow databases on demand, in particular in the mobile app world, where an app can go viral overnight. As an example, he mentioned Instagram, which gained about one million new users in one day when they released their Android app.
He boils down the requirements to:
  • grow and shrink databases on demand
  • easy cluster node deployment
  • multiple masters in the cluster (for availability)
  • multiple datacenters (for availability)
  • auto sharding
The old-school approach to scalability of the data layer is sharded MySQL instances and memcached. But this approach typically requires some high-availability handling in the application code. With Couchbase (and probably other NoSQL databases), applications are free of this complexity.

Couchbase has conducted a survey among NoSQL users, and the flexible schemas are reported as the most important feature.

Cassandra - Matthew Dennis

Matthew began his talk by discussing the relevance of Big Data. According to studies, companies using Big Data products and techniques seem to outperform companies not using Big Data. Cassandra offers a master-free cluster database, and it scales well. When data sets are larger than physical memory, it slows down gracefully. A personal note: data sets smaller than physical memory are not big data.
According to Matthew, Solid State Disks (SSDs) are pushing the limit as they lower latency. Cassandra is SSD-aware: it does not overwrite data but only appends data. Files (probably, their blocks) are reclaimed by the database when they are no longer used. Latency is important for Matthew, and he argues that linear scalability means that latency is constant when:
  • increasing throughput and increasing number of nodes
  • increasing load and increasing nodes
The nodes in a Cassandra cluster are equal, and queries are proxied from node to node. This implies that Cassandra is eventually consistent.

MongoDB - Alvin Richards

The last talk in the track was about MongoDB. Alvin talked about scaling up without spending (too much) money, and that is where the NoSQL revolution fits in. In the last 40 years, we have seen dramatic growth in computing power; for example, memory has increased from 1 KB to 4 GB, and disk space has gone from 100 MB to 3 TB.
For MongoDB, availability is more important than consistency. Together with the CAP theorem, this has consequences for the architectural choices in MongoDB. The asynchronous replication within a MongoDB cluster implies that MongoDB is eventually consistent.
Storing and analyzing time series is an important use case for NoSQL.

Panel discussion

The organizers had arranged a panel discussion with the major NoSQL vendors - and Martin Fowler. Martin has just published a book about NoSQL. I have read it - and I will return with a review in the near future.
Martin noted that NoSQL is still immature. But the good news is that NoSQL brings us past "one model fits all". It is worth noticing that most NoSQL databases are released under an open source license - and backed by companies.
The panellists agreed that the CAP theorem and the application should drive which database to choose. The major issue here is that the majority of software developers only read one computer science book per year.