March 26, 2017

Selenium + Java: How to get started

Why is Selenium Webdriver with Java a great choice for web service integration testing?

Selenium WebDriver is an amazing tool that lets you drive the browsers installed on your system from code. It allows you to create test scenarios like “open this page – check if an element is visible – click on this button – check what happened by examining the DOM”. This style of testing is commonly called automated end-to-end (or UI) testing. In other words – you write a separate test program which replaces the manual tests you would have performed during the development process.

Selenium is a tool that controls a web browser. To write test scenarios you can use a programming language like JavaScript or Java. A full list of supported languages can be found on this page.

Why Java? Java is one of the most popular programming languages. The community has developed many libraries for interacting with databases, web servers, file servers, cloud services and other resources you may need to access from your test application. This plethora of libraries will allow you to create powerful test scenarios.
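To make the “open a page – check an element – click – verify” flow concrete, here is a minimal sketch of such a scenario in Java. It assumes the Selenium Java bindings and ChromeDriver are installed; the URL and element ID are made up for illustration:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class ExampleTest {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        try {
            // open this page (hypothetical URL)
            driver.get("https://example.com/login");

            // check if an element is visible (hypothetical element id)
            WebElement loginButton = driver.findElement(By.id("login-button"));
            System.out.println("Button visible: " + loginButton.isDisplayed());

            // click on this button
            loginButton.click();

            // check what happened by examining the DOM
            System.out.println("Title after click: " + driver.getTitle());
        } finally {
            driver.quit();
        }
    }
}
```

In a real project you would wrap these steps in JUnit or TestNG assertions instead of printing to the console.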

December 22, 2014

How to Log/Drop TCP packets on the Linux kernel level

Most companies requiring sophisticated manipulation of network traffic rely on firewall devices capable of layer 7 protocol analysis. However, with the ease of deploying web applications into the cloud, you may find yourself managing a web server that is not sitting behind a firewall and is exposed for hackers to play with. Most modern web servers let you control access via configuration (e.g. .htaccess for Apache), but if you want to prevent certain packets from ever reaching your web service in the first place, iptables is the tool for the job.

I advise caution when using this technique, and you’ll need to use sudo or run as root to try what I describe below.

If you want to see what rules are currently in place, you can list them with the -L parameter (add -v for verbose output):

iptables -vL

Suppose you want to drop all incoming packets on port 80 containing the string ‘sex’, here is what you can do:

iptables -A INPUT -p tcp --dport 80 -m string --algo=bm --string "sex" -j DROP

You can also log matching packets instead by using -j LOG.
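For example, to log such packets to the kernel log before (or instead of) dropping them, you could add a LOG rule; the log prefix here is just an illustrative choice:

```shell
# Log matching packets to the kernel log (view with dmesg or journalctl -k)
iptables -A INPUT -p tcp --dport 80 -m string --algo bm --string "sex" -j LOG --log-prefix "HTTP-FILTER: "
```

Note that LOG is a non-terminating target, so matching packets continue to the next rule in the chain.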

If you decide that you no longer want a certain rule applied, remove it with the -D option:

iptables -D INPUT x # where x is the rule number
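To find the rule number x, you can first list the chain with rule numbers:

```shell
# Show INPUT chain rules with their positional numbers
iptables -L INPUT --line-numbers
```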

Finally, if you want your changes to persist across reboots, you need to save them. On RHEL/CentOS-style systems:

service iptables save

This is just a brief overview. I recently used this technique to block a WordPress comments spam bot (it identified itself in request headers which was very helpful).

November 14, 2014

Exporting the NetSuite tax tables in a magento-friendly format

Did you know you can export the NetSuite tax tables in Magento format? NO? READ ON!

In order to accomplish this task, you need to create a server-side script in NetSuite and ‘deploy’ it to its own url. The server-side NetSuite script will have access to Netsuite’s back-end, thus enabling you to access the tax information.

But first, a note on server-side JavaScript: if you have always relied heavily on console.log() or (when feeling dramatic) alert(), you’ll have to re-adjust. Those arrows ain’t in your quiver in server-side land. In this particular example we are generating HTML, so testing and seeing errors isn’t hard. But if you are dealing with other SuiteScript scripts, ones that don’t generate page content, I recommend you keep an eye on nlapiLogExecution. It is extremely handy. You can also use the SuiteScript Debugger.

Netsuite Configuration

Before you start on the code, which is the main part of this blog post, you need to set up a new customized script and deploy it. I’m assuming that most people reading this know how to do that, but just in case you don’t, I’ll short-hand it:

Create a new server-side script:

Start by saving a .js file (call it “tax_export.js”) somewhere on your hard drive.

Log into NetSuite, then go to: customization->scripting->scripts. Click “New Script”, under “Select Type”, click ‘suitelet’.

Once there, name the script “tax_export”, click the dropdown arrow in the ‘script file’ input window and click “new” (this can also be accomplished by clicking the plus sign that appears when hovering over the “script file” input window). A pop up will appear. Click ‘Choose File” from the “Select File” input panel and navigate to the .js file you made earlier (we called it “tax_export.js”).

Once done, click the “save” button on the pop up panel. The pop up should go away.

Back in the main window, type the word “suitelet” in the function input window. The value of this input window needs to match the function name on the .js you uploaded.  Make sure that if you change the function name in tax_export.js (which we haven’t added yet), you also change the function name in this input text field.

Deploy the NetSuite script:

Just in case you’re somewhere else in NetSuite, follow this chain again: customization->scripting->scripts. You should see your script in the list on the page. If you don’t, expand the “filters” view by clicking the plus sign next to the word “FILTERS” and select “suitelet” from the “type” dropdown. It should be there now.

Click “view” next to your script (which should be called “tax_export”), and then, when in the script window, click “Deploy Script”. Welcome to the “Script Deployment” page!

A few more settings to cover. Click “deployed” and “available without login”.  You’re going to have to make a judgment call with some of the other options. Put your cursor over “Log Level” and check out the options to see which is right for you. Typically “Error” is the right level, but you may want to set “Debug” while developing to see your debug output. Do the same with “Execute As Role” (I used “Administrator”), and in the “Audience” window use your judgment. I clicked “Select All” next to “Roles”.

Then click “Save”. We are really close to actual coding, thanks for reading on!

Now, to find the location of the deployed script:

Once again, go: customization->scripting->scripts. Click “Deployments” next to the name of your script (“tax_export”), and then click “view” next to the deployment.

And there, under “External URL” is the product of all the work so far. That is going to be the address of your tax export. So copy/paste it into a new browser window, bookmark it, make love to it, slap a ring on it and tell it you’ll never leave it, because you have worked for it!

Coding time!

Well, almost. I just want to show you where you’re going to paste the script once done: Yet again: go: customization->scripting->scripts. Click ‘view’ by your script (“tax_export”), and then click “Edit” next to the Script File name (“tax_export.js”). This is where you’ll cut and paste your code when done.

Okay, ready for some coding??!! Excellent — open up tax_export.js on your computer. Add the following code:


function suitelet(request, response){
    if (request.getMethod() == 'GET') {
        //this is where we'll put our code
    }
}

This is the NetSuite (suitelet) framework for the code we’re going to write. The parameters are straightforward: “request” carries the incoming request (its method – GET or POST – and any parameters), and “response” is what you write the page output to.

Now let’s get to the code. First a simplified version:


function suitelet(request, response) {
    if (request.getMethod() == 'GET') {
        var searchresults = nlapiSearchRecord('taxgroup', null, [], []);

        if (searchresults != null && searchresults != '') {
            response.write('<table><tbody>');
            response.write('<tr>' +
                '<th>county</th>' +
                '<th>US</th>' +
                '<th>CA</th>' +
                '<th>zip code(s)</th>' +
                '<th>tax rates</th>' +
                '<td>default</td>' +
                '</tr>');

            //loop through search results
            for (var key in searchresults) {
                if (searchresults[key]) {
                    //pull the individual record from the search results:
                    var order = nlapiLoadRecord('taxgroup', searchresults[key].getId());
                    var zipString = order.getFieldValue('zip');
                    var rateString = order.getFieldValue('rate');
                    var countyString = order.getFieldValue('county');
                    if (countyString) countyString = countyString.replace(/\s/g, '_');
                    response.write('<tr>' +
                        '<td>US-CA-' + countyString + '</td>' +
                        '<td>US</td>' +
                        '<td>CA</td>' +
                        '<td>' + zipString + '</td>' +
                        '<td>' + rateString + '</td>' +
                        '<td>default</td>' +
                        '</tr>');
                }
            }
            response.write('</tbody></table>');
        } else {
            response.write('no search results exist.');
        }
    } else {
        response.write('Error: This page can only be accessed via the GET method.');
    }
}

Our first new line of code invokes the ‘nlapiSearchRecord’ method of SuiteScript. This method takes a record type as its first parameter; tax tables are stored in records of type ‘taxgroup’. At the end of this post there are links to the SuiteScript reference docs if you want to explore the language further. For now, that method returns an object (which we’re cleverly calling “searchresults”) that contains the search results: in this case, a collection of all the tax group records.

We then iterate through the search results object and pull out the desired data from each of the records. For each record, we verify it’s not null (perhaps redundantly) and assign the fields of interest to local variables. Our goal here is to replicate the format of the Magento tax tables, so this should get us the data we need.

The remainder of the code constructs the HTML for a table that renders the tax data.

If you load the URL you copied earlier, you should now see some data. If the page fails to load, you may choose to break out of the loop after 50 records.

Magento Format

There’s more because we really haven’t accomplished our goal yet. The code above gives us the data that magento needs, but it isn’t in the magento format.

The magento format allows for grouping consecutive zip codes with the same tax rate into a range (e.g. 92121-92127 will represent all zip codes in that range), allowing you to store fewer unique entries for a given tax amount.
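That grouping logic is the heart of the script below. As a standalone illustration (not the exact code used in the suitelet), the idea can be sketched like this:

```javascript
// Collapse runs of consecutive zip codes into "from-to" ranges.
function groupZips(zips) {
    var result = [];
    if (zips.length === 0) return result;
    var start = zips[0];
    var prev = zips[0];
    for (var i = 1; i <= zips.length; i++) {
        if (Number(zips[i]) === Number(prev) + 1) {
            // still consecutive: extend the current run
            prev = zips[i];
        } else {
            // run ended: emit either a single zip or a range
            result.push(start === prev ? String(start) : start + '-' + prev);
            start = prev = zips[i];
        }
    }
    return result;
}

console.log(groupZips(['92121', '92122', '92123', '92126', '92127']));
// prints [ '92121-92123', '92126-92127' ]
```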

I am assuming you know JavaScript, so I won’t explain the code below in great detail.

One other thing about the code below:

There are A LOT of records, so I split them up into pages of 50 records each. Feel free to change the ‘increment’ variable in the code below in order to alter this.

function suitelet(request, response) {
    if (request.getMethod() == 'GET') {
        response.write('<style>table#headTable{width:600px;text-align:center;margin:10px auto;}a{text-decoration:none;}button{width:200px;margin:5px;height:50px;text-decoration:none;}table#data{border-collapse:collapse;width:1000px;margin:5px auto;}td,th{word-break:break-all;padding:5px;border:1px solid #000000;}</style>');
        var increment = 50;

        //build the current url string
        var currentURL = request.getURL() + '?';
        var params = request.getAllParameters();
        for (var key in params) {
            if (key != 'startRecord') currentURL += key + '=' + params[key] + '&';
        }
        currentURL = currentURL.substring(0, currentURL.length - 1);
        var startRecord = (request.getParameter('startRecord')) ? Number(request.getParameter('startRecord')) : 0;
        var endNumber = Number(startRecord + increment);
        var previousNumber = Number(startRecord - increment);
        var endRecord = (request.getParameter('startRecord')) ? endNumber : increment;
        var searchresults = nlapiSearchRecord('taxgroup', null, [], []);
        endNumber = (endNumber > searchresults.length) ? searchresults.length : endNumber;

        response.write('<table id="headTable"><tbody>' +
            '<tr><th>showing records ' + startRecord + ' to ' + endNumber + ':</th></tr>' +
            '<tr><td>');
        if (startRecord > 0) response.write('<a href="' + currentURL + '&startRecord=' + previousNumber + '"><button>previous page</button></a>');
        if (endRecord < searchresults.length) response.write('<a href="' + currentURL + '&startRecord=' + endRecord + '"><button>next page</button></a>');

        response.write('</td></tr></tbody></table>');
        response.write('<table id="data" border="0"><tbody>' +
            '<tr>' +
            '<th>Code</th>' +
            '<th>Country</th>' +
            '<th>State</th>' +
            '<th>Zip/Post Code</th>' +
            '<th>Rate</th>' +
            '<th>Zip/Post is Range</th>' +
            '<th>Range From</th>' +
            '<th>Range To</th>' +
            '<th>default</th>' +
            '</tr>');

        if (searchresults != null && searchresults != '') {
            //loop through search results
            for (var key = startRecord; key < endRecord; key++) {

                if (searchresults[key]) {
                    //pull the individual records from the search results:
                    var order = nlapiLoadRecord('taxgroup', searchresults[key].getId());

                    var zipString = order.getFieldValue('zip');
                    var rateString = order.getFieldValue('rate');

                    var countyString = order.getFieldValue('county');
                    if (countyString) countyString = countyString.replace(/\s/g, '_');

                    if (zipString && zipString.indexOf(',') !== -1) {
                        //if so explode it into array
                        var zipArray = zipString.split(',');
                        var initialValue;
                        var rangeFrom;
                        var rangeTo;
                        var counter = 0;
                        var newZipString = '';
                        //loop through array and see if each one is the previous one plus one
                        for (var i = 0; i <= zipArray.length; i++) {
                            if (zipArray[i] != undefined) {
                                if (!initialValue) {
                                    newZipString = '';
                                    initialValue = zipArray[i];
                                    newZipString += initialValue;
                                    rangeFrom = initialValue;
                                    counter = 0;
                                }

                                var j = i + 1;
                                var plusOne = Number(zipArray[i]) + 1;
                                if (plusOne == Number(zipArray[j])) {

                                    var dash = (newZipString.charAt(newZipString.length - 1) != '-') ? '-' : '';
                                    newZipString += dash;
                                    counter++;

                                }
                                else {

                                    var comma = (newZipString.charAt(newZipString.length - 1) != '-') ? ' , ' : '';
                                    if (newZipString.indexOf(zipArray[i]) == -1) {

                                        newZipString += comma + zipArray[i];
                                        rangeTo = zipArray[i];
                                    }
                                    var isRange = (counter > 0) ? '1' : '';
                                    if (isRange == '') {
                                        rangeFrom = '';
                                        rangeTo = '';
                                    }
                                    response.write('<tr>' +
                                        '<td>US-CA-' + countyString + '</td>' +
                                        '<td>US</td>' +
                                        '<td>CA</td>' +
                                        '<td>' + newZipString + '</td>' +
                                        '<td>' + rateString + '</td>' +
                                        '<td>' + isRange + '</td>' +
                                        '<td>' + rangeFrom + '</td>' +
                                        '<td>' + rangeTo + '</td>' +
                                        '<td>default</td>' +
                                        '</tr>');

                                    initialValue = null;
                                }
                            }
                        }
                    } else if (zipString) {
                        //single zip code with no comma: emit one non-range row
                        response.write('<tr><td>US-CA-' + countyString + '</td><td>US</td><td>CA</td><td>' + zipString + '</td><td>' + rateString + '</td><td></td><td></td><td></td><td>default</td></tr>');
                    }
                }
            }
        } else {
            response.write('no search results exist.');
        }
    } else {
        response.write('Error: This page cannot be accessed directly.');
    }
}

And there you have it! Some cool code that exports the NetSuite tax tables, puts them into Magento format and only gives you 50 records per page! With a little more work, you could build a Magento module that pulls this data periodically and updates Magento’s tables, but that’s a topic for another day.

Further study

As promised, here are some SuiteScript links:

June 27, 2014

Cleaning up after disabling Azure SQL Data Sync

If you are a SQL Server enthusiast like me, you were probably excited when Microsoft introduced Azure SQL Data Sync. It allows you to set up synchronization between two databases with a simple point & click interface.

What I liked about it:

  • You can synchronize in multiple directions and your databases are available for use while synchronization is running (Options available: Bi-directional, Sync to the Hub or Sync from the Hub)
  • No need to change your model to accommodate any special requirements – it just works with SQL Azure compatible databases
  • For small databases you can be up and running within minutes

What I didn’t like:

  • Synchronization is tightly bound to the schema at the time you created your SQL Data Sync Group’s Sync Rules (I know, a mouthful, right?). Anytime you make schema changes you have to update your synchronization configuration. Look under “Sync Rules” for this. Don’t forget to Refresh your Schema every time you use this tool as it seems to cache your schema. Note that this will only replicate the schema for your tables. You have to manually synchronize other objects (views, stored procedures, functions, etc).
  • Synchronization is performed by an agent that runs periodically. The lowest frequency available is 5 minutes. Feature request: Allow for a continuous option to sync changes as they happen.
  • SQL Data Sync adds quite a few objects to your database (the purpose of this article). A new schema will be added to your database – DataSync – along with supporting objects. Based on my experience, at least 11 stored procedures, 3 triggers and 1 type are created for every table in your database.
  • You can only synchronize within the same Azure account. I had a use case that necessitated doing this across two accounts (one in China and one in the US) and was not able to use this technology.

OK, so you tried it out and decided it’s not for you. Disabling it is simple – delete the SQL Data Sync Group and you’re done, right?

Almost… Say you didn’t do that, but simply dropped one of the databases, or for some reason the group deletion failed to fully clean up your database. In either case, you’ll be left with quite a few objects in your database. In my testing (as of 6-27-2014), when you delete the sync group you’re still left with 4 tables, and the schema “DataSync” isn’t dropped. This is probably a bug that will be fixed eventually, but in the meantime, below are some scripts to help you out.

This is how you can find all the objects created by DataSync:

-- Find all SQL Data Sync triggers
select name from sys.triggers where name like '%_dss_%_trigger';
-- Find all SQL Data Sync tables
select table_schema + '.' + table_name from information_schema.tables where table_schema='DataSync'
-- Find all SQL Data Sync procedures
select 'DataSync.' + name from sys.procedures where schema_id = SCHEMA_ID('DataSync')
-- Find all SQL Data Sync types
select name from sys.types where schema_id = SCHEMA_ID('DataSync')

Note that in the case of triggers, I am relying on the current naming convention. Be careful that this doesn’t match any of your triggers. If it does, exclude them by adding additional where criteria. The rest should be safe, assuming you haven’t created any of your own objects under the DataSync schema.
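For instance, if one of your own triggers happened to match the pattern, you could exclude it by name before generating the drop statements (the trigger name below is hypothetical):

```sql
-- Exclude your own triggers from the SQL Data Sync cleanup list
select name from sys.triggers
where name like '%_dss_%_trigger'
and name not in ('my_own_dss_audit_trigger'); -- hypothetical trigger to preserve
```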

This script will generate the drop statements you need:

-- Hit CTRL-T for "Results to Text"
GO
set nocount on
-- Generate drop scripts for all SQL Data Sync triggers
select 'drop trigger [' + name + ']' from sys.triggers where name like '%_dss_%_trigger';
-- Generate drop scripts for all SQL Data Sync tables
select 'drop table ' + table_schema + '.[' + table_name + ']' from information_schema.tables where table_schema='DataSync'
-- Generate drop scripts for all SQL Data Sync procedures
select 'drop procedure DataSync.[' + name + ']' from sys.procedures where schema_id = SCHEMA_ID('DataSync')
-- Generate drop scripts for all SQL Data Sync types
select 'drop type DataSync.[' + name + ']' from sys.types where schema_id = SCHEMA_ID('DataSync')
-- COPY output and run in a separate connection

Finally, drop the schema:

-- Once you run the drop scripts above, drop the schema
drop schema DataSync

You are done!

Here is the complete script: azure_sql_data_sync_clean.sql

October 3, 2012

View Android Application memory usage with Eclipse DDMS plugin

While the Android runtime (Dalvik) is garbage collected, it is important to be mindful of your
application’s memory usage since mobile devices are rather memory constrained. Using too much
memory can lead to excessive garbage collection that degrades performance. In extreme cases, you
may run into the dreaded OutOfMemoryError and an application crash.

There are two main memory analysis tools available in the Android SDK: the Allocation Tracker and
the ability to dump the heap. The Allocation Tracker is useful if you want to understand what kinds
of allocations are happening within a given period of time while heap dumps provide a much finer
grained view of exactly what is utilizing memory for your application.

The Allocation Tracker is a pretty useful tool for identifying cases where critical code paths may
benefit from moving allocations to a higher scope to reduce performance costs associated with
garbage collection.

To use the Allocation Tracker, start your application by selecting Run -> Debug As -> Android Application
in Eclipse. Then select your device. Once the application is running, open the DDMS perspective by
selecting Window -> Open Perspective -> DDMS.

Once the DDMS perspective is open, select the Allocation Tracker tab and click on the Start Tracking
button when you are ready to start. Once tracking is started, run through the feature(s) of the
application you wish to track and then click the Get Allocations button. This should result in a
list of all of the allocations that occurred since you clicked the Start Tracking button.

Clicking on any of the items in the list will present a callstack so you can pinpoint the exact line
of code that is responsible for the allocation.

Going through the list, you should consider any duplicate entries as cases that should be reviewed
to determine if there is a benefit to refactoring the code in order to prevent multiple allocations.

The DDMS also has a heap tab that is extremely useful in identifying cases where your application may
be keeping a reference to an object that is no longer needed. For lack of a better term, we’ll
call these memory leaks, though they are not memory leaks in the C and C++ sense.

Once you have selected the heap tab in the DDMS perspective, you will see a list of applications
per device that is attached. This should be located on the left hand side and is inside of a
navigation tab labeled ‘Devices’. Click on the application you wish to view and then click on the
‘update heap’ button in the devices tab. If you have trouble finding the ‘update heap’ button, it’s
the second icon from the left.

Heap output will be displayed after each garbage collection. You can force a garbage collection by
clicking on the ‘Cause GC’ button. The result should be a high level statistical summary of the heap
usage. At the very least, you should take note of the Allocated memory.

At this point, it is recommended to run through your application several times while periodically
forcing a garbage collection and updating the heap statistics. Keeping an eye on the allocated memory
in the statistical summary is a good idea.

If you do notice that the Allocated memory keeps increasing while running through the application,
you most likely have a case where references are being held to objects that are no longer needed. If
you do run into this problem, I would recommend using the Memory Analyzer Tool.

The Memory Analyzer Tool is a tool for analyzing heap dumps. A stand alone version of the tool can
be downloaded from eclipse.org.

To create a heap dump, click on the ‘Dump HPROF file’ icon which is in the devices tab just to the
right of the ‘Update Heap’ icon and then follow the prompt to save the file. Note that it may take
several seconds for the file prompt to pop up.

The resulting file is in a different format than what the Memory Analyzer Tool can read, so it will
need to be converted with the hprof-conv tool that can be found in your Android SDK tools folder.
Simply run hprof-conv with the first parameter being the existing heap dump file and the second
parameter the name you wish the converted heap dump file to have. On Windows it would be:

hprof-conv.exe myApp.hprof converted-myApp.hprof

Once the file is converted, start the Memory Analyzer Tool and open using the File / Open menu item.
This should result in a popup window that gives options for different types of reports. I suggest
starting with the ‘Leak Suspects Report’, which is selected by default. Clicking the finish button
will display the complete report for the heap.

Explaining all of the features of the Memory Analyzer Tool is beyond the scope of this post, but I
would recommend starting with the list of suspect areas that is given, followed by using the
Histogram feature.

Complete documentation for the Memory Analyzer Tool can be found at wiki.eclipse.org.

September 25, 2012

ASP.NET MVC with Ajax calls to controller actions

Recently I was troubleshooting an ASP.NET MVC project which relies on some Ajax calls to controller
actions. The project has a view in which there are a couple of drop down lists which trigger an
action result as soon as an item is selected in each list.

The view has a drop down list defined as follows:

@Html.DropDownList("searchTemplate",
new SelectList(Model.DefaultSearchTemplates, "SearchFilterTemplateId", "Name"), "Choose")

And a change event triggers the action result:

$("select#searchTemplate").change(function () {
    var selectedTemplate = $("select#searchTemplate").val();
    
    if (selectedTemplate) {
	$.ajax({
	    url: '@Url.Action("ApplySearchTemplate","Quote")',
	    data: { templateId: selectedTemplate }
	});
    }
});

The functionality of the action isn’t important for this post, but it essentially saves a user’s
last search criteria so that the next time they log in, they will see the same results.

The problem with the application was that the selection of a search template only worked once. And
on further inspection, it turned out that the controller action was not being called at all after
the first time.

This was an interesting problem because the change event was certainly firing every time an item
was selected in the list. My initial thought was that there must be some bug in jQuery’s Ajax
implementation, but it seemed fairly unlikely that such a bug would have gone unnoticed.

As expected, after digging through the jQuery source, the problem became apparent: user error. jQuery
allows Ajax GET requests to be cached by default, and even though these particular calls neither
return values nor expect results, the requests are still cached.

Since the requests were cached, each subsequent selection in the dropdown list resulted in the
cached response being reused – the request was “ignored” because the browser “already knew the result”.
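Under the hood, jQuery’s cache: false option works by appending a throwaway timestamp parameter to the URL, so the browser never sees the same request twice. A rough sketch of the idea (addCacheBuster is a made-up helper for illustration, not jQuery’s actual internal code):

```javascript
// Append a unique "_" query parameter so the browser treats each request as new.
// This mimics what jQuery does when cache: false is set.
function addCacheBuster(url) {
    var separator = url.indexOf('?') === -1 ? '?' : '&';
    return url + separator + '_=' + Date.now();
}

console.log(addCacheBuster('/Quote/ApplySearchTemplate?templateId=3'));
// e.g. /Quote/ApplySearchTemplate?templateId=3&_=1700000000000
console.log(addCacheBuster('/Quote/ApplySearchTemplate'));
// e.g. /Quote/ApplySearchTemplate?_=1700000000000
```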

Once I understood what was happening, it was just a matter of disabling ajax caching for the view.
There are two ways to disable the cache; the first is to disable it individually for each request:

$("select#searchTemplate").change(function () {
    var selectedTemplate = $("select#searchTemplate").val();
    
    if (selectedTemplate) {
	$.ajax({
	    url: '@Url.Action("ApplySearchTemplate","Quote")',
	    data: { templateId: selectedTemplate },
	    cache: false
	});
    }
});

The second option is to globally disable ajax caching using ajaxSetup:

$.ajaxSetup({ cache: false });

Since this particular view has multiple similar calls to controller actions, setting it globally was
the solution that was chosen.

September 10, 2012

MS SQL Server driver for Node.js with JSON response

We covered installing and running the MS SQL driver for Node.js in the last post. The demonstration
was a simple web page that listed statically queried results from a database. For this post, we’ll
demonstrate how to serve the results as JSON, which is a more practical usage of the MS SQL
driver.

In the previous example, we utilized the built-in http server to serve up the static web page, and
the query was run once when the server was launched. This time we’ll create a RESTful service and
query the database each time there is a request.

Restify is a Node.js module for creating REST web services. To install it, open a command prompt in
the folder that you wish to create the demo and type the following:

npm install restify

Once restify is installed, you can create a basic server and run it to verify that the installation
was successful. Create a text file named server.js and copy the next block of code.

var restify = require('restify');
var server = restify.createServer();

server.listen(8080, function () {
    console.log('listening at %s', server.url);
});

Run the server with:

node server.js

If Restify was installed correctly, you should see ‘listening at http://0.0.0.0:8080’ in the console.
You can hit the page from your browser at this time, but you will get an error result similar to the
following:

{"code":"ResourceNotFound","message":"/ not found"}

This is expected behavior and it simply means that a route for the requested page does not exist. To
define a route, you will need to set up the “GET” callback.

server.get('/people', respond);

The first parameter is the route and the second parameter is the function that provides the response.
In this example, we have defined a route so that requests can be made to the url of the server followed
by /people. For example, you should now be able run the server and browse to it with
http://localhost:8080/people once the ‘respond’ function is defined.

To serve the JSON response, the respond function will query the database and put the results into an
array object with key / value pairs. Once populated, the array object is sent as the response.

function respond(request, response, next) {
	response.header("content-type", "application/json");
	
	var data = [];
	sql.open(conn_str, function (err, conn) {
	    if (err) {
		response.send("cannot open connection!");
		return;
	    }

	    conn.queryRaw("SELECT TOP 10 FirstName, LastName FROM Person.Contact", function (err, results) {
		if (err) {
		    console.log("query failed!");
		    response.send("query failed!");
		    return;
		}
		for (var i = 0; i < results.rows.length; i++) {
			data.push({
			    firstname: results.rows[i][0], 
			    lastname: results.rows[i][1]
			});
		}
		
		response.send(data);
	    });
	});
}

It is important that you set the content-type to application/json; note that restify’s response.header
takes the header name and value as two separate arguments. This may seem obvious, but it is worth noting
in order to prevent a few moments of head scratching :).
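The loop inside respond that turns raw rows into objects can be pulled out into a small pure helper, which makes that part easy to unit test without a database connection (rowsToPeople is a name of my own, not part of the driver):

```javascript
// Convert queryRaw-style rows (arrays of column values) into objects
// with firstname / lastname keys, mirroring the loop in respond.
function rowsToPeople(rows) {
    var data = [];
    for (var i = 0; i < rows.length; i++) {
        data.push({
            firstname: rows[i][0],
            lastname: rows[i][1]
        });
    }
    return data;
}

// Example with hard-coded rows standing in for a query result:
var people = rowsToPeople([["Ada", "Lovelace"], ["Alan", "Turing"]]);
// people[0] is { firstname: "Ada", lastname: "Lovelace" }
```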

We now have Node ready to serve JSON responses with data from a MS SQL Server. The complete script
should look similar to the following:

var restify = require('restify');
var server = restify.createServer();
var sql = require('node-sqlserver');
var connection_string = "Driver={SQL Server Native Client 10.0};Server=YOUR_SERVER;Database=YOUR_DB;uid=YOUR_USER;pwd=YOUR_PASSWORD";

function respond(request, response, next) {
	response.header("content-type", "application/json");
	
	var data = [];
	sql.open(connection_string, function (err, conn) {
	    if (err) {
		data = "cannot open connection!";
		return;
	    }

	    conn.queryRaw("SELECT TOP 10 FirstName, LastName FROM Person.Contact", function (err, results) {
		if (err) {
		    console.log("query failed!");
		    return;
		}
		for (var i = 0; i < results.rows.length; i++) {
			data.push({
			    firstname: results.rows[i][0], 
			    lastname: results.rows[i][1]
			});
		}
		
		response.send(data);
	    });
	});
};

server.listen(8080, function () {
    console.log('listening at %s', server.url);
});

server.get('/people', respond);

If you have an AdventureWorks database installed, you will just need to update the connection string
for this demo. If not, be sure to update the connection string as well as the actual query.

To run the server, use the following command at the command prompt in the folder where your script
is located.

node server.js

To view the data, you can browse to http://localhost:8080/people. I recommend using either Chrome or Firefox
to do so. If you use Chrome, I highly recommend the JSONView extension as it will present the data in a more readable fashion.

September 5, 2012

SQL Server driver for Node.js

Node.js is an intriguing technology that has garnered a lot of attention since it was created by
Ryan Dahl in 2009. Built around Google’s V8 JavaScript engine, it provides programmers with the
ability to write programs written in JavaScript using event driven, asynchronous I/O to minimize
overhead.

Support for Node.js on Windows platforms was initially non-existent, unless you consider running it
in Cygwin an acceptable definition of support. Over the past several months, things have come a
long way on that front, with a native installer, Windows Azure support and documentation, and even
a Microsoft SQL Server driver.

The Microsoft SQL Server driver was particularly interesting, so I thought that I would write a
quick tutorial on how to quickly install it and get it running. If you do not have Node.js installed
yet, grab the installer from their official website.

nodejs.org/download

Once installed, create a folder for your project called nodesql. Inside the folder, create an empty
text file called server.js. To ensure everything is working, add the obligatory ‘Hello World’ web
server code.

var http = require('http');
http.createServer(function (request, response) {
  response.writeHead(200, {'Content-Type': 'text/plain'});
  response.end('Hello World\n');
}).listen(8080);
console.log('Server running on port 8080');

To run the server, navigate to the nodesql folder in the command shell (cmd.exe) and type the following:

node server.js

You should see “Server running on port 8080” in the console window. Once verified, you can stop the
server with ctrl-c, but keep the console open for the next step.

To install the SQL Server driver, navigate to the nodesql folder (if you are not still there) and
issue the following command:

npm install node-sqlserver

This will create a folder called node_modules, download the source for the module and compile it.
Once complete, copy sqlserver.node from the node_modules/node-sqlserver/build/Release folder to the
node_modules/node-sqlserver/lib folder. This is an important step that will ensure that the module
can be found when running the server.

For this test, I created a server that queries the database and lists the results in a simple
web page. Note that the query is run as soon as the server is started. This isn’t really practical,
but it serves the purpose of this demonstration.

var http = require('http');
var sql = require('node-sqlserver');
var connection_string = "Driver={SQL Server Native Client 10.0};Server=YOUR_SERVER;Database=YOUR_DB;uid=YOUR_USER;pwd=YOUR_PASSWORD";

var data = "";
sql.open(connection_string, function (err, connection) {
    if (err) {
	data = "cannot open connection!";
	return;
    }
    connection.queryRaw("SELECT TOP 10 Name, EmailAddress FROM Customer", function (err, results) {
	if (err) {
	    data = "query failed!";
	    return;
	}
	for (var i = 0; i < results.rows.length; i++) {
	    data += "Name: " + results.rows[i][0] + " Email: " + results.rows[i][1] + "<br />";
	}
    });
});

http.createServer(function (req, res) {
	req.on('end', function() {
		res.writeHead(200, { 
		 'Content-Type': 'text/html'
		});
		res.end(data);
	});
}).listen(8080);
console.log('Server running on port 8080');

You will of course want to update the connection string with your information and update the query.

August 27, 2012

ASP.NET MVC3 with Razor view engine using Monodevelop and Ubuntu 12.04

Developing .NET web applications in a Linux environment has been somewhat of a personal curiosity for quite some time. I have Ubuntu 12.04 installed in a dual boot configuration and every once in a while I get an urge to actually boot Ubuntu and tinker with Monodevelop to see how far along the project has come. Since most of my time is spent developing ASP.NET MVC3 applications, I decided to see if it was possible to get a simple application running using the Razor view engine.

The last time I attempted this (over a year ago), it turned out to be more of a pain getting a web server configured to run ASP.NET applications than it was worth. The experience this time was much better as Monodevelop has xsp4 integrated out of the box for serving the web pages. xsp4 is a minimalistic web server which hosts the ASP.NET runtime.

To be honest, I was hoping that by now Monodevelop would have support for the Razor view engine and it would just work ‘out of the box’. This of course was just wishful thinking. However, the actual process to get it working isn’t a deal breaker anymore; especially after you have done it once.

To begin with, you will want to get the latest version of Monodevelop. The Ubuntu repository is a bit behind, so using a ppa is your best bet:

sudo add-apt-repository ppa:keks9n/monodevelop-latest
sudo apt-get update
sudo apt-get install monodevelop

This process will take a few minutes depending on your connection speed. Once installed, launch Monodevelop and click Start New Solution. Select ASP.NET MVC Project from the C# section and give your project a name and location.

Once the project is created, compile it and optionally run it. Compiling the project will create a bin folder with the relevant assemblies. If your project will not compile and the message refers to a .NET 3.5 compiler not being found, be sure to change your build to use .NET 4.0. Right click on your project (not the solution) and select options. Under build, click on ‘General’ and select ‘Mono/.NET 4.0’.

In the bin folder, there will be a System.Web.Mvc.dll. This is an ‘old’ version and will be replaced. The first thing to do at this point is remove that reference from your project. In the solution explorer of Monodevelop, expand the references, right-click System.Web.Mvc and delete it.

The next steps require that you have some assemblies from a Windows compiled MVC 3 project. If you don’t have access to a Windows machine, just google for them. The assemblies that you will want to copy over to the bin folder are as follows:

  • System.Web.Helpers.dll
  • System.Web.Mvc.dll
  • System.Web.Razor.dll
  • System.Web.WebPages.dll
  • System.Web.WebPages.Deployment.dll
  • System.Web.WebPages.Razor.dll

Once you have copied them to the bin folder, add them as references to your project. To do this, right click ‘references’ in the solution explorer and select ‘edit references’. Click the .Net Assembly tab and double click the bin folder. Control-click all of the above dll’s and then click the ‘add’ button.

Almost there. The project that was created by Monodevelop uses aspx pages. You will need to configure the project to use the Razor view engine. You can manually edit the Web.config to include Razor, or be lazy like I was and just copy everything from a project created in Visual Studio. Here’s a complete Web.config that you can copy and paste:

    <?xml version="1.0"?>
    
    <configuration>
      <configSections>
        <sectionGroup name="system.web.webPages.razor" type="System.Web.WebPages.Razor.Configuration.RazorWebSectionGroup, System.Web.WebPages.Razor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
          <section name="host" type="System.Web.WebPages.Razor.Configuration.HostSection, System.Web.WebPages.Razor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" />
          <section name="pages" type="System.Web.WebPages.Razor.Configuration.RazorPagesSection, System.Web.WebPages.Razor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" />
        </sectionGroup>
      </configSections>
    
      <system.web.webPages.razor>
        <host factoryType="System.Web.Mvc.MvcWebRazorHostFactory, System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
        <pages pageBaseType="System.Web.Mvc.WebViewPage">
          <namespaces>
            <add namespace="System.Web.Mvc" />
            <add namespace="System.Web.Mvc.Ajax" />
            <add namespace="System.Web.Mvc.Html" />
            <add namespace="System.Web.Routing" />
          </namespaces>
        </pages>
      </system.web.webPages.razor>
    
      <appSettings>
        <add key="webpages:Enabled" value="false" />
      </appSettings>
    
      <system.web>
        <httpHandlers>
          <add path="*" verb="*" type="System.Web.HttpNotFoundHandler"/>
        </httpHandlers>
    
        <!--
            Enabling request validation in view pages would cause validation to occur
            after the input has already been processed by the controller. By default
            MVC performs request validation before a controller processes the input.
            To change this behavior apply the ValidateInputAttribute to a
            controller or action.
        -->
        <pages
            validateRequest="false"
            pageParserFilterType="System.Web.Mvc.ViewTypeParserFilter, System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"
            pageBaseType="System.Web.Mvc.ViewPage, System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"
            userControlBaseType="System.Web.Mvc.ViewUserControl, System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35">
          <controls>
            <add assembly="System.Web.Mvc, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" namespace="System.Web.Mvc" tagPrefix="mvc" />
          </controls>
        </pages>
      </system.web>
    
      <system.webServer>
        <validation validateIntegratedModeConfiguration="false" />
    
        <handlers>
          <remove name="BlockViewHandler"/>
          <add name="BlockViewHandler" path="*" verb="*" preCondition="integratedMode" type="System.Web.HttpNotFoundHandler" />
        </handlers>
      </system.webServer>
    </configuration>
    

It’s probably a good idea to copy Global.asax.cs for updated route configuration. This is the step where you will be glad to have copied System.Web.Mvc from a Windows build. Without it, you will not be able to compile the project because GlobalFilterCollection will not exist. Replace the complete class with the following:

        public class MvcApplication : System.Web.HttpApplication
        {
            public static void RegisterGlobalFilters(GlobalFilterCollection filters)
            {
                filters.Add(new HandleErrorAttribute());
            }
    
            public static void RegisterRoutes(RouteCollection routes)
            {
                routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
    
                routes.MapRoute(
                    "Default", // Route name
                    "{controller}/{action}/{id}", // URL with parameters
                    new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults
                );
    
            }
    
            protected void Application_Start()
            {
                AreaRegistration.RegisterAllAreas();
    
                RegisterGlobalFilters(GlobalFilters.Filters);
                RegisterRoutes(RouteTable.Routes);
            }
        }
    

Finally, rename ~/Views/Home/Index.aspx to Index.cshtml and then replace its contents with the following:

    @{
        ViewBag.Title = "Home Page";
    }
    
    <h2>@ViewBag.Message</h2>
    <p>
        To learn more about ASP.NET MVC visit <a href="http://asp.net/mvc" title="ASP.NET MVC Website">http://asp.net/mvc</a>
    </p>
    

If all went well, you can compile and run your project by hitting F5.

August 21, 2012

Introduction to data structures and algorithms in javascript: Binary Search Tree

This is a continuation of a series introducing data structures in javascript. A binary tree is a
tree-based data structure in which each node has at most two child nodes, typically referred to as
the left and right nodes. Nodes with children are called parent nodes. The very first node in the
tree is referred to as the root node.

A binary search tree is a binary tree that is organized with the following properties:

– The left subtree of a node contains only nodes with keys that are less than the node’s key.
– The right subtree of a node contains only nodes with keys that are greater than the node’s key.
– Both the left and right subtrees must also be binary search trees.
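These properties can be checked mechanically. A small recursive validator (a helper of my own, not part of this article’s tree implementation) makes the invariant concrete, using the same { data, left, right } node shape the article uses:

```javascript
// Returns true if every node's data falls strictly between min and max,
// narrowing the bounds as we descend - exactly the BST property.
function isBinarySearchTree(node, min, max) {
    if (node === null) {
        return true;
    }
    if ((min !== undefined && node.data <= min) ||
        (max !== undefined && node.data >= max)) {
        return false;
    }
    return isBinarySearchTree(node.left, min, node.data) &&
           isBinarySearchTree(node.right, node.data, max);
}

var valid = { data: 17, left:  { data: 11, left: null, right: null },
                        right: { data: 43, left: null, right: null } };
var invalid = { data: 17, left: { data: 20, left: null, right: null },
                          right: null };
// isBinarySearchTree(valid) -> true, isBinarySearchTree(invalid) -> false
```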

With the data inserted into the tree in this manner, searching becomes more efficient than within an
array because traversal of the data structure can logically exclude elements from comparison.
Traversing the structure from the root node, a greater-than or less-than check eliminates half of the
remaining data at each step, assuming that the tree is perfectly balanced.
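A search routine makes that halving concrete. The article’s implementation doesn’t include one, so here is a sketch of what it could look like (the contains name is my own), bundled with a minimal constructor so it runs on its own:

```javascript
function BinarySearchTree() {
    this.root = null;
}

// A possible search method: walk left or right from the root,
// discarding the other subtree at every comparison.
BinarySearchTree.prototype.contains = function (data) {
    var currentNode = this.root;
    while (currentNode !== null) {
        if (data < currentNode.data) {
            currentNode = currentNode.left;
        } else if (data > currentNode.data) {
            currentNode = currentNode.right;
        } else {
            return true;
        }
    }
    return false;
};

// Hand-built tree for demonstration:
var bst = new BinarySearchTree();
bst.root = { data: 17,
             left:  { data: 11, left: null, right: null },
             right: { data: 43, left: null, right: null } };
// bst.contains(11) -> true, bst.contains(50) -> false
```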

Each node in a binary search tree is similar to a node in a doubly linked list in that it contains
some data as well as two pointers to other nodes. The key difference is how those pointers are used:
they reference the node’s left and right children rather than the previous and next elements in a
sequence.

A javascript implementation of such a node would look like the following:

    var node = {
    	data: 17,
    	left: null,
    	right: null
    }
    

The first step in building a binary search tree implementation is to define a custom type with a
single property that represents the root node.

    function BinarySearchTree() {
    	this.root = null;
    }
    

To insert a value into the tree you must traverse the tree using the rules defined above. The one
special case is when no root node exists, in which case the inserted node becomes the root.

    BinarySearchTree.prototype = {
        insert: function(data){
            var node = {
                    data: data,
                    left: null,
                    right: null
                };
    
            var currentNode;
    
            if (this.root === null){
                this.root = node;
            } else {
                currentNode = this.root;
    
                while(true){
                    if (data < currentNode.data){
                        if (currentNode.left === null){
                            currentNode.left = node;
                            break;
                        } else {
                            currentNode = currentNode.left;
                        }
                    } else if (data > currentNode.data){
                        if (currentNode.right === null){
                            currentNode.right = node;
                            break;
                        } else {
                            currentNode = currentNode.right;
                        }
                    } else {
                        break;
                    }
                }
            }
        },
    };
    

Removal of a node from a binary search tree can be a complex operation because the ordering property
must be preserved: after the removal, every value in a node’s left subtree must still be less than
the node’s value, and every value in its right subtree greater. There are two special cases to
consider when removing a node: the existence of the node must be checked, and it must be determined
whether the node to be removed is the root node.

When removing a node, the number of children of that node must be taken into consideration, since
the operation differs slightly depending on the count. Removing a node with two children is the
most complex case.

    BinarySearchTree.prototype = {
        remove: function(data){
    
            var found = false;
            var parentNode = null;
            var currentNode = this.root;
            var childCount;
            var replacementNode;
            var replacementParent = null;
                
            while(!found && currentNode){
                if (data < currentNode.data){
                    parentNode = currentNode;
                    currentNode = currentNode.left;
                } else if (data > currentNode.data){
                    parentNode = currentNode;
                    currentNode = currentNode.right;
                } else {
                    found = true;
                }
            }         
    
            if (found){
                childCount = (currentNode.left !== null ? 1 : 0) +
                             (currentNode.right !== null ? 1 : 0);
    
                if (currentNode === this.root){
                    switch(childCount){
                        case 0:
                            this.root = null;
                            break;
                        case 1:
                            this.root = (currentNode.right === null ? 
                                          currentNode.left : currentNode.right);
                            break;
                        case 2:
                            replacementNode = this.root.left;
    
                            while (replacementNode.right !== null){
                                replacementParent = replacementNode;
                                replacementNode = replacementNode.right;
                            }
    
                            if (replacementParent !== null){
                                replacementParent.right = replacementNode.left;
    
                                replacementNode.right = this.root.right;
                                replacementNode.left = this.root.left;
                            } else {
                                replacementNode.right = this.root.right;
                            }
    
                            this.root = replacementNode;
                    }        
                } else {
                    switch (childCount){
                        case 0:
                            if (currentNode.data < parentNode.data){
                                parentNode.left = null;
                            } else {
                                parentNode.right = null;
                            }
                            break;
                        case 1:
                            if (currentNode.data < parentNode.data){
                                parentNode.left = (currentNode.left === null ? 
                                               currentNode.right : currentNode.left);
                            } else {
                                parentNode.right = (currentNode.left === null ? 
                                                currentNode.right : currentNode.left);
                            }
                            break;
                        case 2:
                            replacementNode = currentNode.left;
                            replacementParent = currentNode;
    
                            while(replacementNode.right !== null){
                                replacementParent = replacementNode;
                                replacementNode = replacementNode.right;
                            }
    
                            replacementParent.right = replacementNode.left;
    
                            replacementNode.right = currentNode.right;
                            replacementNode.left = currentNode.left;
    
                            if (currentNode.data < parentNode.data){
                                parentNode.left = replacementNode;
                            } else {
                                parentNode.right = replacementNode;
                            } 
                    }
                }
            }
        },
    };
    

A generic method to traverse the tree is useful to have for cases where you may want to convert the
values in the tree to an array or a string.

    BinarySearchTree.prototype = {
        traverse: function(evaluate){
            function iterate(node){
                if (node){
                    if (node.left !== null){
                        iterate(node.left);
                    }            
    
                    evaluate.call(this, node);
    
                    if (node.right !== null){
                        iterate(node.right);
                    }
                }
            }
            iterate(this.root);
        },
        
        toArray: function(){
            var result = [];
    
            this.traverse(function(node){
                result.push(node.data);
            });
    
            return result;
        },
    
        toString: function(){
            return this.toArray().toString();
        },    
    };
    

Below are some simple usage examples for this implementation of a binary search tree in javascript.
Note that the prototype snippets above would need to be merged into a single object literal, since
each assignment to BinarySearchTree.prototype replaces the one before it.

    var bst = new BinarySearchTree();
    bst.insert(17);
    bst.insert(11);
    bst.insert(43);
    bst.insert(9);
    bst.insert(65);
    bst.remove(43);
    
    document.writeln(bst.toString()); // prints 9,11,17,65
    
    