Version 2.1 – Data Quality Automation

The hero feature of Ajilius 2.1 is Data Quality Automation.

This is yet another unique feature brought to data warehouse automation by Ajilius.

In the 2.1 release, Ajilius adds three types of data quality screens to the extract process:

  • Data type validation, where values are tested for conformance to the column data type.
  • Range validation, where values are tested for set and range boundaries.
  • Regex validation, where values are tested against regular expressions.

In Version 2.3 (due September 2016) we will be adding Lookup validation to data quality rules, to check the existence of values in data warehouse tables.

Rows that fail validation are logged to an error file, along with the reason(s) for rejection.
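As a rough sketch of what a screen-and-reject pass like this might look like (the rule format, column names and reject-file layout here are illustrative only, not Ajilius internals):

    import csv
    import re
    from datetime import datetime

    # Hypothetical screen definitions for one extract table. The rule format
    # and column names are illustrative, not Ajilius internals.
    SCREENS = {
        "order_qty":  {"type": int, "range": (0, 10000)},
        "order_date": {"type": lambda v: datetime.strptime(v, "%Y-%m-%d")},
        "email":      {"regex": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")},
    }

    def screen_row(row):
        """Return the reasons a row fails its screens (empty list if clean)."""
        reasons = []
        for column, rules in SCREENS.items():
            value = row.get(column, "")
            if "type" in rules:
                try:
                    rules["type"](value)
                except (ValueError, TypeError):
                    reasons.append(f"{column}: failed data type check")
            if "range" in rules:
                low, high = rules["range"]
                try:
                    if not (low <= float(value) <= high):
                        reasons.append(f"{column}: {value} outside [{low}, {high}]")
                except (ValueError, TypeError):
                    pass  # already reported by the data type screen
            if "regex" in rules and not rules["regex"].match(str(value)):
                reasons.append(f"{column}: does not match pattern")
        return reasons

    def screen_extract(rows, error_path="rejects.csv"):
        """Split rows into clean rows and rejects, logging rejects with reasons."""
        clean = []
        with open(error_path, "w", newline="") as err:
            writer = csv.writer(err)
            for row in rows:
                reasons = screen_row(row)
                if reasons:
                    writer.writerow(list(row.values()) + ["; ".join(reasons)])
                else:
                    clean.append(row)
        return clean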

A new return code from the extract job signals that validation errors have occurred, enabling the scheduler to continue the batch or to suspend it pending user remediation of the errors.
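And a sketch of how a scheduler wrapper might act on that return code; the exit-code values, script name and policy flag here are assumptions for the example, not the actual Ajilius contract:

    import subprocess
    import sys

    # Assumed exit codes; the actual values returned by the Ajilius extract
    # job are not documented here.
    EXIT_OK = 0
    EXIT_QUALITY_ERRORS = 2    # extract completed, but some rows were rejected

    STOP_ON_QUALITY_ERRORS = True   # site policy: suspend the batch, or press on

    result = subprocess.run([sys.executable, "run_extract.py", "--source", "sales"])

    if result.returncode == EXIT_QUALITY_ERRORS and STOP_ON_QUALITY_ERRORS:
        raise SystemExit("Batch suspended pending remediation of rejected rows")
    elif result.returncode not in (EXIT_OK, EXIT_QUALITY_ERRORS):
        raise SystemExit(f"Extract failed with return code {result.returncode}")
    # otherwise continue with the rest of the batch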

And once again, we’re adding this as a standard feature of the Ajilius platform. If you’re licensed for Ajilius, upgrade to the latest version and you can immediately identify and screen data quality problems before they hit your data warehouse.

Ajilius. Committed to innovation in data warehouse automation.

Version 2.1 – Pivotal Greenplum

Ajilius is pleased to announce full support for Pivotal Greenplum in Version 2.1, available now.

Greenplum is an open source MPP data warehouse, available on-premise and cloud. Based on an earlier version of PostgreSQL, Greenplum will shortly be upgraded to the latest PostgreSQL code base for even faster loads and transformations.

The advantage of Greenplum is that, being open source, it gives anyone the opportunity to use a real MPP data warehouse platform. All you need is hardware, or a cloud instance, with realisable savings of hundreds of thousands of dollars over commercial offerings.

You now have a great, free scalability path when your workload or data grows beyond PostgreSQL’s capabilities. With the unique “3 click migration” feature of Ajilius, you can move your entire data warehouse at any time with just a few clicks of the mouse.

Ajilius. Keeping the value in data warehouse automation.

Surrogate issues with Azure SQL DW

2017-02-14: Ajilius has a new CTAS engine in Release 2.4.0 that fully supports optimised surrogate keys across both PDW and Azure SQL Data Warehouse. We’d still like to see an IDENTITY column, or equivalent, on these platforms, but we’re processing hundreds of millions of rows using our current techniques and we’re satisfied with our solution.

Surrogate keys are fundamental to the success of a dimensional data warehouse. These are the keys that uniquely identify a dimension row. They are typically integer values, because they compress and compare at high performance.

We’ve been using window functions to handle surrogate key generation in Azure SQL Data Warehouse. This was the recommended approach on PDW, then APS, and has now been well documented in a recent paper from the SQL Server Customer Advisory Team.
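For readers who haven’t seen the pattern, it looks roughly like the following; the table and column names are illustrative, and this is a generic sketch rather than the SQL Ajilius generates:

    # The generic ROW_NUMBER surrogate-assignment pattern: take the current
    # maximum key from the dimension, then number the new rows on top of it.
    # Table and column names are illustrative.
    ASSIGN_SURROGATES = """
    INSERT INTO dim_customer (customer_key, customer_bk, customer_name)
    SELECT
        COALESCE(mx.max_key, 0) + ROW_NUMBER() OVER (ORDER BY s.customer_bk),
        s.customer_bk,
        s.customer_name
    FROM stage_customer AS s
    CROSS JOIN (SELECT MAX(customer_key) AS max_key FROM dim_customer) AS mx
    LEFT JOIN dim_customer AS d
           ON d.customer_bk = s.customer_bk
    WHERE d.customer_bk IS NULL;
    """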

On reading this paper, I was a little concerned by the following comment:

NOTE: In SQL DW or (APS), the row_number function generally invokes broadcast data movement operations for dimension tables. This data movement cost is very high in SQL DW. For smaller increment of data, assigning surrogate key this way may work fine but for historical and large data loads this process may take a very long time. In some cases, it may not work due to tempdb size limitations. Our advice is to run the row_number function in smaller chunks of data.

I wasn’t so worried about the performance issue in daily DW processing, but the tempdb issue had not occurred to me before. Is it serious? Maybe, maybe not. But having been identified as an issue, we need to do something about it.

We’re working with another vendor at the moment – not named due to NDA constraints – where we also face a restriction that the working set for window functions needs to fit in one node. That, too, is a potential problem when loading large and frequently changing dimensions.

In other words, the commonly recommended approach for surrogate key generation on at least two DW platforms introduces potential problems in larger data sets, which are exactly the data sets we work with. It is time to look at alternative approaches.

We don’t face this problem on Redshift or Snowflake, because they both support automatically generated identifiers. Redshift uses syntax like ‘integer identity(0,1) primary key’, while Snowflake uses ‘integer autoincrement’. The two platforms we’re adding in the immediately upcoming releases of Ajilius also support this feature.
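For comparison, here is roughly how those automatic identifiers look in a minimal dimension table definition; the surrounding columns are illustrative:

    # Minimal dimension DDL using the automatic identifier syntax quoted above.
    # Column names other than the key are illustrative.
    REDSHIFT_DDL = """
    CREATE TABLE dim_customer (
        customer_key  INTEGER IDENTITY(0,1) PRIMARY KEY,
        customer_bk   VARCHAR(50),
        customer_name VARCHAR(200)
    );
    """

    SNOWFLAKE_DDL = """
    CREATE TABLE dim_customer (
        customer_key  INTEGER AUTOINCREMENT,
        customer_bk   VARCHAR(50),
        customer_name VARCHAR(200)
    );
    """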

If Microsoft did what customers have been asking since the first releases of PDW, they’d give us identity or sequence columns in Azure SQL Data Warehouse. But since that isn’t happening right now, we’re looking at two options to replace the window method of creating surrogate keys. The first is to create row IDs early in the extract pipeline; the second is to use hash values, generated at extract time or at the point where the surrogate is required.

Row IDs are attractive in a serial pipeline, but they have limitations when we want to run multiple extracts or streams in parallel, because we face overlapping IDs in a merged data set. The benefit of deriving surrogate keys from row IDs is that we would retain the advantages of an integer value.

Hash values are attractive because they can be generated in a highly parallel way. Their weaknesses are their size and poor comparison performance, along with the risk of hash collisions, which could create the same surrogate value for different business keys.
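A minimal sketch of the hash option, assuming an MD5 digest truncated to 64 bits; this is an illustration of the trade-off, not a decision we have made:

    import hashlib

    def hash_surrogate(business_key: str) -> int:
        """Derive a surrogate key by hashing the business key.

        Generation is embarrassingly parallel (no coordination between
        streams), but the value is wider than a sequential integer and two
        different business keys can, rarely, collide on the same surrogate.
        """
        digest = hashlib.md5(business_key.encode("utf-8")).digest()
        return int.from_bytes(digest[:8], "big")   # truncate to a 64-bit key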

We’re just wrapping up the testing for V2.1; resolving this question will be high on the priority list for our next release. Let us know your preferences and suggestions.

Data quality performance

We’re introducing data quality screens in V2.1, to be released at the pgDayAsia conference in March. In this release, data quality screens implement data type, range and expression testing on selected tables and columns.

The last week has been spent running performance tests, to identify the overhead added by these screens.

On average, we currently process 500,000 values per second.

For example, if your daily fact extract has 10 million rows, and each row comprises 20 columns, you have a total of 200 million values to screen. At 500,000 values per second, a full data quality screen of every column would add a little under seven minutes to your batch.
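The arithmetic behind that estimate:

    rows = 10_000_000            # daily fact extract
    columns = 20                 # columns screened per row
    values_per_second = 500_000  # current screening throughput

    total_values = rows * columns                 # 200,000,000 values
    seconds = total_values / values_per_second    # 400 seconds
    print(f"{seconds / 60:.1f} minutes added")    # ~6.7 minutes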

To put that number in scale, 10 million is roughly the number of bets placed at the TAB on Melbourne Cup day. Or around the number of sales transactions done by a major department store chain in one week. Or the total number of motor vehicles sold in one year, world-wide, by Toyota Motor Company. In other words, 10 million rows is a LOT of data, and we’re going to completely validate it with full data screening, guaranteeing that no bad data from any row or column in your extract is loaded, in less than 10 minutes.


That validation rate already deserves a “Wow!”, but we are working to make it even faster by release day.

Ajilius. Putting the quality in data warehouse automation.

Ajilius loves Snowflake

Along with Amazon Redshift and Azure SQL Data Warehouse, Ajilius does cloud data warehouse automation for Snowflake Elastic Data Warehouse.

We don’t just support Snowflake; in a short space of time it has become a favourite cloud data warehouse platform. We had a great time working with the Snowflake team during the development of their Python adapter.

Here are some of the things that we love about Snowflake.


Our Snowflake development and demonstration platform costs around AUD300 per month, even on a lousy exchange rate. We pay a monthly storage cost, then pay for just the processing we need, when we need it.

For the smallest development machines Redshift may be slightly cheaper, but once you scale to multi-terabyte production workloads the advantage shifts to Snowflake unless you are prepared to commit to three-year reserved instances.

Given the rate of change in the cloud data warehouse market, we believe that long-term commitments are not in the interests of most customers, and Snowflake has a price/performance advantage.

Microsoft Azure SQL Data Warehouse is still in preview, and we can’t comment on comparative pricing at this time.


Scaling our Snowflake platform takes just seconds. In comparison, we’ve seen cluster resizing on Amazon take many minutes, and we’ve seen it take even longer at customer sites.

Snowflake instances can scale from 1 to 128 8-core nodes. That is a huge amount of compute power, making Snowflake suitable for workloads of any size. At the lower end, we see Snowflake as an ideal platform for mid-market customers, as its entry point and pricing model are so flexible.

We do a lot of Ajilius work without incurring any processing costs. This is because DDL operations are performed on the database, not the warehouse (see Features), and we don’t need to start a warehouse until we start actually loading, selecting or modifying data. The majority of our development and test work is done on a single node, with occasional scaling for performance tests.


I’ve never had better support from a data warehouse company, especially when we were not known to the vendor, and not spending huge amounts of money. From sales, to pre-sales, to support, and even right into engineering, we’ve had amazing engagement from every level of the company.

Snowflake people respond to emails, pick up the phone, and deal with support requests quickly. We’ve never waited more than a couple of hours for a response to an issue, and that response has always been highly relevant, never of the “have you unplugged and plugged in the keyboard” variety.

The Snowflake team is knowledgeable, enthusiastic, and committed to success.


One intriguing feature of Snowflake is its avoidance of distribution keys, partitions, etc., in the database. This avoids one of the big design challenges present in both Redshift and Azure, where the wrong distribution method can really damage your performance. One day I’ll have a beer with Snowflake’s designers and figure out how this works, but for now, all I know is that it works well.

Better described as “quirky” is Snowflake’s terminology of database and warehouse. A “database” is a collection of schemas and data. A “warehouse” is the compute configuration that works on databases; to me, a name like “server” would have been clearer. A powerful feature is the ability for multiple “warehouses” to act on a “database”, each with different configuration settings. For example, an ETL warehouse might run at very high scale to compress the ELT window, while a Browse warehouse might run for a long time at low scale, refreshing data used by BI tools like Tableau and Qlik Sense.
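As a rough illustration of that separation, using the Python adapter mentioned earlier; the account, credentials, warehouse names and sizes are all made up for the example:

    import snowflake.connector

    # Two compute "warehouses" working on the same "database": a big one for
    # the ELT window, a small one for BI refreshes. All names, credentials and
    # sizes below are examples only.
    conn = snowflake.connector.connect(
        account="xy12345", user="ajilius", password="********", database="DW_DEMO"
    )
    cur = conn.cursor()

    cur.execute("CREATE WAREHOUSE IF NOT EXISTS ETL_WH "
                "WAREHOUSE_SIZE = 'X-LARGE' AUTO_SUSPEND = 60")
    cur.execute("CREATE WAREHOUSE IF NOT EXISTS BROWSE_WH "
                "WAREHOUSE_SIZE = 'X-SMALL' AUTO_SUSPEND = 600")

    cur.execute("USE WAREHOUSE ETL_WH")     # heavy transforms run at high scale
    cur.execute("INSERT INTO dim_customer SELECT * FROM stage_customer")

    cur.execute("USE WAREHOUSE BROWSE_WH")  # BI tools refresh against the small one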

Another feature we love is the Snowflake administrative console, where we can not only administer databases, warehouses and users, but also review performance history and execute ad-hoc queries. The user interface for the console is a work of art; it is the first cloud-based data warehouse where I’ve not felt the need to find another administration tool.

What’s Missing?

Not much.

All the basic data types are there, all the basic SQL statements are there, you get JDBC, ODBC and Python interfaces, and the documentation is excellent. There could be a few more examples in the documentation for some of the more obscure features of the product, but it is being updated on a frequent basis.

Regarding data types, I’ve always been puzzled why data warehouse vendors avoid geospatial data. After all, map-based data is a major feature of the current generation of visualisation tools, but it is lacking from most cloud data warehouse platforms. I’d like to think Snowflake will get around to this feature soon.

If I were being picky, I’d also call out the absence of a TIME data type. We work around it by using date/time functions to extract the time portion of timestamp fields into text fields, but a native TIME type would be helpful.
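The workaround looks roughly like this; the table and column names are illustrative:

    # Carve the time-of-day out of a TIMESTAMP column into text, in place of
    # a native TIME type. Table and column names are illustrative.
    TIME_OF_DAY_SQL = """
    SELECT
        order_ts,
        TO_CHAR(order_ts, 'HH24:MI:SS') AS order_time_text
    FROM stage_orders;
    """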

The only real pain we experience is that Snowflake is currently restricted to Amazon US data centres. That has no impact on warehouse performance, but our connection times and data transfer rates are a little slower than I’d like. We can co-locate Ajilius instances in their AWS data centre for fast Snowflake connections, but if your data is on-premise in Australia, you’re going to incur a penalty if you’re moving terabytes into Snowflake. I’m assured that data centres in other parts of the world are on their way.


Try it. You’ll like it.

Ajilius makes it easy to build Snowflake data warehouses from your on-premise and cloud data. Let us know if you’d like a deeper discussion of Snowflake and Ajilius.

One year of Ajilius

Ajilius is now one year old.

Just over 12 months ago, we announced a new data warehouse automation platform, designed for a modern data warehouse workload.

We delivered all our objectives for complete Kimball support, on-premise and cloud, three click migration between databases, and full cross-platform portability.

We published a four-release roadmap for V1, and we’ve met its quarterly delivery schedules.

We’re not fully profitable, but we now have enough customers that we’re sustainable and growing.

This year we’re embarking on an ambitious roadmap for V2, with a focus on Data Quality, Data Profiling and Data Discovery. Again, we have defined a quarterly release schedule, and V2.1 will be delivered in March (we’re in beta already).

We’re also stepping up our marketing this year, with our first conference attendance being pgDayAsia, in Singapore, from March 17-19. We’ll be speaking on Data Warehousing and PostgreSQL, as well as using the occasion to showcase Ajilius 2.1.

Here’s to a great 2016 in data warehousing!

IBM DB2: Back to DSN

The devs told me that getting DB2 and Informix drivers to work took a bit of fiddling. That was the understatement of 2015. The driver setup experience is so bad that we can’t include it in the Ajilius installer.

On every platform we needed to manually copy files around, adjust environment variables, sometimes patch libraries, and often it just didn’t work. We tried the Python ibm_db adapter, we tried IBM’s ODBC / CLI adapter, and experienced nothing but pain.

As a result, we’re dumping the work we’ve done on native adapters, and reverting to the use of ODBC DSNs for data sourcing against DB2 and Informix.

To use DB2 as an Ajilius data source, have your IT department deploy the appropriate ODBC driver on your Ajilius server, then create a DSN for your data source. Select “ODBC DSN” when creating your data source in Ajilius, then enter the DSN name in the Database field.
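For example, here is a minimal sketch of what reading through such a DSN looks like from Python; pyodbc, the DSN name, credentials and query are placeholders, not the Ajilius extract code itself:

    import pyodbc

    # Connect through the DSN configured by your IT department. The DSN name,
    # user, password and query below are placeholders.
    conn = pyodbc.connect("DSN=DB2PROD;UID=extract_user;PWD=secret")
    cursor = conn.cursor()

    cursor.execute("SELECT * FROM sales.orders")   # illustrative source query
    for row in cursor.fetchmany(10):
        print(row)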

It is sad that IBM’s quality has sunk so far. I started my IT career on IBM System/34 computers well over 30 years ago, and at various times worked on System/38 and AS/400. I used one of the first DB2 mainframe installations in Australia, followed by the first OS2/EE implementation of DB2, before using DB2 on Windows as the core of a successful ISV product. Later, I ran a DBA team that included DB2 on mainframe, Linux and Windows in its portfolio.

You couldn’t call me a DB2 hater with that background, but the current connectivity options are rubbish.

DB2 is not a bad DBMS, but what good is a DBMS without great connectivity? I’d struggle to recommend it to anyone based on my most recent exposure, and I’d definitely not recommend it to any Python developer.

IBM Bluemix

During the past week we have been testing the new data source adapters for DB2 and Informix. This time we’re using IBM’s Bluemix cloud to host our test databases.

The initial experience with Bluemix is awful: a bizarre labyrinth of errors about missing spaces and empty containers, all solved when you finally realise that the service you want to provision is only available in some regions, and your default is not one of them.

Depending on the region you have chosen, there are many supported databases including variants of Informix, DB2 and Netezza, as well as a variety of open source, big data and NoSQL products.

Once you’re up and running, the actual database experience is quite good. I like the data load feature, which quickly helps you to move test data into the database. The help around connectivity – CLI, ODBC and JDBC – is also good, with all the connection information clearly presented for each of the options.

The free database allowance for SQL DB (the old DB2 LUW) is quite generous, enough for us to complete all our testing. If you want more than the free tier supports, though, the next step jumps from zero to $500 per month. That is expensive compared to Azure SQL and AWS RDS databases.

dashDB (Netezza) and Time Series (Informix) start at around $55 per month, which is reasonable value compared to other vendors.

Our testing is focussed on connectivity and extracts, not in-database performance, so we can’t comment on how well the platform scales.

Well, time to get the testing finished, as this is the last task between us and Version 1.4.

MySQL, MariaDB and Aurora

This afternoon I signed off the enhanced data source adapters for MySQL, MariaDB and Amazon’s new Aurora database.

These adapters are compatible with both on-premise and cloud-hosted databases, with full Unicode support.

We’ve fully tested against the Employee, Sakila and Classic Models sample databases, so now it is time to see some customer databases being loaded.

That’s another big step on the path to Version 1.4, only the enhanced DB2 adapter remains in the queue.

Adwords puzzle

For 10 months we’ve been testing combinations of Google Adwords keywords to drive traffic to Ajilius. Over that time we’ve got a pretty good idea of which keywords work.

As we planned a new campaign for the New Year, we decided to turn off Google advertising for a couple of weeks, to reset our no-advertising baseline.

The expected result was that traffic would go down, and it has, but not quite as much as we expected. That suggests search results rather than advertising may be driving a higher volume of traffic than we thought.

What was unexpected was the change to Google search results.

When running an Adwords campaign, if we searched for one of our terms such as “data warehouse automation” we were usually on result page 2 or 3, with many competitor ads above us.

Now that we’re not running a campaign we’re in roughly the same search position, but there are far fewer competitor advertisements showing above us.

Could it be that Google artificially inserts advertisements above yours, pressuring you to increase your budget to move you up the page?

That theory smells of tin-foil hats, but it is something for us to watch over the next few weeks.