Data quality performance

We’re introducing data quality screens in V2.1, to be released at the pgDayAsia conference in March. In this release, data quality screens implement data type, range and expression testing on selected tables and columns.
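To make the three screen types concrete, here’s a minimal sketch in Python. This is our illustration of the idea, not the Ajilius implementation itself, and every name in it is invented:

```python
# Hypothetical sketch of the three screen types -- invented names,
# not the Ajilius implementation.
from datetime import date

def type_screen(value, expected_type):
    """Data type screen: the value must be coercible to the expected type."""
    try:
        expected_type(value)
        return True
    except (TypeError, ValueError):
        return False

def range_screen(value, low, high):
    """Range screen: the value must fall within [low, high]."""
    return low <= value <= high

def expression_screen(row, predicate):
    """Expression screen: an arbitrary predicate over the whole row."""
    return predicate(row)

# One row of a hypothetical fact extract.
row = {"sale_date": date(2016, 3, 1), "qty": 5, "unit_price": 19.95}

passed = (
    type_screen(row["qty"], int)
    and range_screen(row["unit_price"], 0.01, 10000.00)
    and expression_screen(row, lambda r: r["qty"] * r["unit_price"] > 0)
)
print("row passes" if passed else "row quarantined")
```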

The last week has been spent running performance tests, to identify the overhead added by these screens.

On average, we currently process 500,000 values per second.

For example, if your daily fact extract has 10 million rows, and each row comprises 20 columns, you have a total of 200 million values to screen. At 500,000 values per second, a full data quality screen of every column would add just under seven minutes to your batch.
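The arithmetic is easy to check:

```python
# Back-of-the-envelope check of the figures above.
rows = 10_000_000   # daily fact extract
cols = 20           # columns per row
rate = 500_000      # values screened per second

values = rows * cols            # 200,000,000 values
minutes = values / rate / 60    # ~6.7 minutes
print(f"{values:,} values -> {minutes:.1f} minutes")
```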

To put that number in scale, 10 million is roughly the number of bets placed at the TAB on Melbourne Cup day. Or around the number of sales transactions done by a major department store chain in one week. Or the total number of motor vehicles sold in one year, world-wide, by Toyota Motor Company. In other words, 10 million rows is a LOT of data, and with full data screening we’ll validate every row and column in your extract, guaranteeing that no bad data is loaded, in less than 10 minutes.

Wow!

And even though that deserved a “Wow!”, we’re working to make it faster still by release day.

Ajilius. Putting the quality in data warehouse automation.

Ajilius loves Snowflake

Along with Amazon Redshift and Azure SQL Data Warehouse, Ajilius does cloud data warehouse automation for Snowflake Elastic Data Warehouse.

We don’t just support Snowflake; in a short space of time it has become our favourite cloud data warehouse platform. We had a great time working with the Snowflake team during the development of their Python adapter.

Here are some of the things that we love about Snowflake.

Price

Our Snowflake development and demonstration platform costs around AUD300 per month, even on a lousy exchange rate. We pay a monthly storage charge, then pay for just the processing we need, when we need it.

For the smallest development machines, Redshift may be slightly cheaper, but once you scale to multi-terabyte production workloads the advantage shifts to Snowflake, unless you are prepared to commit to three-year reserved instances.

Given the rate of change in the cloud data warehouse market, we believe that long-term commitments are not in the interests of most customers, and Snowflake has a price/performance advantage.

Microsoft Azure SQL Data Warehouse is still in preview, and we can’t comment on comparative pricing at this time.

Scale

Scaling our Snowflake platform takes just seconds. In comparison, we’ve seen cluster resizing on Amazon take many minutes, and we’ve seen it take even longer at customer sites.

Snowflake instances can scale from 1 to 128 8-core nodes. That is a huge amount of compute power, making Snowflake suitable for workloads of any size. At the lower end, we see Snowflake as an ideal platform for mid-market customers, as its entry point and pricing model are so flexible.
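To give a feel for how simple scaling is, here’s roughly what a resize looks like through Snowflake’s Python connector. The credentials and warehouse name are placeholders; the ALTER WAREHOUSE statement is standard Snowflake syntax:

```python
import snowflake.connector

# Placeholder credentials and warehouse name.
con = snowflake.connector.connect(
    user="ajilius_dev", password="...", account="our_account"
)
cur = con.cursor()

# Resize in place; the new size applies to subsequent queries within seconds.
cur.execute("ALTER WAREHOUSE demo_wh SET WAREHOUSE_SIZE = 'XLARGE'")
```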

We do a lot of Ajilius work without incurring any processing costs. This is because DDL operations are performed on the database, not the warehouse (see Features), and we don’t need to start a warehouse until we actually begin loading, selecting or modifying data. The majority of our development and test work is done on a single node, with occasional scaling for performance tests.
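As a sketch of what that means in practice, everything below runs as metadata operations until the final statement. Credentials and object names are invented for illustration:

```python
import snowflake.connector

# Placeholder credentials and object names.
con = snowflake.connector.connect(
    user="ajilius_dev", password="...", account="our_account"
)
cur = con.cursor()

# Pure DDL -- metadata operations, no running warehouse (and no compute cost) needed.
cur.execute("CREATE DATABASE IF NOT EXISTS demo_db")
cur.execute("CREATE SCHEMA IF NOT EXISTS demo_db.staging")
cur.execute("""
    CREATE TABLE IF NOT EXISTS demo_db.staging.sales (
        sale_id   INTEGER,
        sale_date DATE,
        amount    NUMBER(10,2)
    )
""")

# Compute is only needed once we touch data.
cur.execute("USE WAREHOUSE load_wh")  # billing starts here
cur.execute("INSERT INTO demo_db.staging.sales VALUES (1, '2016-03-01', 99.95)")
```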

Support

I’ve never had better support from a data warehouse vendor, especially given that we were unknown to them and not spending huge amounts of money. From sales, to pre-sales, to support, and even right into engineering, we’ve had amazing engagement at every level of the company.

Snowflake people respond to emails, pick up the phone, and deal with support requests quickly. We’ve never waited more than a couple of hours for a response to an issue, and that response has always been highly relevant, never of the “have you unplugged and plugged in the keyboard” variety.

The Snowflake team is knowledgeable, enthusiastic, and committed to success.

Features

One intriguing feature of Snowflake is its avoidance of distribution keys, partitions and the like in the database. This removes one of the big design challenges in both Redshift and Azure, where the wrong distribution method can really damage your performance. One day I’ll have a beer with Snowflake’s designers and figure out how this works; for now, all I know is that it works well.
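To illustrate the decision Snowflake removes, compare the DDL you would write on each platform. Table and column names are invented; the point is the distribution clauses you no longer have to get right:

```python
# Illustration only: the distribution decision Snowflake removes.
# Table and column names are invented.

# On Redshift, a distribution style and key must be chosen up front,
# and a poor choice hurts join and load performance:
redshift_ddl = """
CREATE TABLE sales (
    sale_id INTEGER,
    cust_id INTEGER,
    amount  DECIMAL(10,2)
)
DISTSTYLE KEY
DISTKEY (cust_id)
SORTKEY (sale_id)
"""

# The equivalent Snowflake DDL carries no distribution or partition
# clauses at all -- the platform manages data placement itself:
snowflake_ddl = """
CREATE TABLE sales (
    sale_id INTEGER,
    cust_id INTEGER,
    amount  NUMBER(10,2)
)
"""
```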

Better described as “quirky” is Snowflake’s terminology of “database” and “warehouse”. A database is a collection of schemas and data; a warehouse is the compute configuration that works on databases. To me, a name like “server” would have been clearer. A powerful feature is the ability for multiple warehouses, each with different configuration settings, to act on the same database. For example, an ELT warehouse might run briefly at very high scale to compress the load window, while a Browse warehouse might run for a long time at low scale, refreshing data used by BI tools like Tableau and Qlik Sense.
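A hedged sketch of that pattern, with invented warehouse names and placeholder credentials:

```python
import snowflake.connector

# Placeholder credentials; warehouse names are invented for illustration.
con = snowflake.connector.connect(
    user="ajilius_dev", password="...", account="our_account"
)
cur = con.cursor()

# A big warehouse that suspends itself quickly, for short, heavy load jobs.
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS elt_wh
        WAREHOUSE_SIZE = 'X-LARGE' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE
""")

# A small warehouse that stays up longer, for BI tools browsing the same database.
cur.execute("""
    CREATE WAREHOUSE IF NOT EXISTS browse_wh
        WAREHOUSE_SIZE = 'X-SMALL' AUTO_SUSPEND = 600 AUTO_RESUME = TRUE
""")
```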

Another feature we love is the Snowflake administrative console, where we can not only administer databases, warehouses and users, but also review performance history and execute ad-hoc queries. The user interface is a work of art; it’s the first cloud data warehouse where I haven’t felt the need to find a separate administration tool.

What’s Missing?

Not much.

All the basic data types are there, all the basic SQL statements are there, you get JDBC, ODBC and Python interfaces, and the documentation is a work of art. There could be a few more examples in the documentation for some of the more obscure features of the product, but it is being updated frequently.

Regarding data types, I’ve always been puzzled why data warehouse vendors avoid geospatial data. After all, maps are a major feature of the current generation of visualisation tools, yet geospatial support is lacking from most cloud data warehouse platforms. I’d like to think Snowflake will get around to this feature soon.

If I were being picky, I’d also call out the absence of a TIME data type. We work around it by using date/time functions to extract the time portion of timestamp fields into text fields, but a native TIME type would be helpful.
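A minimal sketch of that workaround, assuming a hypothetical events table with a timestamp column (TO_CHAR and the HH24:MI:SS format are standard Snowflake; everything else is invented):

```python
import snowflake.connector

# Placeholder credentials; table and column names are invented.
con = snowflake.connector.connect(
    user="ajilius_dev", password="...", account="our_account"
)
cur = con.cursor()

# No native TIME type, so carry the time-of-day portion of a timestamp as text.
cur.execute("""
    SELECT event_id,
           TO_CHAR(event_ts, 'HH24:MI:SS') AS event_time
    FROM demo_db.staging.events
""")
for event_id, event_time in cur:
    print(event_id, event_time)
```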

The only real pain we experience is that Snowflake is currently restricted to Amazon’s US data centres. That has no impact on warehouse performance, but our connection times and data transfer rates are a little slower than we’d like. We can co-locate Ajilius instances in the same AWS data centre for fast Snowflake connections, but if your data is on-premise in Australia, you’re going to incur a penalty moving terabytes into Snowflake. I’m assured that data centres in other parts of the world are on their way.

Recommendation

Try it. You’ll like it.

Ajilius makes it easy to build Snowflake data warehouses from your on-premise and cloud data. Let us know if you’d like a deeper discussion of Snowflake and Ajilius.

One year of Ajilius

Ajilius is now one year old.

Just over 12 months ago, we announced a new data warehouse automation platform, designed for a modern data warehouse workload.

We delivered all our objectives: complete Kimball support, on-premise and cloud deployment, three-click migration between databases, and full cross-platform portability.

We published a four-release roadmap for V1, and we’ve met its quarterly delivery schedule.

We’re not fully profitable, but we now have enough customers that we’re sustainable and growing.

This year we’re embarking on an ambitious roadmap for V2, with a focus on Data Quality, Data Profiling and Data Discovery. Again, we have defined a quarterly release schedule, and V2.1 will be delivered in March (we’re in beta already).

We’re also stepping up our marketing this year; our first conference appearance will be pgDayAsia, in Singapore, from March 17-19. We’ll be speaking on Data Warehousing and PostgreSQL, as well as using the occasion to showcase Ajilius 2.1.

Here’s to a great 2016 in data warehousing!