New feature: Separate extract / load

We’ve now separated Extract and Load processes in Ajilius.

Previous versions coupled the extract and load processes for a table. To process the LOAD_PRODUCT table, for example, we would connect to the source database, extract the required rows, and load those rows to the warehouse, all within the same process.

We faced a situation recently where the largest extract for a warehouse came from a system that finished its end-of-day processing four hours ahead of the window allocated for the data warehouse load. The extract from this system was the longest-running task in the ELT process.

By separating the extract and load processes, we are now able to schedule the extract from this system (and others) to complete as early as possible, with the load to the warehouse occurring at a later time.
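For example, with extract and load decoupled, the two steps can be driven by your scheduler hours apart. Here is a minimal sketch, assuming a hypothetical file-based staging area; the table name, paths and timings are illustrative only, not Ajilius internals:

    # A minimal sketch of decoupled extract and load, assuming a hypothetical
    # staging area on local disk; names and times are illustrative only.
    import csv
    from pathlib import Path

    STAGING = Path("staging/load_product.csv")

    def extract(source_rows):
        """Run as soon as the source system finishes end-of-day processing."""
        STAGING.parent.mkdir(parents=True, exist_ok=True)
        with STAGING.open("w", newline="") as f:
            csv.writer(f).writerows(source_rows)

    def load(warehouse):
        """Run later, inside the warehouse load window, from the staged file."""
        with STAGING.open(newline="") as f:
            for row in csv.reader(f):
                warehouse.append(row)   # stand-in for a bulk load into LOAD_PRODUCT

    # The two steps can now be scheduled hours apart, e.g. extract at 20:00
    # and load at 02:00, by whatever scheduler drives the ELT jobs.
    if __name__ == "__main__":
        extract([["1", "Widget"], ["2", "Gadget"]])
        wh = []
        load(wh)
        print(wh)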

This makes the data warehouse load window significantly smaller, giving the operations team more headroom if errors occur during end-of-day processing in upstream systems.

You can, of course, continue to run extract and load processes at the same time if you prefer.

Ajilius. Making ELT fast and flexible.

New feature: Warehouse Script All

We’ve just released a new feature for Ajilius: one-click generation of all scripts for a warehouse.

In previous releases you ran each script manually. This was a little tedious if you wanted to run an entire load process from within the Ajilius application.

Now, you can select the Script All option from the Warehouse menu, and a full set of ELT scripts will be generated from the session warehouse metadata.

You will be shown a script screen with the full warehouse DDL in the left pane, and a script to run all ELT processes in the right pane. Use the Create and Run buttons to perform an end-to-end build of your data warehouse.
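Conceptually, the end-to-end build amounts to running the generated DDL and then the generated ELT script against the warehouse. A minimal sketch of that sequence, using sqlite3 as a stand-in target and illustrative statements rather than real Ajilius output:

    # A minimal sketch of an end-to-end build, using sqlite3 as a stand-in
    # warehouse; the DDL and ELT statements are illustrative only.
    import sqlite3

    warehouse_ddl = """
    CREATE TABLE load_product (product_id INTEGER, product_name TEXT);
    CREATE TABLE dim_product  (product_key INTEGER PRIMARY KEY,
                               product_id INTEGER, product_name TEXT);
    """

    warehouse_elt = """
    INSERT INTO load_product VALUES (1, 'Widget'), (2, 'Gadget');
    INSERT INTO dim_product (product_id, product_name)
        SELECT product_id, product_name FROM load_product;
    """

    conn = sqlite3.connect(":memory:")
    conn.executescript(warehouse_ddl)   # "Create": build all warehouse objects
    conn.executescript(warehouse_elt)   # "Run": execute all ELT processes
    print(conn.execute("SELECT * FROM dim_product").fetchall())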

Ajilius. Eliminating repetition in Data Warehouse Automation.

New competitor: BI Builders

A new week, and a new competitor. Norwegian company BI Builders has popped into view with a very pretty-looking dimensional warehouse solution for SQL Server users.

It is a desktop solution, generating SSIS, which puts it in the same category as Dimodelo, TimeXtender and (perhaps) WhereScape. LeapfrogBI is a little different, being a web-based solution.

This is becoming a very competitive section of the market!

New feature: load streams

Before today, loads to an Ajilius data warehouse were single streamed. That is, one process extracted and loaded one table. You could load hundreds of thousands of rows per second, you could load different tables in parallel, but if you had one very large table there was nothing you could do to make it faster.

We’ve just added a feature called “Load Streams”, which enables you to load large tables in parallel. This is specifically designed for MPP target databases, but may also speed up SMP operations.

The Load Table screen now has an option to select the number of streams that will be loaded in parallel. This is a number between 1 and a maximum set for the target platform. On MPP platforms the maximum is the number of nodes, while SMP platforms typically benefit from a value between 4 and 8.

On extract, data will be split into the number of streams you defined. Depending on the platform, the extract may be split in a round-robin fashion or, on selected MPP platforms, by the column(s) you have defined as a distribution key.
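As an illustration of the round-robin case, here is a minimal sketch of splitting an extract across streams and loading them in parallel; the helper names and row shapes are ours, not Ajilius internals:

    # A minimal sketch of round-robin stream splitting and parallel load,
    # assuming a hypothetical per-stream bulk-load function.
    from concurrent.futures import ThreadPoolExecutor

    def split_round_robin(rows, streams):
        """Distribute extracted rows across N streams in round-robin order."""
        buckets = [[] for _ in range(streams)]
        for i, row in enumerate(rows):
            buckets[i % streams].append(row)
        return buckets

    def load_stream(stream_id, rows):
        """Stand-in for one parallel bulk load into the target table."""
        return f"stream {stream_id}: {len(rows)} rows"

    rows = [{"id": n} for n in range(1_000)]
    streams = 4                                  # 1 .. platform maximum
    with ThreadPoolExecutor(max_workers=streams) as pool:
        results = pool.map(load_stream, range(streams),
                           split_round_robin(rows, streams))
    print(list(results))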

Because there is overhead in splitting the data, and in handling concurrent writes in the DBMS, the improvement is not linear. For SMP platforms, large tables on SSD storage can be orders of magnitude faster than single-threaded loads, while MPP platforms can approach a near-linear speed-up in line with the number of streams.

Ajilius. We work harder to make your loads faster.

Analytics tools: Too clever?

I’ve recently been trialling a mix of analytics tools, including PowerBI, Qlik, Microstrategy, Tableau and others. One “feature” that has been annoying me is their attempt to automatically form relationships between tables, usually by matching column names. Bad idea.

I’ve had to spend far too long correcting models that have automatically matched the Product, Customer and Location dimensions, simply because they all had ID as the name of their surrogate key.

When I have a table named Customer, a suitable name for a surrogate key is ID. Not CustomerID. It is the Identity of the Customer, not the CustomerIdentity of the Customer.

When I reference that table from another, I might use role names such as PurchasingCustomerID or ReferringCustomerID. Role, table, column. Only in the most trivial implementations might I refer to it as simply CustomerID.

Further, there is no requirement that columns be uniquely named in a schema, nor that they follow any specific pattern. Especially when dealing with data models from legacy business systems, the practice of automatic matching is counter-productive.

Automatic matching is particularly wrong in dimensional modelling, where the use of surrogate keys is standard practice. One of the properties of a surrogate key is that it is a meaningless identifier. From a business perspective, my view of the CustomerID might be “BIGSTORE001”, while the surrogate value “3498” carries no meaning at all.

I suggest that if you can’t determine a relationship from the existence of a foreign key, then you should probably stop trying. Build a flexible relationship editor and skip the automatic matching, as I’ll otherwise spend more time finding and fixing tool errors than I would building the relationships properly in the first place.

It makes me wonder how many complex analytical models have errors introduced by this practice.

Automatic relationship management may be a feature that demonstrates well on carefully selected sample data, but it fails in the real world. I’d like to see it disappear.

New competitor: Optimal BI

It is great to see new entrants, bringing new approaches, to the data warehouse automation market. The days when one or two players had the market to themselves are drawing to a close …

This time it is Optimal BI, from New Zealand, with a product named Optimal Data Engine (ODE). It is a data vault product, evolving from consulting assignments, but not much more detail is available at this time.

Follow their blog for new developments. http://optimalbi.com/blog/2015/07/03/ode-the-start-of-a-journey/

The old dinosaurs had better get started on evolution!

New feature: Load Mask

There are times when you need to load multiple instances of data. A common example is found in retail, where you might receive one end-of-day file of sales transactions from each store, all of which need to be loaded into a single warehouse table.

We recently faced a more interesting case, where a SaaS company hosting identical application databases for many customers wanted to aggregate the same table from all of those databases.

We’ve now implemented a feature in Ajilius that makes iteration super-easy.

A combination of Database and Table Masks enables you to set wildcards over which a table load will iterate at run time. You do your metadata design using one instance of the load table as the source, then simply define the mask patterns to be used at run time.
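Conceptually, the masks behave like shell-style wildcards that the load iterates over at run time. A minimal sketch, with a hypothetical source catalog and mask values (none of this is Ajilius code):

    # A minimal sketch of mask-driven iteration; the database and table
    # names below are illustrative, and the masks use shell-style wildcards.
    from fnmatch import fnmatch

    catalog = {                                   # hypothetical source catalog
        "store_001": ["sales_20150701", "sales_20150702"],
        "store_002": ["sales_20150701"],
        "admin_db":  ["audit_log"],
    }

    database_mask = "store_*"                     # Database Mask
    table_mask = "sales_*"                        # Table Mask

    for db, tables in catalog.items():
        if not fnmatch(db, database_mask):
            continue
        for table in tables:
            if fnmatch(table, table_mask):
                print(f"load {db}.{table} -> warehouse LOAD_SALES")  # one iteration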

Multiple Excel files? No problem. Multiple text files? Easy. Multiple tables? Simple. Same table from many databases? No sweat.

Ajilius. Helpful data warehouse automation.

Suspending Hadoop DW

We’re temporarily suspending work on Hadoop as a target platform for dimensional data warehouses.

Six to twelve months ago the future of the platform looked bright, with SQL-on-Hadoop vendors bringing out new versions at a rapid pace.

Lately, that pace has slowed to a crawl. We still don’t have widespread implementation of an UPDATE statement, and that makes it difficult to process slowly changing dimensions and accumulating snapshot fact tables.

We’ve been working around this gap by reprocessing the data outside Hadoop. That meant reading and rewriting entire tables, and as the size of our test warehouses grew, it became clear that this approach offered no advantage over using an RDBMS.

When more complete SQL-on-Hadoop implementations become available we will revisit this decision. Until then, Hadoop will continue to be a supported data source for Ajilius.

Price changes

Working with our first customers and partners has exposed some issues with our initial pricing strategy.

  1. Member Edition (our free version) customers required more support hours than customers of our paid editions.
  2. The difference between the Member and Subscriber Editions was ambiguous when considering licensed, supported versions of open-source databases.
  3. Subscriber Edition had insufficient margin to make it attractive to resellers.
  4. Both Subscriber and Sponsor Editions were perceived as “too cheap” by their target customers.

Accordingly, we’ve revised the pricing for Ajilius to take these issues into account.

From May 1, we will remove the Member Edition from our price list. Existing Member Edition customers will retain their right to support, and access to all new versions, free of charge.

A new Evaluation Edition will be introduced. This will be a full-featured version of Ajilius, but with a 30-day time limit.

The annual licence for Subscriber Edition will increase to USD 5,000, and for Sponsor Edition to USD 50,000.

Remember, Ajilius is site licensed. That means you have the right to use Ajilius on any number of servers, by any number of developers, creating any number of data warehouses, on any number of data warehouse platforms.

Even with our price increase, you’ll still save tens of thousands of dollars over competing platforms, and still get to 100% ROI in a matter of days.

Ajilius. Committed to business value.

Handling SCD0 and SCD6

Most ETL and data warehouse automation products define a slowly changing dimension at the table level. DIM_PRODUCT, for example, may be defined as a type-2 slowly changing dimension, with changes to the PRODUCT_NAME and PRODUCT_CATEGORY columns triggering new dimension rows.

When we were designing Ajilius, we realised that this traditional approach is very limiting, particularly when handling dimensions of type 0, 4 and 6. To recap, a type-0 dimension contains values that may never change, the common example being an original-value column such as PRODUCT_ORIGINAL_PRICE. A type-6 dimension combines elements of type-2 and type-3, in that it may have some columns whose previous values are recorded, and some columns that trigger new dimension rows.

The “may have some columns” phrase in that sentence was our “aha!” moment: slowly changing behaviour should actually be recorded at the dimension-attribute level, rather than at the table level.

Ajilius enables you to set a change-type value for each non-key column in the dimension. By default we set it to SCD1, but you can change it to any other value through the dimension editor.

  • SCD0 (value never changes)
  • SCD1 (value changes in place without history being recorded)
  • SCD2 (value creates a new dimension row when it changes)
  • SCD3 (value has current and previous versions recorded in the same dimension row)
  • SCD4 (value has historic versions recorded in a history outrigger)
  • SCD6 (a combination of 0 + 1 + 2 + 3 attributes in the same row)

To the best of our knowledge, Ajilius is the only data warehouse automation product that correctly supports the generation of DDL and DML to create and process all of these types of slowly changing dimension.
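To make the attribute-level idea concrete, here is a minimal sketch of how per-column change types might drive processing; the column names, change-type map and logic below are illustrative only, not Ajilius internals:

    # A minimal sketch of attribute-level change handling; column names and
    # the change-type map are illustrative, not Ajilius metadata.
    scd_types = {
        "product_original_price": "SCD0",   # never changes
        "product_colour":         "SCD1",   # overwrite in place, no history
        "product_category":       "SCD2",   # new dimension row on change
        "product_name":           "SCD3",   # keep previous value in same row
    }

    def apply_change(current, incoming):
        """Compare one dimension row with incoming values, column by column."""
        updated, new_row_needed = dict(current), False
        for col, scd in scd_types.items():
            if incoming[col] == current[col]:
                continue
            if scd == "SCD0":
                continue                               # ignore the change
            if scd == "SCD1":
                updated[col] = incoming[col]           # overwrite, no history
            elif scd == "SCD2":
                new_row_needed = True                  # expire row, insert new one
            elif scd == "SCD3":
                updated[f"{col}_previous"] = current[col]
                updated[col] = incoming[col]
        return updated, new_row_needed

    current  = {"product_original_price": 10, "product_colour": "red",
                "product_category": "Tools", "product_name": "Widget"}
    incoming = {"product_original_price": 12, "product_colour": "blue",
                "product_category": "Tools", "product_name": "Widget Pro"}
    print(apply_change(current, incoming))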