New feature: Load Mask

There are times when you need to load multiple instances of data. A common example is found in retail, where you might receive one end-of-day file of sales transactions from each store, all of which need to be loaded into one warehouse table.

We recently faced a more interesting case, where a SaaS company hosting identical application databases for many customers wanted to aggregate the same table from all of those databases.

We’ve now implemented a feature in Ajilius that makes this kind of iteration super-easy.

A combination of Database and Table Masks enables you to set wildcards over which a table load will iterate at run time. You do your metadata design using one instance of the load table as the source, then simply define the mask patterns to be applied when the load runs.
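
To picture how masked iteration behaves at run time, here is a minimal sketch in Python. It is illustrative only: the catalogue, the mask values and the load_table helper are hypothetical stand-ins, not Ajilius internals, which generate and run this iteration for you.

```python
from fnmatch import fnmatch

# Hypothetical catalogue of source databases and their tables; in practice this
# information would come from the source server's system catalog.
CATALOG = {
    "store_sydney":    ["sales_20160430", "returns_20160430"],
    "store_melbourne": ["sales_20160430"],
    "hr_internal":     ["payroll"],
}

DATABASE_MASK = "store_*"   # Database Mask: every per-store database
TABLE_MASK = "sales_*"      # Table Mask: every end-of-day sales table

def load_table(database, table):
    # Stand-in for the load you designed against one instance of the source table.
    print(f"loading {database}.{table} into the warehouse load table")

for database, tables in CATALOG.items():
    if fnmatch(database, DATABASE_MASK):
        for table in tables:
            if fnmatch(table, TABLE_MASK):
                load_table(database, table)  # same metadata, different instance
```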

Multiple Excel files? No problem. Multiple text files? Easy. Multiple tables? Simple. Same table from many databases? No sweat.

Ajilius. Helpful data warehouse automation.

Suspending Hadoop DW

We’re temporarily suspending work on Hadoop as a target platform for dimensional data warehouses.

Six to twelve months ago the future of the platform looked bright, with SQL-on-Hadoop vendors bringing out new versions at a rapid pace.

Lately, that pace has slowed to a crawl. We still don’t have a widespread implementation of an UPDATE statement, and that makes it difficult to process slowly changing dimensions and accumulating snapshot fact tables.

We’ve been working around this gap by reprocessing the data outside Hadoop. That means reading and rewriting entire tables, and as the size of our test warehouses grew, it became clear that this was no better a solution than simply using an RDBMS.
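
To make the cost of that workaround concrete, here is a minimal sketch of the rewrite-the-whole-table pattern, using a CSV file as a stand-in for a warehouse table; the file layout and key column are hypothetical.

```python
import csv
import os

def apply_updates(table_path, updated_rows, key="product_code"):
    """Apply 'updates' without an UPDATE statement: read every row and rewrite
    the entire table, substituting changed rows where the business key matches."""
    tmp_path = table_path + ".rewrite"
    with open(table_path, newline="") as src, open(tmp_path, "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            # Even a handful of changed rows forces a full read and a full rewrite.
            writer.writerow(updated_rows.get(row[key], row))
    os.replace(tmp_path, table_path)
```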

When more complete SQL-on-Hadoop implementations become available we will revisit this decision. Until then, Hadoop will continue to be a supported data source for Ajilius.

Price changes

Working with our first customers and partners has exposed some issues with our initial pricing strategy.

  1. Member Edition (our free version) customers required more support hours than customers of our paid editions.
  2. The difference between the Member and Subscriber Editions was ambiguous when considering licensed, supported versions of open-source databases.
  3. Subscriber Edition had insufficient margin to make it attractive to resellers.
  4. Both Subscriber and Sponsor Editions were perceived as “too cheap” by their target customers.

Accordingly, we’ve revised the pricing for Ajilius to take these issues into account.

From May 1, we will remove the Member Edition from our price list. Existing Member Edition customers will retain their right to support, and access to all new versions, free of charge.

A new Evaluation Edition will be introduced. This will be a full-featured version of Ajilius, but with a 30-day time limit.

The annual licence for Subscriber Edition will increase to USD 5,000, and for Sponsor Edition to USD 50,000.

Remember, Ajilius is site licensed. That means you have the right to use Ajilius on any number of servers, by any number of developers, creating any number of data warehouses, on any number of data warehouse platforms.

Even with our price increase, you’ll still save tens of thousands of dollars over competing platforms, and still get to 100% ROI in a matter of days.

Ajilius. Committed to business value.

Handling SCD0 and SCD6

Most ETL and data warehouse automation products define a slowly changing dimension at the table level. DIM_PRODUCT, for example, may be defined as a type-2 slowly changing dimension, with changes to the PRODUCT_NAME and PRODUCT_CATEGORY columns triggering new dimension rows.

When we were designing Ajilius, we realised that this traditional approach is very limiting, particularly when handling dimensions of types 0, 4 and 6. To refresh: a type-0 dimension has attributes whose values never change, the common example being an original-value column such as PRODUCT_ORIGINAL_PRICE. A type-6 dimension combines elements of types 1, 2 and 3, in that it may have some columns whose previous values are recorded, and some columns which trigger new dimension rows.

The “may have some columns” expression in that last sentence was our “Ah-Ha!” moment. Slowly changing behaviour should be recorded at the dimension attribute level, rather than at the table level.

Ajilius enables you to set a change-type value for each non-key column in the dimension. By default we set it to SCD1, but you can change it to any of the following values through the dimension editor:

  • SCD0 (value never changes)
  • SCD1 (value changes in place without history being recorded)
  • SCD2 (value creates a new dimension row when it changes)
  • SCD3 (value has current- and previous-version recorded in the same dimension row)
  • SCD4 (value has historic versions recorded in a history outrigger)
  • SCD6 (a combination of 0 + 1 + 2 + 3 attributes in the same row)

To the best of our knowledge, Ajilius is the only data warehouse automation product that correctly supports the generation of DDL and DML to create and process all of these types of slowly changing dimension.
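
As a rough illustration of attribute-level processing, here is a hedged sketch in Python. The column names and the change-type map are hypothetical, not Ajilius metadata; a type-6 dimension simply falls out of mixing these settings on one table.

```python
# Hypothetical change-type settings for the non-key columns of one dimension.
CHANGE_TYPE = {
    "product_original_price": "SCD0",  # never changes
    "product_colour":         "SCD1",  # overwrite in place, no history
    "product_category":       "SCD2",  # change creates a new dimension row
    "product_name":           "SCD3",  # previous value kept in the same row
}

def apply_change(current, incoming):
    """Compare one incoming source row to the current dimension row and return
    (updated_row, new_row_required)."""
    updated = dict(current)
    new_row_required = False
    for column, scd in CHANGE_TYPE.items():
        if incoming[column] == current[column]:
            continue
        if scd == "SCD1":
            updated[column] = incoming[column]
        elif scd == "SCD2":
            new_row_required = True        # caller expires this row and inserts a new one
        elif scd == "SCD3":
            updated[column + "_previous"] = current[column]
            updated[column] = incoming[column]
        # SCD0: the change is ignored; SCD4 would write to a history outrigger instead.
    return updated, new_row_required
```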

Measuring load speeds

We recently had a situation where our load speeds were reported as being much slower than a competitor. This surprised me, because I knew that our loader could saturate the network from the source server, and I wondered how our competitor could be faster.

Luckily, the evaluator liked Ajilius, and did a little digging on our behalf. It turned out that the culprit was not our performance, but the competitor’s measurement technique.

When we load data into the warehouse from a source database, there are basically four steps that we need to perform:

  • Query
  • Extract
  • Load
  • Commit

The Query step is where we execute a query on the remote data source, such as “select c1,c2,c3 from t1 where c1 > 9467843”. The Extract step is where we transfer the results of that query to the loader. The Load step moves those rows into the warehouse. Finally, the Commit step commits the load transaction(s). Depending on the source and warehouse, Ajilius may overlap one or more of those steps.

When we measure load performance we put a timer call before the Query, and again after the Commit. The elapsed time is the total time taken to extract and load the required data from the source system to the warehouse. This represents real-world performance, the type you need to measure if batch windows are important to you.
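
Here is a minimal sketch of the two measurements, with sleeps standing in for the real Query, Extract, Load and Commit work; the step functions and timings are obviously illustrative.

```python
import time

def query_source():  time.sleep(0.1)   # Query:   run the extract query on the source
def extract_rows():  time.sleep(0.3)   # Extract: pull the result set to the loader
def load_rows():     time.sleep(0.2)   # Load:    push the rows into the warehouse
def commit_load():   time.sleep(0.1)   # Commit:  commit the load transaction(s)

job_start = time.monotonic()            # timer call before the Query step
query_source()
extract_rows()
load_start = time.monotonic()
load_rows()                             # timing only this step flatters the result
load_only = time.monotonic() - load_start
commit_load()
end_to_end = time.monotonic() - job_start   # timer call after the Commit step

print(f"load step only: {load_only:.2f}s, end to end: {end_to_end:.2f}s")
```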

Our competitor had a different view of the world. Their measurement of performance was to take the time immediately before the Load step, and immediately after it. They claimed that this was a measurement of “load” performance. I guess they’re technically correct, but knowing just that one part of the job doesn’t help you to assess performance against real-world requirements.

When the customer repeated the tests, this time measuring the elapsed time for the whole job, the results were virtually neck and neck. I’m not surprised because, as I said earlier, I knew we were capable of saturating the relatively slow network in the customer’s development lab.

Ajilius performance tests? Always welcome.

WhereScape faking Data Vault?

Update (2016-07-09): It was true then, but it isn’t true now. WhereScape is now working with Dan Linstedt, the creator of the Data Vault methodology, to deliver automated Data Vault 2.0 solutions.

https://www.wherescape.com/blog/blog-posts/2016/july/wherescape-partners-with-data-vault-inventor-dan-linstedt/

 

I’m a little confused by recent claims about Data Vault and WhereScape Red.

Every WhereScape demonstration I have seen has been Load, Stage, Dimension, Fact, Cube. The tutorial is Load, Stage, Dimension, Fact, Cube. The training is Load, Stage, Dimension, Fact, Cube. The UI boils down to Load, Stage, Dimension, Fact, Cube.

Does that look like Data Vault to you?

Nowhere do I see Hub, Link and Satellite. Just Load, Stage, Dimension, Fact, Cube.

If it looks like Dimensional, presents like Dimensional, trains like Dimensional, and is documented like Dimensional, then there is a pretty good chance it IS Dimensional, and any claim to the contrary is just faking it.

If you want Data Vault, you probably need to be looking at BIReady or Quipu.

We’re proud of the fact that Ajilius is a data warehouse automation company based on Dimensional Modelling. We believe in doing one thing well, not trying to be everything to everyone. If you want a Dimensional data warehouse, buy the product that is firmly committed to this technique.

Ajilius. Keeping it real in data warehouse automation.

DWA vs ETL

A common question is the difference (or similarity) between Data Warehouse Automation and traditional ETL tools.

I like to use an example from my iPad – the difference between the apps Mortgage Calc and Numbers.

Numbers is a spreadsheet app. You can edit rows and columns of data, and create formulae using that data.

Mortgage Calc is an app that calculates mortgage payments.

Now, I could write a mortgage calculator in Numbers. I could possibly make it look like the Mortgage Calc app rather than a spreadsheet. But which calculations do I use? Which tax rules apply? Are there stamp duties payable? In other words, I have to do a lot of research, a lot of programming, and a lot of testing to make sure I’ve got the basics right. And I’d also have to maintain that spreadsheet as the rules change.

With Mortgage Calc, I’ve paid a few dollars for an application that has saved me many hours of research and development, and which I’m trusting to give me accurate calculations. In this case, Mortgage Calc is better than Numbers, because it does one job, and does it well.

That is the difference between DWA and ETL. An ETL tool is a general purpose programming environment for moving and transforming data between systems. It provides components, in one form or another, which you put together to accomplish one or more tasks.

DWA, on the other hand, is built to do just one task, which is building the code associated with a data warehouse. Ajilius builds dimensional data warehouses. We build transactional, periodic snapshot and accumulating snapshot fact tables; Type 0, 1, 2 and 3 slowly changing dimensions; and move data from multiple data sources into a consolidated presentation layer.

You could do all of that with an ETL tool, but it would be like writing a complex mortgage calculator in a spreadsheet – time consuming, not well understood, and prone to error.

Ajilius generates fast, error-free code that can be easily migrated between data warehouse platforms at the press of a button.

That’s the advantage of tools like Ajilius. We deliver business value, faster.

Our join editor sucks

I’ve spent today reviewing and discussing our alpha-test feedback.

The best feedback related to the browser UI. The worst related to the join editor.

The join editor is used to define the joins between staging tables. We need to know how the business keys from the table being joined relate to the data already in the table.

In the current version, you choose the ‘Join Another Table’ option from the Stage Table menu, select the table you want to join, then choose the join type (i.e. inner, left outer) and the join columns. You repeat this for as many tables as you want to join.

Users gave solid feedback that this did not represent their use cases. It forces the user to make early decisions about the sequence in which tables will be added, doesn’t handle deletion of tables from the join well, and fragments the user’s mental model of the join structure.

We worked through a few alternatives this afternoon, decided on an approach, and now Minh (our developer in Vietnam) is due to have it completed by the end of the week. We’ll go back to the alpha users for a re-check on this feature, but then everything should be clear for the beta.

Be a DW hero with PostgreSQL

PostgreSQL and Ajilius can make you a DW Hero.

Lots of organisations don’t know about PostgreSQL, or are afraid that it might not perform. Here are eight simple steps to prove the value of PostgreSQL for data warehousing.

(1) Develop on SQL Server

Your IT management has probably asked you to deliver your data warehouse on SQL Server. There is nothing technically wrong with that decision, but it is going to cost a lot of money if you take it into production. Never mind; let’s humour them: go ahead and implement your development environment. We emphasise development, because using MSDN or SQL Server Developer Edition licences will have a negligible cost.

(2) Develop using Ajilius

Design and build your data warehouse using Ajilius. Use the power of data warehouse automation to generate a fully scripted, high performance data warehouse. Get all your user and technical documentation, and start testing user queries and reports. You’ll save hundreds of hours of development time using Ajilius, and eliminate the risks of bad ETL.

(3) Set up a PostgreSQL server

Download PostgreSQL and set up a separate development server, running on your choice of operating system. Create an empty database, and record the server name, database name, user-id and password.
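
If you want a quick confidence check before moving on, a few lines of Python will confirm the new server is reachable. This assumes the psycopg2 driver is installed; the connection details below are placeholders for the ones you recorded in this step.

```python
import psycopg2  # e.g. pip install psycopg2-binary

# Placeholder connection details recorded in step (3).
conn = psycopg2.connect(
    host="pg-dev-server",
    dbname="warehouse_dev",
    user="dw_admin",
    password="change-me",
)
with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])   # confirms the server is up and responding
conn.close()
```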

(4) Clone your Metadata Repository

Go to the Warehouse List screen, and select the Clone Warehouse option. An independent copy of the metadata repository will be created under your chosen name.

(5) Change your Warehouse target

Go to your Warehouse List screen, and select the Change Warehouse option. Now select PostgreSQL as your target type, and enter the server details that you recorded in step (3).

(6) Generate Scripts

Go to your Warehouse List screen, and select the Generate Scripts option. Check each of the Create, Update, Schedule and Migrate script options, and enter the directory where scripts are to be written.

(7) Deploy

Run the generated scripts against your PostgreSQL server, and you now have a 100% compatible version of your original data warehouse, running on PostgreSQL. Because you used Ajilius to build your data warehouse, you are guaranteed portability across data warehouse platforms. All your extracts and loads have been fully replicated, your data has been migrated, and you can repeat your test queries and reports.

(8) Profit!

Here is where you become a DW Hero.

Demonstrate to your project stakeholders their data warehouse running on PostgreSQL. Discuss the cost savings that will come from running on this platform. Show that you’ve already done the migration, in just minutes of work and a few hours of processing time. Imagine proving that you can save hundreds of thousands of dollars in production licensing, for no cost to your organisation.

That’s HEROIC!

We’re the only data warehouse automation company that fully supports PostgreSQL data warehouses as first-class citizens, and we’ve done so from the very first lines of code we wrote.

Read more about Ajilius, and the power of the PostgreSQL Data Warehouse.

Virtual loads and dimensions

Some data warehouse automation products do a good job on greenfield projects, but make it very difficult to integrate into an existing data warehouse architecture. Some don’t do it at all, while others require that you reverse engineer and rebuild the ETL/ELT processing before you can integrate new tables. We’re different.

Ajilius makes it easy to integrate with an existing data warehouse through Virtual Tables.

The purpose of a Virtual Table is to provide a mechanism to create Ajilius metadata over an existing table, without having to re-load or re-process its data. There are two types of Virtual Tables used in Ajilius – Virtual Loads and Virtual Dimensions.

Virtual Loads define data that has already been loaded into the warehouse by other processes. They are used where that table is to be processed in conjunction with data completely controlled by Ajilius, and may also be used in drip-feeding scenarios.

Virtual Dimensions enable you to integrate complete dimensions that have been created and maintained by external processes. If your existing warehouse has already built the perfect SCD2 dimension, there is no need to re-design and re-write that table just to fit it into our metadata structures.

Once defined, Virtual Tables work exactly the same as any other Ajilius load or dimension table.
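
Purely as an illustration of the idea, you can picture the metadata behind a Virtual Dimension something like the sketch below. The field names are hypothetical and are not Ajilius’s actual metadata structures, which you define through the editors rather than by hand.

```python
# Hypothetical picture of a Virtual Dimension definition: the existing table is
# described so it can be joined and documented, but no load or transform is
# generated for it.
virtual_dimension = {
    "warehouse_table": "dim_customer",        # existing, externally maintained table
    "table_type":      "virtual_dimension",   # metadata only, no DDL/DML generated
    "surrogate_key":   "customer_key",
    "business_key":    ["customer_code"],
    "scd2_columns":    ["customer_segment"],  # documented so facts can join correctly
}
```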

Virtual Tables are a powerful concept when you are adding Ajilius to an existing data warehouse environment.