Learn How To Create a Layered QlikView Architecture


Evidence of Qlik Layered into the Fossil Record.

Proper QlikView architecture is one of the pillars of a successful implementation. When we Qlik developers start out, we feel accomplished merely loading data into an app and displaying a few sexy charts on the screen. But obviously, as we scale up within our environment, this one-stop approach does not work anymore. There are many variations of a layered Qlik architecture, each with associated pros and cons. So I thought it would be a good exercise to lay out all the pieces so you can decide which layers you will want in your implementation. This article applies to both QlikView and Qlik Sense architecture.

A Note About Data Retention in the Levels

Generally, for apps that serve to stage data into a QVD (extraction and transformation), there is no need to retain that data in the QVW. This means that at the end of these scripts, you should drop the tables out of the app. This will make saving your apps very fast and will also speed up opening the app for the next reload.
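As a minimal sketch (table and path names here are illustrative), the tail end of a staging script might look like this:

```qlikview
// End of a hypothetical staging script: persist the table to a
// QVD, then drop it so no data is retained in the QVW itself.
STORE Orders INTO [..\QVD\E_Orders.qvd] (qvd);
DROP TABLE Orders;
```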

Usually I do collect metadata and display it on the main page as explained here, but generally there are no other visualizations in these apps. End users should never see them anyway.

Extraction Layer


Data Extraction should not be as painful as dental extraction.

The idea of an extraction layer is to have a layer of QVWs with the sole purpose of loading raw data from our databases and storing that data to QVDs. There are no transformations at all, not even renaming of fields.

This is typically the first layer I create because it has the wonderful advantage of making development more efficient. Since QVDs load much faster than data from a database, I can create the QVDs once and then rely on them for my primary development, turning the connections back to the database only before promoting the app to production. Typically, I use two sets of data source script lines for each load and simply comment and uncomment as needed.


QlikView code snippet for raw data extraction
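A minimal sketch of the comment/uncomment technique (connection, table, and file names are illustrative, and a CONNECT statement earlier in the script is assumed for the database branch):

```qlikview
Customers:
// --- Production source: uncomment before promoting ---
// SQL SELECT * FROM dbo.Customers;

// --- Development source: comment out before promoting ---
LOAD *
FROM [..\QVD\E_Customers.qvd] (qvd);
```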


The extraction layer is also the primary way in which we scale Qlik. The extraction layer provides us a duplicate of the database tables which we can then load as many times as we want into subsequent Qlik apps without having to go back to the original database. Adding incremental load logic to this makes the extraction layer even more efficient, limiting the database pull to only the most recent or changed records. This is your primary weapon in the scalability war. I believe all implementations should have this layer.
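A common incremental pattern, sketched with illustrative names (OrderID as the primary key, ModifiedDate as the change marker, and vLastReload as a variable holding the timestamp of the previous successful extract):

```qlikview
// Pull only new or changed rows from the database...
Orders:
SQL SELECT *
FROM dbo.Orders
WHERE ModifiedDate >= '$(vLastReload)';

// ...then append the untouched history from the existing QVD,
// skipping any keys already present in the fresh pull.
Concatenate (Orders)
LOAD *
FROM [..\QVD\E_Orders.qvd] (qvd)
WHERE NOT Exists(OrderID);

STORE Orders INTO [..\QVD\E_Orders.qvd] (qvd);
DROP TABLE Orders;
```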

QlikView Architecture extraction layer


Input: Original data sources

Output: Extracted QVDs

Pros: Speeds development, scalability mechanism, isolates DB connections

Cons: Very minor – up-front time to set up apps and tasks, multi-step data refresh process, lineage confusion


Transformation Layer


Make your code beautiful.

The transformation layer is a way to massage the data for straightforward analysis. This is where you will rename fields, apply business logic, do some dimensional joins, and possibly aggregate fact tables to differing levels of granularity. Generally, we will load data from our extraction QVDs, complete our transformation steps, and store the resulting tables into transformed QVDs.
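A minimal sketch of such a script (all field, table, and file names are illustrative): rename raw fields, apply a simple business rule, join a dimension, then store.

```qlikview
// Hypothetical transformation: rename raw fields, apply a
// business rule, and join a dimension before storing.
Sales:
LOAD
    ORD_ID   AS OrderID,
    CUST_ID  AS CustomerID,
    ORD_DATE AS OrderDate,
    ORD_AMT  AS [Sales Amount],
    If(ORD_AMT >= 10000, 'Large', 'Standard') AS [Order Size]
FROM [..\QVD\E_Orders.qvd] (qvd);

LEFT JOIN (Sales)
LOAD
    CUST_ID   AS CustomerID,
    CUST_NAME AS [Customer Name]
FROM [..\QVD\E_Customers.qvd] (qvd);

STORE Sales INTO [..\QVD\T_Sales.qvd] (qvd);
DROP TABLE Sales;
```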

In this Qlik Sense architecture layer, I usually add in any non-database data sources rather than including them in the extract process. It serves little purpose, for example, to extract the data from a spreadsheet into a QVD. There is no reason why we can’t start with these kinds of sources in the Transformation layer.

We should also talk a little about granularity. Mostly, when it comes to transactional data, we load all the rows into QlikView because we want to be able to drill from summary all the way down to the smallest detail. But many dashboards are better served with a summary level of data. Executive multi-subject dashboards might fall into this category. So if fact tables need versions aggregated to one level or another, that should also be done in the transformation layer.
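Building a monthly aggregate from a transformed QVD might look like this (a sketch with illustrative names):

```qlikview
// Hypothetical monthly roll-up of a transformed sales QVD for
// a summary-level executive dashboard.
MonthlySales:
LOAD
    MonthStart(OrderDate) AS [Month],
    CustomerID,
    Sum([Sales Amount])   AS [Sales Amount]
FROM [..\QVD\T_Sales.qvd] (qvd)
GROUP BY MonthStart(OrderDate), CustomerID;

STORE MonthlySales INTO [..\QVD\T_Sales_Monthly.qvd] (qvd);
DROP TABLE MonthlySales;
```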

The major advantage of the transformation layer is that we can use one set of business logic and conventions to drive the entire implementation. It is better to apply logic once than to have to repeat the development in eight different Qlik scripts. And because all renaming is done in one location, we can help eliminate confusion over naming conventions that would naturally differ between apps.

QlikView Architecture transformation layer

Input: Extracted QVDs

Output: Transformed QVDs, aggregated QVDs

Pros: Maintenance efficiency, scalability mechanism, universal business definitions

Cons: Up-front time to set up apps and tasks, multi-step data refresh process, lineage confusion


DataMart Layer

Traditionally, the UI has been the final layer. This is where we load the transformed QVDs and create our UI dashboards with the charts and graphs the client has requested. The concept of the data mart layer is simply separating the data model piece from the UI. We load the transformed QVDs and create any needed concatenated keys, apply AutoNumber if desired, and add our calendar and any other unassociated tables needed. This creates a completed data model for the app, as evidenced in the Table Viewer. But no user interface will be added. Please note that in this layer, we retain the data in the QVW.
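A data mart script might be sketched like this (all names are illustrative, and note there is no DROP at the end, since this layer retains its data):

```qlikview
// Hypothetical data mart model: composite key plus a minimal
// calendar built from the dates present in the data.
Sales:
LOAD
    AutoNumber(OrderID & '|' & LineNo) AS %SalesLineKey,
    *
FROM [..\QVD\T_Sales.qvd] (qvd);

Calendar:
LOAD DISTINCT
    OrderDate,
    Year(OrderDate)       AS Year,
    Month(OrderDate)      AS Month,
    MonthStart(OrderDate) AS MonthStart
RESIDENT Sales;
```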

OK, this is reaching for relevance.


Instead, we create yet another QVW that Binary Loads the Data Mart app and then create the user interface, complete with all the necessary charts and graphs and the professional appearance we are all used to experiencing in Qlik.
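Assuming a data mart app saved as DM_Sales.qvw (an illustrative name and path), the UI app's load script can be as short as:

```qlikview
// BINARY must be the very first statement in the script; it
// pulls the entire data model from the data mart QVW.
BINARY [..\DataMart\DM_Sales.qvw];
```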

There are a couple of case-specific advantages to this layer. First, it isolates development from designer activities: using this method, we do not need to give QVD read or database connection permissions to Qlik designers. Second, we can reuse common Qlik data models in multiple applications. One data model can support an executive view and an analyst view.

My view on the DataMart layer is that some organizations can make good use of this, while others may not see an advantage.

QlikView Architecture datamart layer

Input: Transformed QVDs

Output: QVW data model

Pros: Isolates data model development from dashboard design, data model reusability

Cons: Might outweigh pros? – Up-front time to set up apps and tasks, multi-step data refresh process, lineage confusion, potential for extraneous data


QlikView Architecture: Putting it All Together

Below are some figures to help you visualize the different layers within the entire architecture. If you want to remove a level, you will simply combine two QVW rows into one, eliminating any QVD rows in between if there are any.

Extraction

QlikView Architecture all layers highlighting extraction


Transformation

QlikView Architecture all layers highlighting transformation

Data Mart

QlikView Architecture all layers highlighting datamart


Final Thoughts

I encourage you to spend time designing your Qlik architecture appropriately, thinking about the future of the tool in your business. It is well worth the up-front effort, though it is a little more of a hassle to implement if you have already gone down a seemingly simpler road. You might also enjoy an article on Living Qlik highlighting techniques for looping through files in your load script. Do you have thoughts on this topic? I would love to hear them.

14 comments
  • marcel hug August 24, 2016, 9:06 pm

    Very good article. And that is exactly how we built our QV architecture, even with some additions such as metadata and security handling.

  • Denis Woods August 25, 2016, 8:20 pm


    Nice post, just a few questions I would like to ask…

    What naming conventions do you suggest?
    So for example, if I was selecting a “Product” table directly from an Oracle database, do you suggest calling this T_Product?
    What if I then need to transform this Product table again for another department (Sales) where they only want to see certain manufacturers? What would you suggest calling that QVD file then, T_Sales_QVD?
    And would you suggest creating two different documents, or having the script in one document?


  • Aaron Couron August 25, 2016, 10:52 pm

    Thanks for the compliment. My suggestion for naming would be to prefix with an E_ for “Extract” or a T_ for “Transform”. After that, I usually do subject, then data source. So I might have a file called T_Item_Sage500.qvd meaning it is the Transformed Item data from Sage500. If you create different versions of Item data, you can add to that taxonomy. Ex. T_Item_BrandX_Sage500.qvd.
    Happy Qliking,

  • Denis Woods August 26, 2016, 8:23 am

    Thanks for the reply! I will use this naming convention for future reference. With regards to a QVD file that can be used across all live QlikView documents, do you have any suggestions as to where to put this in a folder structure (i.e. “E_NTNAMES.qvd” in a folder called Global)? Or do you have any blogs on folder structures and source control?

  • Johan Aldrin August 26, 2016, 8:39 am

    Nice post.
    Just one thing about when it is a benefit to have data from Excel in QVD format. In an enterprise environment, the business users are often responsible for the different Excel documents needed. This means that they can edit the Excel files, and there is a risk that they make a change that is not supported by the QVD generator or document. If someone makes an unsupported change, it will crash the load. If you instead load each Excel file in a separate QVD generator and store it as a QVD file, the only thing that will crash is the load of that QVD generator; the QVD file from the last load will still be there and will still be usable. Just a thought :).

  • Aaron Couron August 27, 2016, 1:54 pm

    I like that thought, as you are moving the risk to a “back-of-store” process rather than closer to the UI.
    I have not created an article that organizes folders, but Steve Dark of QuickIntelligence wrote a good article about this topic here: http://www.quickintelligence.co.uk/qlikview-folder-structures/

  • Sam pra September 10, 2016, 4:05 am

    Good article. If you have a huge fact table of millions or billions of records and you are doing a binary load out of the original QVW file, wouldn't you effectively be duplicating that massive amount of data for each application downstream? So 10 dashboards = 10 copies of the facts and dimensions. Is this practical?

  • Aaron Couron October 12, 2016, 11:29 am

    Sam pra,
    Thanks for the question. In the case of 10 downstream dashboards that each use a binary load from the same billion-row app, you would definitely be duplicating the data to each dashboard, resulting in 11 copies, each with a billion rows.
    But keep in mind that the data only resides on hard disk, which is not “expensive”. The real problem is when these apps get opened by users on the server, as each will require large amounts of RAM. Depending on the number of users and the size of the QlikView server(s), this COULD be practical, but my guess is that there would be a better way.
    My suggestion in a case like this would be to create smaller, purpose-based data models (limited to specific date ranges, or aggregated to weekly or monthly) that then service your UI apps through binary loads. This gives the users specific data models in the UI that are smaller, will perform better, and are built with specific use cases in mind.
    Thanks again for the question.


  • Emily August 30, 2017, 11:25 pm

    I am very new to Qlik, and found this post very interesting. Would the three layers be recommended if using Qlik over a star schema data warehouse?

  • Aaron Couron August 31, 2017, 1:22 pm


    My answer to your question would still be yes, even though a data warehouse has done much of the heavy lifting. This is because the three-tier system will still improve development times and provide scalability. It will also improve your reload times, because reloading an app from a data warehouse will take longer than loading the same data from a QVD.

  • Ram September 19, 2017, 10:04 am

    Hi, this is an awesome article, giving the right kind of input to the budding world of QlikView users, developers and admins.

    Please write more articles like these and keep spreading smiles.
