Guide: Using the Table API and Python to import data from a Tulip Table into Power BI

Hello all,

We have been asked about ways to automatically import all data from a Tulip Table into Power BI.
This guide will show you how to accomplish this for the records of a table, leveraging the Tulip Table API and Python. The article assumes basic knowledge of HTTP requests and Python coding.

There are 3 main steps:

  1. Within Tulip: Setting up the Tulip API Bot and grabbing the unique Table ID
  2. Within a Python IDE of your choice: Adapting and testing the code
  3. Within Power BI: Setting up the data connection

Let’s get started!

Setting up the Tulip API Bot and grabbing the unique Table ID

Follow this guide to set up a Table API Bot, up to the point where you obtain an API Key and a Secret: https://support.tulip.co/en/articles/3983173-how-to-use-the-table-api

The bot only needs the “tables:read” scope.

You will need the API Key and the Secret later, so make sure to copy them somewhere you will find them again. (Be aware that anyone with those credentials can access your tables from outside of Tulip, so choose a secure place and/or delete them again later.)

Then open the table from which you want to import data into Power BI. The “random” part of that page’s URL is the unique ID of the table. Copy it as well for later use.


That’s it for steps within Tulip. Next up: Python.

Writing and testing the Python Code

Pandas

To be able to use the data we pull from Tulip within Power BI later, we need to convert it into a pandas DataFrame. So make sure you have pandas installed on your machine.

What is Pandas?

https://en.wikipedia.org/wiki/Pandas_(software)

How do I install it?

There are multiple ways to install pandas. You can find more information here: https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html
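A common route, if you already have a Python installation with pip, is to run pip install pandas from a command prompt.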

Code concept

Once pandas is installed, it’s time to write and test the code. The general concept the code follows is:

  1. Define the API endpoint, parameters and authorization
  2. Send the request and store the returned data.
  3. Convert the returned data into a Pandas Dataframe

As the Table API is currently limited to returning a maximum of 100 records per query, some additional code is needed to send multiple requests with an offset to cover all entries if there are more than 100 records in your table. There should also be some exception handling to make it easier to figure out the root cause if a request fails.
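For example, a table with 250 records needs the initial request plus two follow-up requests with offsets 100 and 200. A small worked sketch of that calculation (it uses the same LIMIT name as the example code below; the record count of 250 is just hypothetical):

import math

LIMIT = 100
number_of_entries = 250  # hypothetical total reported by the API
required_iterations = math.ceil(number_of_entries / LIMIT) - 1  # 2 additional requests
offsets = [(x + 1) * LIMIT for x in range(required_iterations)]  # [100, 200]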

The Coding

I recommend writing and testing the code in a Python IDE of your choice (e.g. Visual Studio Code). That way you can debug and optimize it more easily before copying it into Power BI, where debugging and testing are difficult and not recommended.

Below, you can find example code that will return all records of a table as a Pandas Dataframe. Please refer to the comments in the code for more details.

To use this code you need to replace:

  • <instance> with the name of your instance
  • <api-key> with the API Key of your Bot
  • <secret> with the Secret of your Bot
  • <unique-ID-of-table> with the unique ID of the table you want to import

When you run it, it should output the data from your Tulip Table via the “print” command in the last line of the code. Feel free to adapt the code as you like or take pieces of it for your own approach.
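One adaptation some people prefer: instead of pasting the API Key and Secret directly into the script, you can read them from environment variables. A minimal sketch (the variable names TULIP_API_KEY and TULIP_SECRET are just examples, and they need to be set on whichever machine runs the script, including the one running Power BI):

import os

API_KEY = os.environ["TULIP_API_KEY"]
SECRET = os.environ["TULIP_SECRET"]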

Example Code

Disclaimer: Please note that this code is exemplary only. We will not assume any responsibility for consequences of using it as is or in any adapted form.
import pandas as pd
import requests
import math

# Basic Configuration
INSTANCE_URL = "https://<your-instance>.tulip.co"
API_KEY = "<your-api-key>"
SECRET = "<your-secret>"

LIMIT = 100  # Maximum entries returned per api request. Standard and also max: 100

# API endpoint
endpoint = "/api/v3/tables/<your-unique-ID-of-table>/records"

# Parameters dictionary
# KEEP includeTotalCount and limit in any case for records endpoint
parameters = dict()
parameters["includeTotalCount"] = "true"
parameters["limit"] = LIMIT

# Compile URL
url = INSTANCE_URL + endpoint

# Send request and store returned data - with error handling
try:
    r = requests.get(url, params=parameters, auth=(API_KEY, SECRET))
    r.raise_for_status()
except requests.exceptions.HTTPError as error:  # HTTP errors
    raise SystemExit(error)
except requests.exceptions.RequestException as error:  # Requests errors
    raise SystemExit(error)

# Turn returned data into JSON and then a pandas dataframe
data_json = r.json()
tulip_df = pd.DataFrame.from_dict(data_json)

# Rerun with offset, if more entries than the limit are available

number_of_entries = int(r.headers["X-Total-Count"])
if number_of_entries > LIMIT:

    # Calculate number of iterations
    required_iterations = math.ceil(number_of_entries / LIMIT) - 1
    for x in range(required_iterations):
        # Increment and set offset
        offset = (x + 1) * LIMIT
        parameters["offset"] = offset

        # Send request and store returned data - with error handling
        try:
            r = requests.get(
                url, params=parameters, auth=(API_KEY, SECRET)
            )
            r.raise_for_status()
        except requests.exceptions.HTTPError as error:  # HTTP errors
            raise SystemExit(error)
        except requests.exceptions.RequestException as error:  # Requests errors
            raise SystemExit(error)

        # Convert to json, then a dataframe and append to overall data
        data_json = r.json()
        tulip_df = pd.concat([tulip_df, pd.DataFrame.from_dict(data_json)])

# Print data for review in IDE. Does not impact Power BI functionality
print(tulip_df)
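As a purely optional tweak, if you prefer a continuous row index after the pages have been combined, you can reset the index at the end (Power BI does not need this):

tulip_df = tulip_df.reset_index(drop=True)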

Setting up the data connection within Power BI

Now that we have the code to pull the data in the form of a Dataframe, we just need to add this as a data source in Power BI. To do that, follow the steps below:

  1. Select “Get data” => “More…”

  2. Select “Other” => “Python Script” and confirm with “Connect”. This will open a pop-up window into which you can copy the Python code you tested in your IDE.

  3. Copy your Python code and confirm with “OK”. The image shows the example code provided in this article.

  4. In the next window, check the relevant dataframes in the folder on the left. (If you use the example code, this will show one dataframe called “tulip_df” as in the screenshot below.)

    Click on “Load” to load the data into Power BI as shown in the right part of the window or select “Transform Data” to open the Query Editor to further transform the dataset.

  5. Once set up, you can refresh the data just as you would with any other data source.

If there are issues with the execution of the code inside of Power BI, you might want to check out the setup instructions for Python in Power BI here: https://docs.microsoft.com/en-us/power-bi/connect-data/desktop-python-scripts

Next steps

The code in this guide shows you how to set up the connection for the contents of one table, but you are of course not limited to this. You could write code to import data from multiple tables, or even combine data within Python already (see the sketch below).
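As a rough sketch of that idea (it reuses the configuration values, imports and pagination logic from the example code above; the helper name fetch_table_records and the two table variables are just examples):

def fetch_table_records(table_id):
    # Return all records of one Tulip Table as a pandas DataFrame
    url = INSTANCE_URL + "/api/v3/tables/" + table_id + "/records"
    parameters = {"includeTotalCount": "true", "limit": LIMIT}
    r = requests.get(url, params=parameters, auth=(API_KEY, SECRET))
    r.raise_for_status()
    df = pd.DataFrame.from_dict(r.json())
    number_of_entries = int(r.headers["X-Total-Count"])
    for x in range(math.ceil(number_of_entries / LIMIT) - 1):
        parameters["offset"] = (x + 1) * LIMIT
        r = requests.get(url, params=parameters, auth=(API_KEY, SECRET))
        r.raise_for_status()
        df = pd.concat([df, pd.DataFrame.from_dict(r.json())])
    return df

# One DataFrame per table - each shows up as a selectable dataset in Power BI
first_df = fetch_table_records("<unique-ID-of-first-table>")
second_df = fetch_table_records("<unique-ID-of-second-table>")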

But I hope this guide helps you get started with this approach to integrating your Tulip data within Power BI. If you have any feedback or questions, leave a comment!

8 Likes

A faster way is to use a Web connection and Power Query to make the connection between Power BI and a Tulip Table.
No code and easier.
Why make it simple when you can make it complicated :wink:

3 Likes

@mathieu.rey thanks for your comment!

You are right. The Web connection lets you query the API as well and connect to Tulip Tables easily, though, to my knowledge, in a limited way. As our “records” endpoint has a return limit of 100 records, it will not directly allow you to grab all the data from a table with more than 100 entries. This is why this article was written.

If you know a way to grab all values via the Web connection, it would be great if you could share it!

1 Like

I’ll let you take a look at that previous post

1 Like

Thanks for the quick update. This will be very helpful for others who are looking to use the Power Query way! It’s always good to have multiple approaches to choose from.

One thing to note here is that if the end user needs this report in the Power BI service, they are restricted to the personal gateway. I believe that, as of right now, the enterprise/regular gateway does not support Python scripts as a data source. Python/R visualizations are supported regardless of gateway type.

Has anyone found a way of doing something similar in Tableau?

Looks like you can use Python there as well.

Hello Stefan,

I tried to get data from a Tulip Table using this Python code and encountered “NameError: name ‘headers’ is not defined”. I needed to delete “headers=headers,” around line 50. Kindly fix the original code for community members. Thanks.

Akira

Hello Akira,

Thanks for your feedback! I seem to have missed this bit when I last updated the code - sorry for that. I have corrected the code.

Best,
Stefan

2 Likes

Good afternoon @curtis.hale, welcome to the Tulip community!! It would be great if you could introduce yourself at: Welcome to the Community, let's meet!!

Regarding the error you're seeing, would you mind emailing support@tulip.co? They’ll be able to help resolve this issue.

So I tried to follow this guide, but it didn’t work. I’m totally new to Python. I got some HTTPErrors etc. I need to debug what’s wrong, so I just tried a simple GET request to see if that works in the first place, but I get a 403 Forbidden response. What am I doing wrong?

Hey @erikpets -

Welcome to the Python club! I love Python.

Here is a gist I maintain with many of the common API functions you might need:

403 errors are usually driven by bad credentials (which is an easy thing to mess up). When creating your Bot, you need to use the ‘Auth Header’ to hit the API, not the bot secret. You will see this in my code example too.

If that isn’t the issue, it could be that your bot doesn’t have the right access to be able to read from tables. Tables:read is the setting you need enabled.
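Whichever way you end up authenticating, printing the status code and the response body of one small request usually narrows it down quickly. A minimal sketch, reusing the placeholders and basic-auth style from the guide above:

import requests

r = requests.get(
    "https://<your-instance>.tulip.co/api/v3/tables/<your-unique-ID-of-table>/records",
    params={"limit": 1},
    auth=("<your-api-key>", "<your-secret>"),
)
print(r.status_code)  # 200 means credentials and scope are fine; 401/403 points at the bot setup
print(r.text)         # the response body usually says more about what went wrong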

Pete

Thanks @Pete_Hartnett for the tips.
It seems I found the issue with the original code: I just had to remove the angle brackets <> from the KEY and SECRET.

However, I get the following FutureWarnings when I run the code:

Seems pandas is updating its functions. Maybe it would be worthwhile to also update the example code in the original post here? @stefan

Thanks!

Hey @erikpets -

Thanks for calling this out! I updated the script above.

I don’t currently have BI access to verify functionality, but reach out if you do run into issues.

Pete

Hi @Pete_Hartnett
Thanks for the updated script.
Though when I try to run it in Jupyter, it returns the following error:
(screenshot of the error message)

Hey @erikpets -

Hmmm. Wanna try this and see if you get an error?

import pandas as pand
import requests
import math

# Basic Configuration
INSTANCE_URL = "https://<your-instance>.tulip.co"
API_KEY = "<your-api-key>"
SECRET = "<your-secret>"

LIMIT = 100  # Maximum entries returned per api request. Standard and also max: 100

# API endpoint
endpoint = "/api/v3/tables/<your-unique-ID-of-table>/records"

# Parameters dictionary
# KEEP includeTotalCount and limit in any case for records endpoint
parameters = dict()
parameters["includeTotalCount"] = "true"
parameters["limit"] = LIMIT

# Compile URL
url = INSTANCE_URL + endpoint

# Send request and store returned data - with error handling
try:
    r = requests.get(url, params=parameters, auth=(API_KEY, SECRET))
    r.raise_for_status()
except requests.exceptions.HTTPError as error:  # HTTP errors
    raise SystemExit(error)
except requests.exceptions.RequestException as error:  # Requests errors
    raise SystemExit(error)

# Turn returned data into JSON and then a pandas dataframe
data_json = r.json()
tulip_df = pand.DataFrame.from_dict(data_json)

# Rerun with offset, if more entries than the limit are available

number_of_entries = int(r.headers["X-Total-Count"])
if number_of_entries > LIMIT:

    # Calculate number of iterations
    required_iterations = math.ceil(number_of_entries / LIMIT) - 1
    for x in range(required_iterations):
        # Increment and set offset
        offset = (x + 1) * LIMIT
        parameters["offset"] = offset

        # Send request and store returned data - with error handling
        try:
            r = requests.get(
                url, params=parameters, auth=(API_KEY, SECRET)
            )
            r.raise_for_status()
        except requests.exceptions.HTTPError as error:  # HTTP errors
            raise SystemExit(error)
        except requests.exceptions.RequestException as error:  # Requests errors
            raise SystemExit(error)

        # Convert to json, then a dataframe and append to overall data
        data_json = r.json()
        tulip_df = pand.concat([tulip_df, pand.DataFrame.from_dict(data_json)])

# Print data for review in IDE. Does not impact Power BI functionality
print(tulip_df)

My suspicion is that the pd name is being used somewhere else (like the filename), so that reference to pandas isn’t actually making it there. See this guide:
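A quick way to check that suspicion is to print where the import actually resolves to (just a small sketch):

import pandas as pd
print(pd.__file__)  # should point into site-packages; a path in your own project folder means a local file is shadowing pandas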

Pete

Ok, so my apologies for spamming here and creating a fuss about the script. I got it working eventually, and the issue was not in the script but in how Python was set up on my PC. The root cause of it all is me being totally new to Python.

Initially I followed the links in this guide to install pandas and found suggestions to install the whole Anaconda software, which includes Python and lots of other things. The reason I got different errors when running the script is that Power BI can have issues when Python is hosted by a conda environment. I picked this up from Microsoft’s documentation:

The solution for me was:

  • Uninstall the Anaconda software
  • Install Python directly from their web page
  • pip install pandas using the command prompt
  • pip install matplotlib using the command prompt
1 Like

Hey @erikpets -

Gotcha! I love Python, but that love starts after it is installed and set up. I find the library management process to be incredibly confusing, too.

Glad you got it sorted out!
Pete

1 Like

Anyone interested in a more native Power BI connector? Please start a thread in this forum: https://ideas.powerbi.com/ - if you start a thread, paste the link back here so others can add to it.

It will help us get prioritised for a certified native connector.
Thanks

I have created a thread in the Power BI ideas forum for a native Tulip Power BI connector.
(Link here)
Hopefully it gets some traction.

1 Like