Guide: Using the Table API and Python to import data from a Tulip Table into Power BI

Hello all,

We have been asked about ways to automatically import all data from a Tulip Table into Power BI.
This guide shows you how to accomplish this for the records of a table, leveraging the Tulip Table API and Python. This article assumes basic knowledge of HTTP requests and Python coding.

There are 3 main steps:

  1. Within Tulip: Setting up the Tulip API Bot and grabbing the unique Table ID
  2. Within a Python IDE of your choice: Adapting and testing the code
  3. Within Power BI: Setting up the data connection

Let’s get started!

Setting up the Tulip API Bot and grabbing the unique Table ID

Follow this guide to set up a Table API bot, up to the point where you obtain an API Key and a Secret:

The bot only needs the “tables:read” scope.

You will need the API Key and the Secret later, so make sure to copy them somewhere you’ll find them again. (Be aware that with these credentials one can access your tables from outside of Tulip, so pick a secure place and/or delete them again later.)
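As a side note, one way to avoid hardcoding the credentials in your script later is to read them from environment variables. A minimal sketch, assuming hypothetical variable names TULIP_API_KEY and TULIP_SECRET:

```python
import os

# Hypothetical environment variable names - set them in your shell beforehand,
# e.g. "export TULIP_API_KEY=..." on Linux/macOS. The placeholder strings are
# used as fallbacks if the variables are not set.
API_KEY = os.environ.get("TULIP_API_KEY", "<your-api-key>")
SECRET = os.environ.get("TULIP_SECRET", "<your-secret>")
```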

Then open the table from which you want to import the data to Power BI. The “random” part of the URL of that page is the unique ID of the table. Copy this somewhere as well for later use.
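For illustration (the URL below is made up), the table ID is the last path segment of the table’s URL, so you could also extract it programmatically:

```python
from urllib.parse import urlsplit

# Hypothetical table URL - replace with the URL of your own table page
table_url = "https://my-instance.tulip.co/tables/aKzvMzQZBaNBzW9AD"

# The unique table ID is the last segment of the URL path
table_id = urlsplit(table_url).path.rstrip("/").split("/")[-1]
print(table_id)  # aKzvMzQZBaNBzW9AD
```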


That’s it for steps within Tulip. Next up: Python.

Writing and testing the Python Code


To be able to use whatever data we pull from Tulip within Power BI later, we need to change it into a pandas dataframe. So you need to make sure that you have pandas installed on your machine.

What is Pandas?

How do I install it?

There are multiple ways to install Pandas. You can find more information here:
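Once installed, you can quickly verify from Python that pandas (and the requests library used later in this guide) can be imported:

```python
import pandas as pd
import requests

# Print the installed versions to confirm both libraries are available
print("pandas:", pd.__version__)
print("requests:", requests.__version__)
```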

Code concept

Once pandas is installed, it’s time to write and test the code. The general concept the code follows is:

  1. Define the API endpoint, parameters and authorization
  2. Send the request and store the returned data.
  3. Convert the returned data into a Pandas Dataframe

As the Table API is currently limited to returning a maximum of 100 records per query, some additional code is needed to send multiple requests with an offset to cover all entries if there are more than 100 records in your table. Also, some exception handling should be included to make it easier to figure out the root cause if a request fails.
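To make the offset logic concrete, here is the arithmetic the example code uses, with a made-up total of 342 records:

```python
import math

LIMIT = 100  # maximum records per request
total = 342  # hypothetical total record count (from the X-Total-Count header)

# The first request (offset 0) is already done,
# so ceil(total / LIMIT) - 1 additional requests are needed
required_iterations = math.ceil(total / LIMIT) - 1
offsets = [(x + 1) * LIMIT for x in range(required_iterations)]
print(offsets)  # [100, 200, 300]
```

With 342 records, four requests in total (offsets 0, 100, 200, 300) cover every entry.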

The Coding

I recommend writing and testing the code in a Python IDE of your choice (e.g., Visual Studio Code). That way you can debug and optimize it more easily before copying it into Power BI, where debugging and testing is very difficult and not recommended.

Below, you can find example code that will return all records of a table as a Pandas Dataframe. Please refer to the comments in the code for more details.

To use this code you need to replace:

  • <instance> with the name of your instance
  • <api-key> with the API Key of your Bot
  • <secret> with the Secret of your Bot
  • <unique-ID-of-table> with the unique ID of the table you want to import

When you run it, it should output the data from your Tulip Table with the “print” command in the last row of the code. Feel free to adapt the code as you like or take pieces of it for your own approach.

Example Code

Disclaimer: Please note that this code is exemplary only. We will not assume any responsibility for consequences of using it as is or in any adapted form.
import pandas as pd
import requests
import math

# Basic Configuration
INSTANCE_URL = "https://<your-instance>"
API_KEY = "<your-api-key>"
SECRET = "<your-secret>"

LIMIT = 100  # Maximum entries returned per API request (default and maximum: 100)

# API endpoint
endpoint = "/api/v3/tables/<your-unique-ID-of-table>/records"

# Parameters dictionary
# KEEP includeTotalCount and limit in any case for records endpoint
parameters = dict()
parameters["includeTotalCount"] = "true"
parameters["limit"] = LIMIT

# Compile URL
url = INSTANCE_URL + endpoint

# Send request and store returned data - with error handling
try:
    r = requests.get(url, params=parameters, auth=(API_KEY, SECRET))
    r.raise_for_status()  # Raise HTTPError for 4xx/5xx responses
except requests.exceptions.HTTPError as error:  # HTTP errors
    raise SystemExit(error)
except requests.exceptions.RequestException as error:  # Other request errors
    raise SystemExit(error)

# Turn returned data into JSON and then a pandas dataframe
data_json = r.json()
tulip_df = pd.DataFrame.from_dict(data_json)

# Rerun with offset, if more entries than the limit are available

number_of_entries = int(r.headers["X-Total-Count"])
if number_of_entries > LIMIT:

    # Calculate number of iterations
    required_iterations = math.ceil(number_of_entries / LIMIT) - 1
    for x in range(required_iterations):
        # Increment and set offset
        offset = (x + 1) * LIMIT
        parameters["offset"] = offset

        # Send request and store returned data - with error handling
        try:
            r = requests.get(url, params=parameters, auth=(API_KEY, SECRET))
            r.raise_for_status()  # Raise HTTPError for 4xx/5xx responses
        except requests.exceptions.HTTPError as error:  # HTTP errors
            raise SystemExit(error)
        except requests.exceptions.RequestException as error:  # Other request errors
            raise SystemExit(error)

        # Convert to json, then a dataframe and append to overall data
        # (DataFrame.append was removed in pandas 2.0, so use pd.concat)
        data_json = r.json()
        tulip_df = pd.concat([tulip_df, pd.DataFrame.from_dict(data_json)], ignore_index=True)

# Print data for review in IDE. Does not impact Power BI functionality
print(tulip_df)

Setting up the data connection within Power BI

Now that we have the code to pull the data in the form of a Dataframe, we just need to add this as a data source in Power BI. To do that, follow the steps below:

  1. Select “Get data” => “More…”

  2. Select “Other” => “Python Script” and confirm with “Connect”. This will open a pop-up window into which you can copy the Python code you tested in your IDE.

  3. Copy your Python code and confirm with “OK”. The image shows the example code provided in this article.

  4. In the next window, check the relevant dataframes in the folder on the left. (If you use the example code, this will show one dataframe called “tulip_df”, as in the screenshot below.)

    Click on “Load” to load the data into Power BI as shown in the right part of the window or select “Transform Data” to open the Query Editor to further transform the dataset.

  5. Once set up, you can refresh the data as you are used to from other data sources.

If there are issues with the execution of the code inside of Power BI, you might want to check out the setup instructions for Python in Power BI here:

Next steps

The code in this guide shows you how to set up the connection for the content of one table. But you are of course not limited to this. You could write other code to import data from multiple tables, or even combine data within Python already.

But I hope this guide helps you get started with this approach to integrating your Tulip data within Power BI. If you have any feedback or questions, leave a comment!


A faster way is to use the Web connection and Power Query to make the connection between Power BI and a Tulip Table.
No code, and easier.
Why make it simple when you can make it complicated :wink:


@mathieu.rey thanks for your comment!

You are right. The Web connection lets you query the API as well and connect to Tulip Tables easily, though, to my knowledge, in a limited way. As our “records” endpoint has a return limit of 100 records, this will not directly allow you to grab all the data from a table if it has more than 100 entries. This is why this article was written.

If you know a way to grab all values via the Web connection, it would be great if you could share it!


I’ll let you have a look at that previous post

Thanks for the quick update. This will be very helpful for others who are looking to use the Power Query way! It’s always good to have multiple approaches to choose from.