
Python Client API

This is the documentation of the latest version of the Python Client.

Getting started

There are two ways to run scripts using the Serenytics client:

  • Within our web application in the script section. This is especially useful when you need to run a script at regular intervals (with our scheduler).
  • Within a script executed locally. This is useful to access local datasources, or when you need to debug your script with your favorite IDE.

When working directly within our website, the serenytics package is already installed. When working locally, you need to install it: pip install --upgrade serenytics.

You need to import the package and create a Client object:

import serenytics

client = serenytics.Client()

Running a script on your local machine

You must provide your api_key (obtained from the account menu).

client = serenytics.Client(api_key='YOUR_API_KEY')

Client

client.get_data_source_by_name()

Gets the data source object from the given name.

Parameters:

  • name: the name of the requested source

Returns: a DataSource object

Example:

import serenytics
client = serenytics.Client()

source = client.get_data_source_by_name(name='MY_DATA_SOURCE_NAME')

client.get_data_source_by_uuid()

Gets the data source object from the given uuid.

Parameters:

  • uuid: the uuid of the requested source

Returns: a DataSource object

Example:

import serenytics
client = serenytics.Client()

source = client.get_data_source_by_uuid(uuid='182018f9-a18a-4cf0-993f-9e3fa45d3f61')

Tip

It's easier to use the function get_data_source_by_name() but safer to use get_data_source_by_uuid() as the name of a data source may change, but not its uuid.


client.get_data_sources()

Returns all data sources of the authenticated user.

Returns: a list of DataSource objects

Example:

import serenytics
client = serenytics.Client()

data_sources = client.get_data_sources()
for source in data_sources:
    print(source.name, source.type, source.uuid)

client.get_or_create_storage_data_source_by_name()

Gets the data source object from the given name if it exists, otherwise creates it and returns it.

Parameters:

  • name: the name of the requested source

Returns: a DataSource object

Example:

import serenytics
client = serenytics.Client()

source = client.get_or_create_storage_data_source_by_name(name='MY_DATA_SOURCE_NAME')

Warning

If the source needs to be created by this function, it won't appear immediately in the list of sources in the Data menu. You need to reload the page to refresh this list with the newly created storage.


client.get_script_by_uuid()

Gets the script object from the given uuid. This is useful to get a script and then run it within the current script (see the script.run() function).

Parameters:

  • uuid: the uuid of the requested script

Returns: a Script object

Example:

import serenytics
client = serenytics.Client()

script = client.get_script_by_uuid(uuid='MY_SCRIPT_UUID')
script.run(async_=False)

client.get_webapp_by_uuid()

Gets the web app (dashboard) object from the given uuid.

Parameters:

  • uuid: the uuid of the requested web app

Returns: a WebApp object

Example:

import serenytics
client = serenytics.Client()

webapp = client.get_webapp_by_uuid(uuid='182018f9-a18a-4cf0-993f-9e3fa45d3f61')

client.get_webapps()

Returns all webapps (dashboards) of the authenticated user.

Returns: a list of WebApp objects

Example:

import serenytics
client = serenytics.Client()

webapps = client.get_webapps()
for webapp in webapps:
    print(webapp.folder_name, webapp.name)

client.retrieve_script_data()

Retrieves data previously stored with store_script_data().

Returns: the data previously stored with store_script_data()

Example:

import serenytics
client = serenytics.Client(script_id='YOUR_SCRIPT_ID')

data = client.retrieve_script_data()

Client initialization

To store and retrieve script data, you must have passed the script_id parameter at client initialization (see example above).


client.script_args

Returns the JSON object passed as a parameter to the script (see the documentation about the URL /api/script/:script_uuid/run to understand how to pass parameters to a script).

Returns: a dict

Example:

import serenytics
client = serenytics.Client()

params = client.script_args
param1 = params.get('my_param1')

client.send_email()

Sends an email with an optional dashboard attached as PDF. It can also be used to send an email with any generated file. For example, you can generate a CSV file from your data and send it by email with this function.

Warning: There is a limitation to the number of emails you can send within a day

Serenytics cannot be used for massive emailing. The default limit is 50 emails per day. Contact us when you need to increase this limit.

Parameters:

  • subject: string with the email subject
  • recipients: list of recipients
  • html: string containing the html body of the email
  • web_app_uuid[optional]: uuid of the dashboard to generate as PDF and include as an attachment to the email.
  • pdf_file_name[optional]: name of the attached pdf file (default: WEBAPP_NAME.pdf)
  • file_to_attach[optional]: name of a local file to attach to the email (can be any type of file)
  • dashboard_payload[optional]: payload passed to the dashboard to filter its content (see embedded payload doc)

Example to send a dashboard:

import serenytics
client = serenytics.Client()

subject = "Here is your weekly report"
recipients = ['contact@serenytics.com']
html = "Hello,<br> please open the attachment to get your PDF report."
web_app_uuid = 'WEB_APP_UUID'

client.send_email(subject=subject, recipients=recipients, html=html, web_app_uuid=web_app_uuid)

Tip: Sending conditional emails

You can use our API to get some data from a data source and then test the obtained value to decide whether you need to send an email or not. This can be used to create alerts based on the data.
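
For example, here is a minimal sketch of such an alert (the value formula name daily_error_count and the threshold are illustrative):

import serenytics
client = serenytics.Client()

source = client.get_data_source_by_uuid(uuid='MY_SOURCE_UUID')

# value formula returning the number of errors recorded today (illustrative name)
nb_errors = source.get_value_formula('daily_error_count')

# only send the alert when the KPI crosses a threshold
if nb_errors > 100:
    client.send_email(subject='Error alert',
                      recipients=['contact@serenytics.com'],
                      html='Number of errors today: {0}'.format(nb_errors))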

Example to send a generated CSV:

import serenytics
import pandas as pd
import numpy as np
from subprocess import check_call

client = serenytics.Client()

subject = "Your CSV file"
recipients = ['john.doe@serenytics.com']
html = "Hello,<br> Here is the data."

# generate a random dataframe (could be obtained from a Serenytics datasource)
df = pd.DataFrame(np.random.randn(6, 4), columns=list('ABCD'))

filepath = 'filename.csv'
df.to_csv(filepath, index=False, sep=";")

# in this example, we compress the CSV file
check_call(['gzip', filepath])
zip_file = filepath + '.gz'

client.send_email(subject=subject, recipients=recipients, html=html,
                  file_to_attach=zip_file)

Example to send a filtered dashboard:

In this example, the dashboard needs to be a template dashboard and to require the variable __myCountry__. See here for the template dashboard documentation.

import serenytics
client = serenytics.Client()

subject = "Here is your weekly report"
recipients = ['contact@serenytics.com']
html = "Hello,<br> please open the attachment to get your PDF report."
web_app_uuid = 'WEB_APP_UUID'

payload = {
    '__myCountry__': 'FRANCE'
}

client.send_email(subject=subject, recipients=recipients, html=html, web_app_uuid=web_app_uuid,
                  dashboard_payload=payload)

client.send_slack_msg()

Sends a simple text message or an advanced message to a Slack channel. To use this feature, you need to configure Slack in the Serenytics studio (Admin section, Configuration page). Messages can only be sent to the configured Slack workspace.

Parameters:

  • channel: the name of the channel to send the message to.
  • text_msg: the text of the message.
  • attachments[optional]: Slack attachments to specify more complex messages.

See the excellent Slack Message Builder to create and test your attachments.

Example to send a simple kpi value:

import serenytics
client = serenytics.Client()

source = client.get_data_source_by_uuid(uuid='MY_SOURCE_UUID')

kpi = source.get_value_formula("sales_of_the_day")

client.send_slack_msg('sales', 'Sales data', attachments=[
    {
        "text": f'Sales of the day: {kpi}',
        "color": "#2b4e82"
    }
])

client.store_script_data()

Stores JSON-serializable data in the Serenytics cloud for use in a later script execution. The function retrieve_script_data() can be used in the next script execution to retrieve this data.

Parameters:

  • data: a JSON-serializable Python object, often a dict

Example:

import datetime

import serenytics
client = serenytics.Client(script_id='YOUR_SCRIPT_ID')

data = {
  "last_execution": datetime.datetime.now().isoformat()
}

client.store_script_data(data=data)

Client initialization

To store and retrieve script data, you must have passed the script_id parameter at client initialization (see example above).

Tip

The functions store_script_data() and retrieve_script_data() are useful to achieve incremental updates. In one execution, you store the date of execution of the script. In the next execution, you query the data added to your storage since this date, and import it into a Serenytics Push data source.
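
A minimal sketch of this pattern (the function get_new_rows_since() is a placeholder for your own extraction logic):

import datetime

import serenytics
client = serenytics.Client(script_id='YOUR_SCRIPT_ID')

# data stored by the previous execution (assumed empty or None on the first run)
previous = client.retrieve_script_data() or {}
last_run = previous.get('last_run', '1970-01-01T00:00:00')

# get_new_rows_since() is your own code returning the rows added since last_run,
# as a list of dicts
new_rows = get_new_rows_since(last_run)

source = client.get_or_create_storage_data_source_by_name(name='MY_PUSH_DATA_SOURCE')
if new_rows:
    source.batch(rows_to_insert=new_rows)

# remember the date of this execution for the next run
client.store_script_data(data={'last_run': datetime.datetime.now().isoformat()})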


DataSource

source.batch()

Runs multiple data operations on the data source (insert/update/delete).

Parameters:

  • rows_to_insert[optional]: Rows to insert in the data source.
  • rows_to_insert_or_update[optional]: Rows to insert or update if they already exist in the datawarehouse.
  • rows_to_delete[optional]: Rows to delete. This must be a list of primary key values to delete.
  • primary_key[optional]: Name of the column containing the primary key. This parameter is required when you want to update or delete rows.
  • async_[optional]: Whether the API call returns immediately or waits for the data operations to be finished. Default: False.

Note

Either rows_to_insert, rows_to_insert_or_update or rows_to_delete must be supplied to the function, as a list of rows (or, for rows_to_delete, a list of primary key values).

Warning

If async_=True is used, the function call is not blocking and doesn't wait for the data to be imported into the Serenytics datawarehouse, so you have no guarantee that the data has actually been imported. If you need this guarantee, at the expense of a longer function call, keep async_=False (the default).

Example:

import serenytics
client = serenytics.Client()

source = client.get_data_source_by_name(name='MY_SQL_DATA_SOURCE')

data = [{'country': 'FR', 'name': 'John', 'quantity': 10},
        {'country': 'US', 'name': 'Mark', 'quantity': 2},
        {'country': 'UK', 'name': 'John', 'quantity': 1},
        {'country': 'US', 'name': 'David', 'quantity': 8}
]
source.batch(rows_to_insert=data)
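
For updates and deletions, a primary key is required. Here is a minimal sketch (assuming the source has an id column used as the primary key):

import serenytics
client = serenytics.Client()

source = client.get_data_source_by_name(name='MY_SQL_DATA_SOURCE')

# update the row with id=1 if it exists (insert it otherwise),
# and delete the rows with id=2 and id=3
source.batch(rows_to_insert_or_update=[{'id': 1, 'country': 'FR', 'quantity': 15}],
             rows_to_delete=[2, 3],
             primary_key='id')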

Warning

Newly inserted or updated data should have the same structure as the existing data (i.e. the same columns). If not, new columns will be ignored and existing columns missing from the new rows will be filled with NULL.

Warning

Only use column names in lower case without accented characters. This is a restriction from our internal datawarehouse. You can later use our source mapper in the user interface to rename your columns for the user designing dashboards.

source.get_columns()

Returns the list of columns of the datasource (only column names and types, not the content of the columns).

Example

import serenytics
client = serenytics.Client()

source = client.get_or_create_storage_data_source_by_name(name='MY_DATA_SOURCE_NAME')

# get all the columns
source_columns = source.get_columns()

# print the columns of the data source
print(source_columns)

source.get_data()

Extracts data from the data source. This function returns at most 20,000 rows. You can pass options to specify a query to group, filter, order and select only some columns from the datasource.

Use get_columns if you only need to retrieve the column names, not their content.

Parameters:

  • options[optional]: the options to pass to the data query to select/filter/groupby/order the result.

Returns: a Data object

Simple query to get all the data from a source

import serenytics
client = serenytics.Client()

source = client.get_or_create_storage_data_source_by_name(name='MY_DATA_SOURCE_NAME')

# get all the data from a source
all_data = source.get_data()

# print the rows of the data source
print(all_data.rows)

Specifying options for the query

To execute an advanced query, you need to pass the options parameter. Here is an example of such a query:

options = {
    'order': 'row_by_row',
    'data_processing_pipeline': [{
        'group_by': ['country', 'year'],
        'order_by': [{'name': 'revenue', 'agg_function': 'sum', 'direction': 'desc'}],
        'select': [{'name': 'revenue', 'agg_function': 'sum'},
                   {'name': 'invoice', 'agg_function': 'dcount'}],
        'where': [{'name': 'revenue', 'op': '>', 'value': '20'}],
        'limit': 100,
    }]
}

data = source.get_data(options=options)

order is an option to get data by rows or by columns. In the above example we choose to get the data by rows.

In the field data_processing_pipeline, you can set options to specify your query. Currently, you can only specify one stage for this API (i.e. data_processing_pipeline must be a list with a single object as in the above example).

In data_processing_pipeline[0], you can use the following fields:

  • group_by: the list of columns to group by.
    • if a column has a date format, you can provide the timeslot used for grouping the data. In this case, use the syntax 'group_by': [{'name': 'myDate', 'timeslot': 'TIMESLOT_VALUE'}], where TIMESLOT_VALUE can be year, quarter, month, week, weekday, day, hour or minute (see the example after this list).
  • select: the list of columns you want to retrieve. If you specified values in group_by, you must specify for each column the aggregate function to use: sum, min, max, avg, dcount.
  • order_by: list of columns to use to sort the values. If you want to sort using an aggregated column, you must specify the same aggregate as in the select field.
  • limit: maximum number of rows returned.
  • where: a list of conditions to filter the data. Conditions in the list are combined with an AND operator in our engine. Each condition is a dictionary with name, op and value.
    • op values are:
      • <, >, <=, >=, !=, ==, in, !in
      • is_defined, is_not_defined (to check if a value is null, or is not null)
      • is_empty, is_not_empty (to check if a string is empty or not empty)
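
For instance, here is a minimal sketch of a query grouping a date column by month (the column names creation_date and revenue are illustrative):

options = {
    'order': 'row_by_row',
    'data_processing_pipeline': [{
        'group_by': [{'name': 'creation_date', 'timeslot': 'month'}],
        'select': [{'name': 'revenue', 'agg_function': 'sum'}],
    }]
}

data = source.get_data(options=options)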

For operators in and !in, you must provide a string type for the value:

options = {
    'order': 'row_by_row',
    'data_processing_pipeline': [{
        'select': [{'name': 'revenue', 'agg_function': 'sum'},
                   {'name': 'invoice', 'agg_function': 'dcount'}],
        'where': [{'name': 'quarter', 'op': 'in', 'value': '["Q1","Q2"]'},
                  {'name': 'category', 'op': '!in', 'value': '[1,2,5,9]'}],
        'limit': 100,
    }]
}

data = source.get_data(options=options)

Warning: Getting data from a join

When querying data from a join, instead of using the column prefix in the column name, you must provide the underlying datasource uuid. That is, instead of writing myprefix.column_name, you have to write uuid_datasource1.column_name.

Warning: Getting data from a measure

To query a measure, you must provide its name, but also its measureUuid field. Contact us to know how to obtain this information from your measures.


source.get_value_formula()

Returns the result of a value formula (i.e. a formula that returns a single value, not a column) of the datasource.

Parameters:

  • formula_name: the name of the value formula on the datasource.

Returns: a single value, which is the result of the value formula requested.

Example:

import serenytics
client = serenytics.Client()

source = client.get_or_create_storage_data_source_by_name(name='MY_DATA_SOURCE_NAME')

kpi = source.get_value_formula('formula_name')

print(kpi)

source.get_dataset()

Returns the full dataset of the source as a pandas DataFrame.

Returns: a pandas DataFrame object

Example

import serenytics
client = serenytics.Client()

source = client.get_or_create_storage_data_source_by_name(name='MY_DATA_SOURCE_NAME')

# get all the data from the source as a pandas DataFrame
df = source.get_dataset()

# print the shape of the dataframe
print(df.shape)

source.get_nb_rows()

Returns the number of rows in the source.

Example:

import serenytics
client = serenytics.Client()

data_sources = client.get_data_sources()
for source in data_sources:
    print("The source '{0}' contains {1} rows".format(source.name, source.get_nb_rows()))

source.invalidate_cache()

Invalidates data source cache.

Subsequent calls to source.get_data(), and queries made when loading a dashboard using this source, won't use the current cache. This works regardless of the cache configuration of the source.

Example:

import serenytics
client = serenytics.Client()

source = client.get_data_source_by_name(name='MY_SQL_DATA_SOURCE')
source.invalidate_cache()

source.push_data()

Appends one row of data to the source (must be a Storage source).

Parameters:

  • data: the row of data to append. It must be a dict containing column names as keys and values as values.

Example:

import serenytics
client = serenytics.Client()

data = {'country': 'FR', 'name': 'John', 'quantity': 10}

source = client.get_or_create_storage_data_source_by_name(name='MY_DATA_SOURCE_NAME')
source.push_data(data=data)

Warning

This call is not blocking and doesn't wait for the data to be imported into the Serenytics datawarehouse, so you have no guarantee that the data has actually been imported. If you need this guarantee, at the expense of a longer function call, use the batch() method and group all your rows of data in a list.

Warning

Only use column names in lower case. This is a restriction from our internal datawarehouse. You can later use our source mapper in the user interface to rename your columns for the user designing dashboards.


source.reload_data()

Loads data into the data source. This function can be used to reset the content of a storage data source and/or to modify its schema.

Parameters:

  • new_data: the rows to use to fill the data source. The accepted format is a list of dictionaries.

Warning

If the data source already has some data, it will be lost and replaced by the new data provided.

Example:

import serenytics
client = serenytics.Client()

data = [{'country': 'FR', 'name': 'John', 'quantity': 10},
        {'country': 'US', 'name': 'Mark', 'quantity': 2},
        {'country': 'UK', 'name': 'John', 'quantity': 1},
        {'country': 'US', 'name': 'David', 'quantity': 8}
]

source = client.get_or_create_storage_data_source_by_name(name='MY_DATA_SOURCE_NAME')
source.reload_data(new_data=data)

Performance tip

If possible, use reload_data_from_file(). It's a lot faster.

Warning

Only use column names in lower case. This is a restriction from our internal datawarehouse. You can later use our source mapper in the user interface to rename your columns for the user designing dashboards.

Handling dates

To be detected as a datetime by the Serenytics engine, a value must be provided in ISO 8601 format (e.g. '2016-06-28T12:46:33').
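
For example, here is a minimal sketch converting Python datetime objects to this format before loading (the column names are illustrative):

import datetime

import serenytics
client = serenytics.Client()

# ISO 8601 string without microseconds, e.g. '2016-06-28T12:46:33'
created_at = datetime.datetime.now().replace(microsecond=0).isoformat()

data = [{'name': 'John', 'created_at': created_at},
        {'name': 'Mark', 'created_at': created_at}]

source = client.get_or_create_storage_data_source_by_name(name='MY_DATA_SOURCE_NAME')
source.reload_data(new_data=data)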


source.reload_data_from_array()

Loads data into the data source, using list of columns and list of rows as input. This function can be used to reset the content of a storage data source and/or to modify its schema.

Parameters:

  • columns: list of strings, each string is a column name
  • rows: list of lists of values, each inner list is a row

Performance tip

If possible, use reload_data_from_file(). It's a lot faster.

Warning

If the data source already has some data, it will be lost and replaced by the new data provided.

Example:

import serenytics
client = serenytics.Client()

columns = ['country', 'name', 'quantity']
rows = [['FR', 'John', 10],
        ['US', 'Mark', 2],
        ['UK', 'John', 1],
        ['US', 'David', 8]]

source = client.get_or_create_storage_data_source_by_name(name='MY_DATA_SOURCE_NAME')
source.reload_data_from_array(columns=columns, rows=rows)

source.reload_data_from_dataframe()

Loads data into the data source, using a pandas dataframe as input. This function can be used to reset the content of a storage data source and/or to modify its schema.

Parameters:

  • df: the pandas DataFrame to use to fill the data source.
  • mode: either json or csv. In json mode (default), your data is converted to a dictionary to be uploaded. In csv mode, the dataframe is converted to a CSV file to be uploaded.
  • force_types: see reload_data_from_file(). Can be used only in csv mode.
  • separator: the separator to use to convert your dataframe to a CSV file to be uploaded (default is |). Only for csv mode.

Performance tip

If possible, use csv mode. It's a lot faster.

Warning

If the data source already has some data, it will be lost and replaced by the new data provided.

Example:

import serenytics
import pandas

client = serenytics.Client()
df = pandas.DataFrame({'country': ['FR', 'US', 'UK', 'US'],
                       'name': ['John', 'Mark', 'John', 'David'],
                       'quantity': [10, 2, 1, 8]})

source = client.get_or_create_storage_data_source_by_name(name='MY_DATA_SOURCE_NAME')
source.reload_data_from_dataframe(df, mode='csv')

source.reload_data_from_file()

Loads data into the data source, using a file as input. This function can be used to reset the content of a storage data source and/or to modify its schema.

Parameters:

  • filename: path of the file to load (string). The file must be a csv file (.csv) and can be gzipped (.csv.gz) for better transfer speed and shorter script execution times. The file must be encoded in UTF-8 otherwise unknown characters will be replaced by ?.
  • separator[optional]: Default ,. CSV field separator, usually a comma or a semicolon.
  • force_types[optional]: Default None. Dict mapping column names to types, used to force column types in our storage. A type is a string to choose from: 'int32', 'int64', 'float32', 'float64', 'datetime', 'str', 'str(X)'. If not specified, our engine will infer the data types. int64 is an integer on 64 bits (8 bytes), also referred to as BigInt in some databases. float64 is a double precision float. str(X) is a varchar column of length X.

Warning

If the data source already has some data, it will be lost and replaced by the new data provided.

Example:

import serenytics
client = serenytics.Client()

filename = 'PATH_TO_LOCAL_FILE'

force_types = {
  'creation_date': 'datetime'
}

source = client.get_or_create_storage_data_source_by_name(name='MY_DATA_SOURCE_NAME')
source.reload_data_from_file(filename=filename, separator=';', force_types=force_types)

source.reload_users_metadata()

Reloads users metadata with the data from this source.

The data source should have a first column named login with one row per user in the organization. Other columns will be stored as user metadata.

Warning

The authenticated user must be an admin of this organization.

Example:

import serenytics
client = serenytics.Client()

source = client.get_data_source_by_name(name='MY_DATA_SOURCE_NAME')
source.reload_users_metadata()

source.set_property()

Sets a property of the datasource. You need to call the save() function on the source to save your changes.

Parameters:

  • property_name: Name of the property to set.
  • property_value: Value of the property.

Warning

Contact support@serenytics.com to get the list of properties available for a given data source type.

Example:

import serenytics
client = serenytics.Client()

source = client.get_data_source_by_name(name='MY_DATA_SOURCE_NAME')
source.set_property("server_ip", "monserveursql.domain.com")
source.save()

source.save()

Saves the source object.

Example:

import serenytics
client = serenytics.Client()

source = client.get_data_source_by_name(name='MY_DATA_SOURCE_NAME')
source.set_property("server_ip", "monserveursql.domain.com")
source.save()

source.update_data_from_file()

Inserts rows from a file into the current data source. If a row in the file has the same primary key as a row in the existing table, the row in the existing table will be replaced by the new one. The primary key column in the final table will only contain unique values.

Parameters:

  • filename: path of the file to load. The file must be a csv file (.csv) and can be gzipped (.csv.gz) for better transfer speed and shorter script execution times.
  • primary_key: column name of the primary key (the key used to uniquely define the data rows, such as an id or a guid).
  • separator[optional]: Default ,. CSV field separator, usually a comma or a semicolon.

Example:

import serenytics
client = serenytics.Client()

filename = 'PATH_TO_LOCAL_FILE'

source = client.get_data_source_by_name(name='MY_DATA_SOURCE_NAME')
source.update_data_from_file(filename=filename, primary_key='id')

Data

The Data object is returned by the function source.get_data().

data.columns

List of column names from the data source.

data.headers

List of headers and their types from the data source.

data.rows

List of rows obtained from the data source. Each row is a list of values.
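
For example, a minimal sketch reading these attributes (the printed values are illustrative):

import serenytics
client = serenytics.Client()

source = client.get_or_create_storage_data_source_by_name(name='MY_DATA_SOURCE_NAME')
data = source.get_data()

print(data.columns)   # e.g. ['country', 'name', 'quantity']
print(data.headers)   # column names with their types
print(data.rows)      # e.g. [['FR', 'John', 10], ['US', 'Mark', 2]]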

data.get_as_dataframe()

Returns: a pandas DataFrame containing the data.

Example:

import serenytics
import pandas as pd

client = serenytics.Client()

source = client.get_or_create_storage_data_source_by_name(name='MY_DATA_SOURCE_NAME')

# get all the data from a source, as a pandas dataframe
df = source.get_data().get_as_dataframe()

# print the dataframe
print(df)

Script

script.run()

Runs the script from a script object. See client.get_script_by_uuid() to retrieve a script object.

Parameters:

  • params[optional]: Parameters to pass to the script execution (python dict).
  • async_[optional]: Whether the method call returns immediately or waits for the script execution to be finished (default: True, i.e. it returns immediately).
  • raise_on_error[optional]: Whether this call raises an exception if the script execution fails (default: True).
  • log_result[optional]: Whether to log the script execution log (default: False).

Returns: None if async_=True. If async_=False, returns the log of the execution.

When you pass params to a Python script, you can retrieve these parameters within the Python code with client.script_args.

You can also pass parameters to an ETL step. Using this feature, you can

  • pass variables to an ETL step and use these variables in the formulas of the input datasource. This is useful when a parameter loops within a range of values.
  • pass the append keyword (true/false) to specify if you want to reset the table or append the data.

See the second example below for more details.

Example:

import serenytics
client = serenytics.Client()

script = client.get_script_by_uuid(uuid='MY_SCRIPT_UUID')

params = {
  'business_unit': 'BU1',
  'country': 'US'
}
script.run(params=params, async_=False)

Example for ETL step:

Let's say that in your input datasource, you have a formula is_purchase_done_in_N_days_after_customer_creation that returns 1 if the purchase was done in the N days following the customer creation, else 0.

This formula looks like:

IF date_diff([date_creation], [purchase_date], "days") > {{N_days}} THEN 1 ELSE 0

The {{N_days}} syntax means that N_days must be provided in a payload.

You can iterate through some values of N_days and concatenate the results of the ETL step:

import serenytics
client = serenytics.Client()

script = client.get_script_by_uuid(uuid='MY_SCRIPT_UUID')

for d in [30,60,90,120]:
    params = {
      'payload': {'N_days':d},
      'append': d>30
    }
    script.run(params=params, async_=False)

Using this code, in the first iteration (for d=30), the output of the ETL step will erase the output table content. All the following iterations will append their content, each one for a different value of N_days.


WebApp

The WebApp object models a Serenytics dashboard.

webapp.folder_name

Gets the folder name of the web app as seen in the Serenytics studio.

Example:

import serenytics
client = serenytics.Client()

webapp = client.get_webapps()[0]

print(webapp.folder_name)

webapp.set_custom_css()

Applies custom CSS to the webapp.

Parameters:

  • css: the css to apply (string).

Example:

import serenytics
client = serenytics.Client()

webapp = client.get_webapps()[0]

CSS = """
.srn-widget-header {
    background-color: grey;
    color: #e19500;
}
"""
webapp.set_custom_css(CSS)
webapp.save()

webapp.set_custom_html_header()

Applies the given HTML header to the web app, instead of the default header which simply displays the web app name.

Parameters:

  • html: the html content of the header to apply (string).

Example:

import serenytics
client = serenytics.Client()

webapp = client.get_webapps()[0]

HTML = """
<div style="overflow:hidden">
  <div style="float:left">
      <h1 style="padding: 20px; margin: 0px;">Serenytics</h1>
  </div>

  <div style="float:right">
      <a href="http://www.serenytics.com" target="_blank">
      <img style="padding:20px;" src="https://www.serenytics.com/images_srn/logo_txt.png" width=200px>
      </a>
  </div>
</div>
"""
webapp.set_custom_html_header(HTML)
webapp.save()

The above example applies a header containing 2 columns (a title and an image).


webapp.warm_up_cache()

Warms up the Serenytics data cache for this dashboard. This function loads the dashboard to fill its cache, but it does not invalidate the datasource caches (you have to do that yourself, if required, before calling this function).

Parameters:

  • payload[optional]: payload passed to the dashboard when warming up its cache (see the embedded payload doc).

Example:

import serenytics
client = serenytics.Client()

webapp = client.get_webapps()[0]
webapp.warm_up_cache()

webapp.get_list_of_data_sources()

Returns the list of datasources used by the widgets of this dashboard. The returned object is a list of dicts with keys name and uuid.
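
Example (a sketch combining this function with invalidate_cache() and warm_up_cache() to refresh a dashboard's caches):

import serenytics
client = serenytics.Client()

webapp = client.get_webapp_by_uuid(uuid='MY_WEBAPP_UUID')

# invalidate the cache of each datasource used by the dashboard...
for source_info in webapp.get_list_of_data_sources():
    source = client.get_data_source_by_uuid(uuid=source_info['uuid'])
    source.invalidate_cache()

# ...then warm the dashboard cache up again
webapp.warm_up_cache()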