Streamlit Tutorial: A Step-by-Step Guide to Building Graph Applications in Streamlit

By David Hughes / Engineering & AI Practice Director

August 19, 2022


Reading Time: 6 minutes

Data science experts are increasingly tasked with demonstrating value to stakeholders early in new graph analytics, graph data science, and other graph application projects. In this Streamlit tutorial, we’ll share how to support your value proposition with data by quickly developing a standalone application using Streamlit.

Streamlit Tutorial Overview: What is Streamlit?

You don’t need to be a web developer to build meaningful prototypes and applications. Streamlit is a Python-based rapid prototype development framework for using data, Python code, and machine learning models to build standalone applications.

While we believe graph technologies are essential to generating insights that can only be surfaced using connected data, many audiences first need to be oriented to the graph paradigm.

After reading this Streamlit tutorial, your team will be able to rapidly demonstrate valuable insights from clinical trial data analytics to various audiences — without needing to explain the underlying graph data structures.

We’ll use clinical trial data to explore Streamlit’s features. We’ll also rapidly build an application that:

  • Visualizes clinical trials stored in a graph
  • Provides users with intuitive access to trial insights
  • Leverages data-heavy interrelationships for graph algorithmic analytics
  • Presents a prototype that demonstrates meaningful value to project stakeholders

Streamlit Features

Streamlit exposes many data and UI features through its Python API. Thanks to its helpful decorators and a blazingly fast iteration loop, Streamlit makes developing powerful applications easy for developers.

Streamlit boasts a variety of key components for application development, including:

  • Charting
  • Controls
  • Data
  • Input
  • Layout
  • Media
  • State management
  • Status
  • Text

Documentation for each component is clear and includes interactive examples.

In June 2022, Streamlit released native support for multipage applications, giving a single app multiple navigable views. Additionally, Streamlit offers an informative blog and a robust developer community that can accelerate your familiarity with this powerful framework.

Getting Started With Streamlit

With just one command, you can prepare to build a web application. You’ll need Python 3.7 or greater, pip, and your favorite code editor. To start using Streamlit, pip install it into a virtual environment (for example, one created with venv).

pip install streamlit

You can test that Streamlit was properly installed by calling its Hello app.

streamlit hello

After running these two commands, Streamlit’s Hello app should open in the browser, which indicates you’re ready to build an application. With just a few more lines of code, you can create your first application page. Save the following two lines of code in a new Python file (for example, app.py).

import streamlit as st

st.title('My Streamlit App')

You’re now ready to test your application and immediately see the result in your browser. Simply run the following command in your editor’s terminal.

streamlit run app.py

Streamlit Project Organization

Streamlit provides an easy project structure for building applications with many pages or views. In fact, with one Python file for the homepage and additional files in a pages directory, Streamlit will create an application, a left-sided menu, and the functionality to allow users to navigate your application.

Streamlit project organization

This project configuration results in a homepage with application navigation on the left. By prefixing filenames with an integer and underscore, you can quickly configure the page order of the side menu.
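As a sketch, a multipage project for this tutorial might be laid out as follows (the file names here are illustrative, not from the original project):

```text
clinical_trials/
├── Home.py            # homepage; start the app with: streamlit run Home.py
└── pages/
    ├── 1_Sites.py     # integer prefix puts "Sites" first in the side menu
    ├── 2_Trials.py
    └── 3_Analytics.py
```

Streamlit derives the menu labels from the file names, stripping the numeric prefix and underscores.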

Streamlit dashboard

Create an App From Concise Code

With a screen split between a web browser and an editor, a developer can see code edit results in the browser in near real time. For the clinical trials homepage in this Streamlit tutorial, the code and resulting UI look like this:

st.title('Clinical Trials')
st.markdown('## Overview')
st.write('This is a rapid prototype of a clinical trials application for ALS. \n\
        It was built during a 6-hour flight from TPA -> SEA and presented to the team upon landing')

col_1, col_2 = st.columns([1, 1])

with col_1:
    st.markdown('## Key Features \n\
    - Listing \n\
    - Search \n\
    - Filtering \n\
    - Sorting \n\
    - Maps \n\
    - Ranking')

with col_2:
    st.markdown('## Improvements \n\
    - Revise NLP of criteria \n\
    - Load all trials \n\
    - Revise features based on feedback \n\
    - Explore mapping APIs \n\
    - Evaluate success criteria and TTV \n\
    - Start working with frontend team')
Clinical trials homepage in Streamlit

This concise code sets the page title and section headers, creates a two-column layout, and populates the page’s content. It’s easy to see how a developer can quickly prototype an application and engage stakeholders in a rapid iteration cycle.

Streamlit Magic

Streamlit has many convenient helper functions and decorators for control flow, data caching, embedding images, dataframe rendering, and much more. For example, to manage the application’s control flow in response to user input, you can use st.stop and st.success in a manner similar to Python’s try/except error handling.

name = st.text_input('Name')
if not name:
    st.warning('Please input a name.')
    st.stop()
st.success('Thank you for inputting a name.')

Decorators such as @st.cache can be used to check the signature and body of a function (i.e., name, code, inputs) to determine if the function is being run for the first time. If so, Streamlit will run the function and render the output.

If it’s not the first time, Streamlit will render the output from a stored local cache. The Streamlit tutorial and documentation pages for helper functions and decorators quickly demonstrate the efficiency of using Streamlit for application development.
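The caching idea can be illustrated with a simplified, pure-Python sketch. This is not Streamlit’s actual implementation (which also hashes the function’s code so edits invalidate the cache); it is a toy decorator that keys on the function’s name and arguments and serves stored output on a hit:

```python
import functools
import hashlib
import pickle

def simple_cache(func):
    """Toy Streamlit-style cache: key on function name + pickled args;
    rerun only when a key has not been seen before."""
    store = {}

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        key = hashlib.sha256(
            pickle.dumps((func.__name__, args, sorted(kwargs.items())))
        ).hexdigest()
        if key not in store:              # first run: execute and cache
            store[key] = func(*args, **kwargs)
        return store[key]                 # later runs: serve from the cache

    return wrapper

calls = []                                # track real executions

@simple_cache
def load_trials(condition):               # hypothetical expensive loader
    calls.append(condition)
    return f"trials for {condition}"

load_trials("ALS")
load_trials("ALS")                        # cache hit; loader runs only once
```

The second call returns instantly from the cache, which is exactly the behavior that makes @st.cache valuable when a page reruns on every user interaction.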

Enable Interactive Data and Visualizations

In this Streamlit tutorial, we combine core components like the map component with extensions like streamlit-aggrid to build pages that implement data searching, filtering, sorting, and visualization in just a few lines of Python, producing an interactive and engaging data experience.

from st_aggrid import AgGrid, GridOptionsBuilder, GridUpdateMode

st.title("Clinical Trial Sites")

top_expander = st.expander('Data Grid', expanded=False)
bottom_expander = st.expander('Map', expanded=True)
with st.spinner('Getting Clinical Trials Sites...'):
    with top_expander:
        sites = get_trial_sites(driver)
        st.success('Sites retrieved')
        gb = GridOptionsBuilder.from_dataframe(sites)
        gb.configure_default_column(groupable=True, value=True, enableRowGroup=True, aggFunc="sum", editable=True)
        gridOptions =
        grid = AgGrid(sites, gridOptions=gridOptions, enable_enterprise_modules=True,
                      update_mode=GridUpdateMode.SELECTION_CHANGED, theme='material')

with bottom_expander:, zoom=None, use_container_width=True)  # preceding map-data code omitted
ALS clinical trials site listing and visualization

On this page, a user can review ALS trials, filter them using features such as recruiting sites, and then map the location of the resulting trial data.

Analyze the Data

In our data pipeline (which is not part of this Streamlit tutorial) we implemented the graph centrality algorithm PageRank to identify the most influential clinical trial research locations. The algorithm was configured to use the incoming relationships each location has from recruiting ALS clinical trials. The PageRank score was then calculated and stored on each location’s node in the graph to enable our clinical trial data analytics capabilities.
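Our pipeline ran PageRank inside the graph database, but the core idea can be sketched with a toy power iteration in plain Python. The edge list below is hypothetical (trials pointing at the locations recruiting for them), and dangling-node mass is ignored for brevity, so this is an illustration rather than our production configuration:

```python
def pagerank(edges, damping=0.85, iterations=50):
    """Toy power-iteration PageRank over a directed edge list.

    edges: (source, target) pairs, e.g. a recruiting trial pointing
    at a research location. Dangling-node mass is not redistributed.
    """
    nodes = {n for edge in edges for n in edge}
    out_degree = {n: 0 for n in nodes}
    incoming = {n: [] for n in nodes}
    for src, dst in edges:
        out_degree[src] += 1
        incoming[dst].append(src)

    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        rank = {
            n: (1 - damping) / len(nodes)
            + damping * sum(rank[src] / out_degree[src] for src in incoming[n])
            for n in nodes
        }
    return rank

# Hypothetical data: two trials recruit in Boston, one in Tampa.
edges = [
    ("trial_A", "Boston"),
    ("trial_B", "Boston"),
    ("trial_C", "Tampa"),
]
scores = pagerank(edges)
```

Because Boston receives more incoming relationships from recruiting trials, it earns the higher PageRank score, mirroring how the algorithm surfaces influential locations in the full graph.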

ALS clinical trial locations and PageRanks

The resulting score could lead a team to identify key locations for new neurodegenerative disease trials or be used in additional downstream machine learning efforts.


In this Streamlit tutorial, we explored how to rapidly deliver value from clinical trial data to various audiences by creating a standalone interactive application using only Python code. In many similar projects, acquiring, exploring, and ingesting data, identifying insights, and providing interactive access to users requires significant time and effort.

This Streamlit tutorial highlights how graph databases such as Neo4j and tools like Streamlit can shorten the time it takes for subject matter experts, data scientists, and other stakeholders to surface meaningful insights.

As you can see from this example, clinical trial data is rich with features that can support many use cases. If you’re currently running a clinical trial and are interested in how to improve your clinical trial data quality or using graph solutions to accelerate your initiatives, contact us today.

For more specifics, read our articles on clinical trial data analytics, graph ETL / Neo4j ETL with a focus on clinical trial data, clinical trial data quality, and patient journey mapping.


Graphable helps you make sense of your data by delivering expert analytics, data engineering, custom dev and applied data science services.
We are known for operating ethically, communicating well, and delivering on-time. With hundreds of successful projects across most industries, we have deep expertise in Financial Services, Life Sciences, Security/Intelligence, Transportation/Logistics, HighTech, and many others.
Thriving in the most challenging data integration and data science contexts, Graphable drives your analytics, data engineering, custom dev and applied data science success. Contact us to learn more about how we can help, or book a demo today.
