Data science teams working on graph analytics, graph data science, and other graph application projects are increasingly asked to demonstrate value to stakeholders early. In this Streamlit tutorial, we’ll share how to support your value proposition with data by quickly developing a standalone application using Streamlit.
Streamlit Tutorial Overview: What is Streamlit?
You don’t need to be a web developer to build meaningful prototypes and applications. Streamlit is a Python-based rapid prototype development framework for using data, Python code, and machine learning models to build standalone applications.
While we believe graph technologies are essential to generating insights that can only be surfaced using connected data, many audiences first need to be oriented to the graph paradigm.
After reading this Streamlit tutorial, your team will be able to rapidly demonstrate valuable insights from clinical trial data analytics to various audiences — without needing to explain the underlying graph data structures.
We’ll use clinical trial data from ClinicalTrials.gov to explore Streamlit’s features. We’ll also rapidly build an application that:
- Visualizes clinical trials stored in a graph
- Provides users with intuitive access to trial insights
- Leverages data-heavy interrelationships for graph algorithmic analytics
- Presents a prototype that demonstrates meaningful value to project stakeholders
Streamlit exposes many data and user-interface features through its Python API. Thanks to its helpful decorators and a blazingly fast iteration loop, Streamlit makes developing powerful applications easy.
Streamlit boasts a variety of key components for application development, including:
- State management
Documentation for each component is clear and includes interactive examples.
In June 2022, Streamlit released a feature that enables multi-page applications with navigation similar to a single-page app (SPA). Additionally, Streamlit offers an informative blog and a robust developer community that can accelerate your adoption of this powerful framework.
Getting Started With Streamlit
With just one command, you can be ready to build a web application. You’ll need Python 3.7 or greater, pip, and your favorite code editor. To start using Streamlit, pip install it into a virtual environment such as one created with venv.
pip install streamlit
You can test that Streamlit was properly installed by launching its built-in Hello app.

streamlit hello

After running these two commands, Streamlit’s Hello app should open in the browser, which indicates you’re ready to build an application. With just a few more lines of code, you can create your first application page. Save the following two lines of code in a file named ‘my_app.py’.
import streamlit as st
st.title('My Streamlit App')
You’re now ready to test your application and immediately see the result in your browser. Simply run the following command in your editor’s terminal.
streamlit run my_app.py
Streamlit Project Organization
Streamlit provides an easy project structure for building applications with many pages or views. In fact, with one Python file for the homepage and additional files in a pages directory, Streamlit will create an application, a left-sided menu, and the functionality to allow users to navigate your application.
This project configuration results in a homepage with application navigation on the left. By prefixing filenames with an integer and underscore, you can quickly configure the page order of the side menu.
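As an illustration (the file names below are hypothetical, not from this project), a minimal multi-page layout and the way Streamlit turns filenames into menu labels might be sketched like this:

```python
# Hypothetical multi-page project layout:
#
#   my_app.py              <- homepage; run with: streamlit run my_app.py
#   pages/
#     1_Search_Trials.py   <- "Search Trials" appears first in the side menu
#     2_Trial_Sites.py     <- "Trial Sites" appears second
#
# Streamlit derives each menu label from the filename: it strips the
# leading "<number>_" ordering prefix and turns underscores into spaces.
# The helper below is our own illustration of that naming rule.
def menu_label(filename: str) -> str:
    stem = filename.rsplit('.', 1)[0]        # drop the .py extension
    prefix, _, rest = stem.partition('_')    # split off a leading "1_"
    if prefix.isdigit() and rest:
        stem = rest
    return stem.replace('_', ' ')
```

Renaming a file in `pages/` is all it takes to reorder or relabel the side menu.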
Create an App From Concise Code
With a screen split between a web browser and an editor, a developer can see code edit results in the browser in near real time. For the clinical trials homepage in this Streamlit tutorial, the code and resulting UI look like this:
This concise code embeds a logo image, sets the page title and section headers, creates a two-column layout, and populates the page’s content. It’s easy to see how a developer can quickly prototype an application and engage stakeholders in a rapid interaction cycle.
Streamlit has many convenient helper functions and decorators to implement control flow, data caching, image embedding, dataframe rendering, and much more. For example, to quickly manage the application’s control flow in response to user input, a developer can use functions such as “st.warning”, “st.stop”, and “st.success” in a manner similar to Python’s “try/except” error handling.
name = st.text_input('Name')
if not name:
    st.warning('Please input a name.')
    st.stop()
st.success('Thank you for inputting a name.')
Decorators such as “@st.cache” can be used to check the signature and body of a function (i.e., name, code, inputs) to determine if the function is being run for the first time. If so, Streamlit will run the function and render the output.
If it’s not the first time, Streamlit will render the output from a stored local cache. The Streamlit tutorial and documentation pages for helper functions and decorators quickly demonstrate the efficiency of using Streamlit for application development.
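The caching idea can be sketched in plain Python, without Streamlit, using the standard library’s memoization decorator. This is only an analogy for how “@st.cache” avoids re-running expensive work (the real decorator also hashes the function body, which this sketch omits); the function and data here are made up for illustration.

```python
import functools

call_count = 0  # tracks how often the "expensive" work actually runs

@functools.lru_cache(maxsize=None)
def load_trials(disease: str) -> list:
    # Stand-in for an expensive query; results are keyed by the inputs.
    global call_count
    call_count += 1
    return [f'{disease}-trial-{i}' for i in range(3)]

load_trials('ALS')  # first call: the function body runs
load_trials('ALS')  # second call: served from the cache, body is skipped
```

After both calls, `call_count` is still 1, which is the behavior Streamlit’s caching gives you across page reruns.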
Enable Interactive Data and Visualizations
In this Streamlit tutorial, we create our app using core components like the map component, combined with extensions like streamlit-aggrid, to build pages that implement data searching, filtering, sorting, and visualization in just a few lines of Python code, producing an interactive and engaging data experience.
st.title("Clinical Trial Sites")
top_expander = st.expander('Data Grid', expanded=False)
bottom_expander = st.expander('Map', expanded=True)
with st.spinner('Getting Clinical Trials Sites...'):
    with top_expander:
        sites = get_trial_sites(driver)
        log.info('Sites retrieved')
        gb = GridOptionsBuilder.from_dataframe(sites)
        gb.configure_side_bar()
        gb.configure_default_column(groupable=True, value=True,
                                    enableRowGroup=True, aggFunc="sum",
                                    editable=True)
        gb.configure_selection(selection_mode="single",
                               pre_selected_rows=[], use_checkbox=False)
        gridOptions = gb.build()
        grid = AgGrid(sites, gridOptions=gridOptions,
                      enable_enterprise_modules=True,
                      update_mode=GridUpdateMode.SELECTION_CHANGED,
                      theme='material')
    with bottom_expander:
        # (some code omitted)
        st.map(data=sites_df, zoom=None, use_container_width=True)
On this page, a user can review ALS trials, filter them using features such as recruiting sites, and then map the location of the resulting trial data.
Analyze the Data
In our data pipeline (which is not part of this Streamlit tutorial), we implemented the PageRank centrality algorithm on the knowledge graph to examine the most influential clinical trial research locations. The algorithm was configured to use the incoming relationships each location has from recruiting ALS clinical trials. The PageRank score was calculated and then stored on each location’s node in the graph to enable our clinical trial data analytics capabilities.
The resulting score could lead a team to identify key locations for new neurodegenerative disease trials or be used in additional downstream machine learning efforts.
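To give a feel for the computation (this is a toy sketch, not our pipeline code, and the trial and site names are made up), PageRank on a tiny graph where recruiting trials point at the locations hosting them can be written as a short power iteration:

```python
# Toy PageRank: locations with more incoming links from trials score higher.
def pagerank(nodes, edges, d=0.85, iters=50):
    out = {n: [v for (u, v) in edges if u == n] for n in nodes}
    pr = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        # rank mass from nodes with no outgoing edges is spread evenly
        dangling = sum(pr[n] for n in nodes if not out[n]) / len(nodes)
        pr = {
            n: (1 - d) / len(nodes)
               + d * (dangling + sum(pr[u] / len(out[u])
                                     for u in nodes if n in out[u]))
            for n in nodes
        }
    return pr

# Hypothetical data: three ALS trials recruiting at two locations.
nodes = ['trial_a', 'trial_b', 'trial_c', 'site_boston', 'site_miami']
edges = [('trial_a', 'site_boston'), ('trial_b', 'site_boston'),
         ('trial_c', 'site_miami')]
scores = pagerank(nodes, edges)
# site_boston receives more incoming links than site_miami, so it ranks higher
```

In practice we ran this at scale inside the graph database rather than in application code, but the ranking intuition is the same.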
In this Streamlit tutorial, we explored how to rapidly provide value from clinical trial data to various audiences by creating a standalone interactive application using only Python code. In many similar projects, acquiring, exploring, and ingesting data, identifying insights, and providing interactive access to users requires significant time and effort.
This Streamlit tutorial highlights how graph databases such as Neo4j and tools like Streamlit can shorten the time it takes for subject matter experts, data scientists, and other stakeholders to surface meaningful insights.
As you can see from this example, clinical trial data is rich with features that can support many use cases. If you’re currently running a clinical trial and are interested in how to improve your clinical trial data quality or using graph solutions to accelerate your initiatives, contact us today.
Graphable delivers insightful graph database (e.g. Neo4j consulting) / machine learning (ML) / natural language processing (NLP) projects as well as graph and Domo consulting for BI/analytics, with measurable impact. We are known for operating ethically, communicating well, and delivering on time. With hundreds of successful projects across most industries, we thrive in the most challenging data integration and data science contexts, driving analytics success.
Want to find out more about our Hume consulting on the Hume knowledge graph / insights platform? As the Americas principal reseller, we are happy to connect and tell you more. Book a demo by contacting us here.
Check out our article, What is a Graph Database? for more info.