TxGemma is a collection of compact, efficient open models designed to support therapeutic research. Built on the Gemma 2 foundation, it is available in three sizes (2B, 9B, and 27B) and has been fine-tuned on specialized datasets focused on medicine and drug development.
These models are crafted to handle information about small molecules, proteins, nucleic acids, diseases, and cell lines. They are especially useful for predicting molecular properties and helping scientists explore new treatment possibilities. While the smaller version is ideal for direct predictions, the larger ones can also handle more flexible, back-and-forth interactions — like explaining the reasons behind certain predictions.
Key Highlights
- Wide Use Cases: Performs well across many types of drug-related tasks.
- Efficient: Works effectively even with less data.
- Interactive Versions: Larger versions support more flexible responses and explanations.
- Custom Ready: Can be adapted for deeper, specific research needs.
Model Resource
Hugging Face
Link: https://huggingface.co/google/txgemma-2b-predict
TxGemma-2B – Recommended GPU Configuration
| Component | Recommended Minimum |
|---|---|
| GPU | 1 × NVIDIA A10 / A100 / RTX A6000 / L4 / T4 / V100 |
| VRAM (GPU memory) | 16 GB or more |
| CPU | 8 vCPUs |
| RAM | 32 GB |
| Storage (Disk) | 80–120 GB SSD |
| OS | Ubuntu 20.04 or 22.04 |
| Framework | Transformers (🤗), Accelerate |
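The 16 GB VRAM figure follows from a common rule of thumb: weight memory in GB is roughly the parameter count in billions times the bytes per parameter, plus headroom for activations and the KV cache. A quick sketch (the helper is ours; the ~2.6B parameter count for the 2B-class Gemma 2 model is approximate):

```python
def estimated_weight_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Approximate GPU memory (GB) needed for model weights alone.

    bytes_per_param: 2 for fp16/bf16, 4 for fp32. Activations and the
    KV cache add further overhead on top of this figure.
    """
    # billions of parameters × bytes per parameter = gigabytes of weights
    return params_billion * bytes_per_param

# ~2.6B parameters in bf16: about 5.2 GB of weights, leaving ample
# headroom for inference overhead on a 16 GB card.
print(estimated_weight_gb(2.6))  # → 5.2
```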
Step-by-Step Process to Install Google’s TxGemma Locally
For the purpose of this tutorial, we will use a GPU-powered Virtual Machine offered by NodeShift; however, you can replicate the same steps with any other cloud provider of your choice. NodeShift provides the most affordable Virtual Machines at a scale that meets GDPR, SOC2, and ISO27001 requirements.
Step 1: Access model from Hugging Face
Link: https://huggingface.co/google/txgemma-2b-predict
You need to acknowledge the license to access this model. Click on the “Acknowledge License” button and wait for approval from Hugging Face and Google to gain access and use the model.
Step 2: Sign Up and Set Up a NodeShift Cloud Account
Visit the NodeShift Platform and create an account. Once you’ve signed up, log into your account.
Follow the account setup process and provide the necessary details and information.
Step 3: Create a GPU Node (Virtual Machine)
GPU Nodes are NodeShift’s GPU Virtual Machines, on-demand resources equipped with diverse GPUs ranging from H100s to A100s. These GPU-powered VMs provide enhanced environmental control, allowing configuration adjustments for GPUs, CPUs, RAM, and Storage based on specific requirements.
Navigate to the menu on the left side, select the GPU Nodes option, click the Create GPU Node button on the Dashboard, and deploy your first Virtual Machine.
Step 4: Select a Model, Region, and Storage
In the “GPU Nodes” tab, select a GPU Model and Storage according to your needs and the geographical region where you want to launch your model.
We will use 1 x RTX A6000 GPU for this tutorial to achieve the fastest performance. However, you can choose a more affordable GPU with less VRAM if that better suits your requirements.
Step 5: Select Authentication Method
There are two authentication methods available: Password and SSH Key. SSH keys are a more secure option. To create them, please refer to our official documentation.
Step 6: Choose an Image
Next, you will need to choose an image for your Virtual Machine. We will deploy Google’s TxGemma on a Jupyter Virtual Machine. This open-source platform will allow you to install and run Google’s TxGemma on your GPU node. By running the model in a Jupyter Notebook, we avoid using the terminal, simplifying the process and reducing setup time. This allows you to configure the model in just a few steps and minutes.
Note: NodeShift provides multiple image template options, such as TensorFlow, PyTorch, NVIDIA CUDA, Deepo, Whisper ASR Webservice, and Jupyter Notebook. With these options, you don’t need to install additional libraries or packages to run Jupyter Notebook. You can start Jupyter Notebook in just a few simple clicks.
After choosing the image, click the ‘Create’ button, and your Virtual Machine will be deployed.
Step 7: Virtual Machine Successfully Deployed
You will get visual confirmation that your node is up and running.
Step 8: Connect to Jupyter Notebook
Once your GPU VM deployment is successfully created and has reached the ‘RUNNING’ status, you can navigate to the page of your GPU Deployment Instance. Then, click the ‘Connect’ Button in the top right corner.
After clicking the ‘Connect’ button, you can view the Jupyter Notebook.
Now open a Python 3 (ipykernel) Notebook.
Step 9: Authenticate Hugging Face and Install HuggingFace Hub
Run the following commands to authenticate with Hugging Face and install the huggingface_hub library:
!pip install huggingface_hub --upgrade
from huggingface_hub import login
login()
Paste your Hugging Face token when prompted.
Step 10: Install Required Libraries
Run the following command to install the required libraries:
!pip install transformers accelerate huggingface_hub
Step 11: Download Prompt Templates
Run the following code to download prompt templates:
from huggingface_hub import hf_hub_download
import json
tdc_prompts_filepath = hf_hub_download(
    repo_id="google/txgemma-2b-predict",
    filename="tdc_prompts.json",
)

with open(tdc_prompts_filepath, "r") as f:
    tdc_prompts_json = json.load(f)
# Example Task
task_name = "BBB_Martins"
input_type = "{Drug SMILES}"
drug_smiles = "CN1C(=O)CN=C(C2=CCCCC2)c2cc(Cl)ccc21"
TDC_PROMPT = tdc_prompts_json[task_name].replace(input_type, drug_smiles)
print(TDC_PROMPT)
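`tdc_prompts.json` maps each TDC task name to a prompt template containing an input placeholder; the `.replace()` call above simply substitutes your input into it. The same logic can be sketched with a stand-in template (the dict and helper below are illustrative, not the real file contents):

```python
# Illustrative stand-in; the real templates come from tdc_prompts.json.
example_prompts = {
    "BBB_Martins": (
        "Question: Given a drug SMILES string, predict whether it crosses the BBB.\n"
        "Drug SMILES: {Drug SMILES}\n"
        "Answer:"
    ),
}

def build_tdc_prompt(prompts, task_name, value, placeholder="{Drug SMILES}"):
    """Substitute the input value into the task's prompt template."""
    template = prompts[task_name]
    if placeholder not in template:
        raise ValueError(f"Template for {task_name!r} lacks placeholder {placeholder!r}")
    return template.replace(placeholder, value)

prompt = build_tdc_prompt(example_prompts, "BBB_Martins",
                          "CN1C(=O)CN=C(C2=CCCCC2)c2cc(Cl)ccc21")
print(prompt)
```

With the real file loaded, `sorted(tdc_prompts_json.keys())` lists every available TDC task name.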
Step 12: Execute Python Script to Download and Run the Model on the Prompt
This Script:
- Loads the tokenizer and model from Hugging Face
- Sends the prompt to the model
- Returns the prediction
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Prompt you got from the json file
prompt = """Instructions: Answer the following question about drug properties.
Context: As a membrane separating circulating blood and brain extracellular fluid, the blood-brain barrier (BBB) is the protection layer that blocks most foreign drugs. Thus the ability of a drug to penetrate the barrier to deliver to the site of action forms a crucial challenge in development of drugs for central nervous system.
Question: Given a drug SMILES string, predict whether it
(A) does not cross the BBB (B) crosses the BBB
Drug SMILES: CN1C(=O)CN=C(C2=CCCCC2)c2cc(Cl)ccc21
Answer:"""
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("google/txgemma-2b-predict")
model = AutoModelForCausalLM.from_pretrained(
    "google/txgemma-2b-predict",
    device_map="auto",
)
# Tokenize input and move it to the model's device
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Run inference
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
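Because causal language models echo the prompt, the decoded output contains the entire prompt followed by the generated answer; in practice you usually want only the text after “Answer:”. A small helper along these lines (the function is ours, not part of Transformers):

```python
def extract_answer(decoded_output: str, marker: str = "Answer:") -> str:
    """Return the text after the last occurrence of `marker`, stripped."""
    idx = decoded_output.rfind(marker)
    if idx == -1:
        return decoded_output.strip()  # marker absent: fall back to the full text
    return decoded_output[idx + len(marker):].strip()

print(extract_answer("Question: ...\nAnswer: (B)"))  # → (B)
```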
Step 13: Run a New Prompt (TDC Task: HIV)
Run the following new prompt for the HIV task:
task_name = "HIV"
input_type = "{Drug SMILES}"
drug_smiles = "COC1=CC=CC=C1C(=O)NC2=CC=CC=C2"
# Build prompt
TDC_PROMPT = f"""Instructions: Predict whether the compound is active or inactive against HIV replication.
Context: HIV (Human Immunodeficiency Virus) is the causative agent of AIDS, and drug discovery efforts are focused on identifying small molecules that can inhibit its replication. The ability to accurately predict activity based on SMILES strings is crucial for efficient screening.
Question: Given a drug SMILES string, classify it as:
(A) Inactive
(B) Active
Drug SMILES: {drug_smiles}
Answer:"""
print(TDC_PROMPT)
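Since the f-string makes prompt construction mechanical, screening several candidates is just a loop over SMILES strings. A minimal sketch (the helper and the shortened template are ours):

```python
# Shortened illustrative template; the full Context block from the tutorial
# can be included the same way.
HIV_TEMPLATE = """Instructions: Predict whether the compound is active or inactive against HIV replication.
Question: Given a drug SMILES string, classify it as:
(A) Inactive
(B) Active
Drug SMILES: {smiles}
Answer:"""

def build_hiv_prompts(smiles_list):
    """Build one prompt per SMILES string, ready to feed to the model in turn."""
    return [HIV_TEMPLATE.format(smiles=s) for s in smiles_list]

prompts = build_hiv_prompts([
    "COC1=CC=CC=C1C(=O)NC2=CC=CC=C2",
    "CCO",
])
print(len(prompts))  # → 2
```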
Step 14: Run Full Python Code (HIV Replication Task)
Run the following full Python code for the HIV replication task:
from transformers import AutoTokenizer, AutoModelForCausalLM
prompt = """Instructions: Predict whether the compound is active or inactive against HIV replication.
Context: HIV (Human Immunodeficiency Virus) is the causative agent of AIDS, and drug discovery efforts are focused on identifying small molecules that can inhibit its replication. The ability to accurately predict activity based on SMILES strings is crucial for efficient screening.
Question: Given a drug SMILES string, classify it as:
(A) Inactive
(B) Active
Drug SMILES: COC1=CC=CC=C1C(=O)NC2=CC=CC=C2
Answer:"""
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("google/txgemma-2b-predict")
model = AutoModelForCausalLM.from_pretrained("google/txgemma-2b-predict", device_map="auto")
# Tokenize and run inference on the model's device
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
You’ll get a response like:
Answer: (A)
or
Answer: (B)
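The letter in the completion maps directly to the options in the prompt, so a small parser can turn raw output into a label for logging or downstream filtering. A sketch (the function is ours; the labels mirror the prompt's options):

```python
import re

def parse_choice(prediction: str):
    """Map a completion like '(A)' or '(B)' to its label; None if unrecognized."""
    labels = {"A": "Inactive", "B": "Active"}
    match = re.search(r"\(([AB])\)", prediction)
    return labels[match.group(1)] if match else None

print(parse_choice("(B)"))          # → Active
print(parse_choice("Answer: (A)"))  # → Inactive
```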
Step 15: Install Gradio
Run the following command to install Gradio:
!pip install gradio transformers accelerate
Step 16: Run Gradio Code
Run the following gradio code:
import gradio as gr
from transformers import AutoTokenizer, AutoModelForCausalLM
# Load once
tokenizer = AutoTokenizer.from_pretrained("google/txgemma-2b-predict")
model = AutoModelForCausalLM.from_pretrained("google/txgemma-2b-predict", device_map="auto")
# Format the prompt and run inference
def predict_hiv_activity(smiles: str):
    prompt = f"""Instructions: Predict whether the compound is active or inactive against HIV replication.
Context: HIV (Human Immunodeficiency Virus) is the causative agent of AIDS, and drug discovery efforts are focused on identifying small molecules that can inhibit its replication. The ability to accurately predict activity based on SMILES strings is crucial for efficient screening.
Question: Given a drug SMILES string, classify it as:
(A) Inactive
(B) Active
Drug SMILES: {smiles}
Answer:"""
    # Tokenize and run inference
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=8)
    decoded_output = tokenizer.decode(outputs[0], skip_special_tokens=True)
    # Extract the answer from the generated text
    answer_start = decoded_output.find("Answer:")
    prediction = decoded_output[answer_start + len("Answer:"):].strip()
    return prediction
# Gradio UI
demo = gr.Interface(
    fn=predict_hiv_activity,
    inputs=gr.Textbox(label="Drug SMILES String", placeholder="Enter SMILES here..."),
    outputs=gr.Textbox(label="Prediction"),
    title="TxGemma-2B HIV Activity Predictor",
    description="Paste a SMILES string and get a prediction: (A) Inactive or (B) Active",
)
demo.launch(share=True)  # share=True generates the temporary public gradio.live URL
Step 17: Access the Gradio Web App
Access the Gradio web app via the URLs printed in the notebook output:
Running on local URL: http://127.0.0.1:7860
Running on public URL: https://faf1e52f64e9618aff.gradio.live
The public gradio.live URL is temporary and will differ on each launch.
Step 18: Paste a SMILES String
In the left box labeled “Drug SMILES String”, enter something like:
COC1=CC=CC=C1C(=O)NC2=CC=CC=C2
This one is from the HIV task and you’ve already seen it in action.
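If you want a quick sanity check before submitting, a naive validator can catch obvious typos such as unbalanced parentheses or brackets. This is no substitute for a real SMILES parser (RDKit's `Chem.MolFromSmiles`, for example), but it needs no extra dependencies:

```python
def looks_like_smiles(s: str) -> bool:
    """Very rough sanity check: non-empty, no whitespace, balanced () and []."""
    if not s or any(c.isspace() for c in s):
        return False
    for open_ch, close_ch in ("()", "[]"):
        depth = 0
        for c in s:
            if c == open_ch:
                depth += 1
            elif c == close_ch:
                depth -= 1
                if depth < 0:  # closing bracket before its opener
                    return False
        if depth != 0:  # unclosed bracket
            return False
    return True

print(looks_like_smiles("COC1=CC=CC=C1C(=O)NC2=CC=CC=C2"))  # → True
print(looks_like_smiles("C(C"))                              # → False
```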
Click “Submit”
Once you hit Submit, the model will:
- Build the prompt using your SMILES
- Run it through TxGemma-2B
- Show the output prediction in the right box (Prediction)
You’ll see something like:
(B) Active
or
(A) Inactive
Conclusion
TxGemma-2B offers a powerful and efficient way to support early-stage therapeutic research. With its ability to understand molecular structures and predict their properties, it serves as a practical tool for scientists and researchers working on drug discovery. By combining it with a user-friendly interface like Gradio and running it on a GPU-powered environment, you can explore complex biomedical questions in just a few clicks.
Whether you’re screening compounds, analyzing properties, or building tools to support scientific workflows, TxGemma-2B makes it easier to move from questions to results — all in a matter of minutes.