AnnoPred#

In this notebook, we will use AnnoPred to calculate the PRS.

Note: AnnoPred requires Python 2 (the examples in this notebook were run under Python 2.7).

Download the Repository#

Clone the AnnoPred repository using Git:

git clone https://github.com/yiminghu/AnnoPred.git

Then copy the repository files into the current working directory.

Annotation Data#

AnnoPred requires functional annotation information for prediction and uses two GWAS datasets for related diseases.

To download the annotation data, either fetch it directly:

cd AnnoPred
wget http://genocanyon.med.yale.edu/AnnoPredFiles/AnnoPred_ref.tar.gz

OR use the following Google Drive link, which contains two files:

  • AnnoPred_ref.tar.gz

  • AnnoPred_ref1.0.tar.gz

Google Drive Link

Extract the downloaded file (adjust the file name to match the archive you downloaded):

tar -zxvf AnnoPred_ref1.0.tar.gz

This step will generate a folder named ref containing functional annotations.
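A quick sanity check that extraction produced the expected folder; this assumes the archive was extracted inside the AnnoPred directory, as in the commands above:

import os

# Verify that the functional annotation folder exists after extraction.
print("ref folder present: {}".format(os.path.isdir(os.path.join("AnnoPred", "ref"))))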

LDSC Installation#

AnnoPred also requires LDSC. Download it using:

cd AnnoPred
git clone https://github.com/bulik/ldsc

After cloning, you should have the following directory structure:

annopred     AnnoPred_ref1.0.tar.gz  ldsc     pipeline.sh  ref           split_cv.R
AnnoPred.py  doc                     LICENSE  README.md    results_cv.R  test_data

Once these steps are complete, copy all the files from the AnnoPred folder to the working directory.

cd AnnoPred/

cp * ../

Open LDSC.config and set the path to your LDSC clone in it:

cat LDSC.config
# LDSCPath /data/ascher01/uqmmune1/BenchmarkingPGSTools/ldsc
OR
# LDSCPath workingdirectory/ldsc
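If you prefer to set this programmatically, here is a minimal sketch, assuming LDSC.config holds a single LDSCPath entry as shown above:

import os

# Hypothetical helper: point AnnoPred's LDSC.config at the cloned ldsc directory.
# Assumes the file expects a single "LDSCPath <path>" line, as in the example above.
ldsc_path = os.path.join(os.getcwd(), "ldsc")
with open("LDSC.config", "w") as f:
    f.write("LDSCPath {}\n".format(ldsc_path))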

AnnoPred Hyperparameters#

You can pass a custom LD radius. To speed up the process, you can also supply per-SNP heritability (h2); if it is omitted, AnnoPred estimates it itself using LDSC.
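For example, the default LD radius (the number of SNPs shared between the summary statistics and the reference genotypes, divided by 3000; see --ld_radius below) can be reproduced as a quick sanity check. The SNP count here matches the GWAS file processed later in this notebook:

# Illustrative sketch of AnnoPred's default LD radius.
n_common_snps = 499617        # e.g., the SNP count of our processed GWAS file
ld_radius = n_common_snps // 3000
print("Default LD radius: {}".format(ld_radius))  # 166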

Command-Line Options#

| Option | Description |
| --- | --- |
| -h, --help | Show this help message and exit. |
| --sumstats SUMSTATS | GWAS summary stats. |
| --ref_gt REF_GT | Reference genotype, plink bed format. |
| --val_gt VAL_GT | Validation genotype, plink bed format. |
| --N_sample N_SAMPLE | Sample size of GWAS training, for LDSC. |
| --annotation_flag ANNOTATION_FLAG | Annotation flag: Tier0, Tier1, Tier2, and Tier3. |
| --P P | Tuning parameter in (0,1], the proportion of causal SNPs. |
| --local_ld_prefix LOCAL_LD_PREFIX | A local LD file name prefix; will be created if not present. |
| --ld_radius LD_RADIUS | If not provided, will use the number of SNPs in common divided by 3000. |
| --user_h2 USER_H2 | Path to per-SNP heritability. If not provided, will use LDSC with 53 baseline annotations, GenoCanyon, and GenoSkyline. |
| --temp_dir TEMP_DIR | Directory to output all temporary files. If not specified, will use the current directory. |
| --num_iter NUM_ITER | Number of iterations for MCMC, default to 60. |
| --coord_out COORD_OUT | Output H5 file for coord_genotypes. |
| --out OUT | Output filename prefix for AnnoPred. |

GWAS file processing for AnnoPred#

AnnoPred automatically converts the OR to log(OR) (i.e., betas) for continuous phenotypes, so we save the file containing OR values only.

import os
import pandas as pd
import numpy as np
import sys

#filedirec = sys.argv[1]

filedirec = "SampleData1"
#filedirec = "asthma_19"
#filedirec = "migraine_0"

def check_phenotype_is_binary_or_continous(filedirec):
    # Read the processed quality controlled file for a phenotype
    df = pd.read_csv(filedirec+os.sep+filedirec+'_QC.fam',sep="\s+",header=None)
    column_values = df[5].unique()
 
    if len(set(column_values)) == 2:
        return "Binary"
    else:
        return "Continous"



# Read the GWAS file.
GWAS = filedirec + os.sep + filedirec+".gz"
df = pd.read_csv(GWAS,compression= "gzip",sep="\s+")

Numberofsamples = df["N"].mean()


 
if "BETA" in df.columns.to_list():
    # For Binary Phenotypes.
    df["OR"] = np.exp(df["BETA"])
    df = df[['CHR', 'BP', 'SNP', 'A1', 'A2', 'N', 'SE', 'P', 'OR', 'INFO', 'MAF']]

else:
    # For Binary Phenotype.
    df = df[['CHR', 'BP', 'SNP', 'A1', 'A2', 'N', 'SE', 'P', 'OR', 'INFO', 'MAF']]

column_mapping = {"CHR": "hg19chrc", "SNP": "snpid", "A1": "a1", "A2": "a2", "BP": "bp", "OR": "or", "P": "p"}
new_columns = ["hg19chrc", "snpid", "a1", "a2", "bp", "or", "p"]
transformed_df = df.rename(columns=column_mapping)[new_columns]
transformed_df['hg19chrc'] = transformed_df['hg19chrc'].apply(lambda x: "chr" + str(x))
print(transformed_df.head())
 

transformed_df.to_csv(filedirec + os.sep +filedirec+".AnnoPred",sep="\t",index=False)
print(transformed_df.head())
print("Length of DataFrame!",len(transformed_df))
  hg19chrc       snpid a1 a2      bp        or         p
0     chr1   rs3131962  A  G  756604  0.997887  0.483171
1     chr1  rs12562034  A  G  768448  1.000687  0.834808
2     chr1   rs4040617  G  A  779322  0.997604  0.428970
3     chr1  rs79373928  G  T  801536  1.002036  0.808999
4     chr1  rs11240779  G  A  808631  1.001308  0.590265
  hg19chrc       snpid a1 a2      bp        or         p
0     chr1   rs3131962  A  G  756604  0.997887  0.483171
1     chr1  rs12562034  A  G  768448  1.000687  0.834808
2     chr1   rs4040617  G  A  779322  0.997604  0.428970
3     chr1  rs79373928  G  T  801536  1.002036  0.808999
4     chr1  rs11240779  G  A  808631  1.001308  0.590265
('Length of DataFrame!', 499617)

Define Hyperparameters#

Define the hyperparameters to be optimized and set their initial values (the actual definitions appear in the setup code below).

Extract Valid SNPs from Clumped File#

On Linux, the awk command is sufficient. On Windows, gawk is required; you can download it from the GnuWin32 project (https://sourceforge.net/projects/gnuwin32/) and place it in the working directory. A pandas alternative is sketched below.
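If neither awk nor gawk is available, the same extraction (the SNP IDs in the third column of the .clumped file, skipping the header) can be sketched in pandas; the file paths here are placeholders:

import pandas as pd

# Sketch of the awk step: pull the SNP column from plink's .clumped output
# and write the IDs one per line as the valid-SNP list.
clumped = pd.read_csv("train_data.clumped", sep="\s+")   # placeholder path
clumped = clumped.dropna(subset=["SNP"])                 # drop trailing blank rows, if any
clumped["SNP"].to_csv("train_data.valid.snp", index=False, header=False)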

Execution Path#

At this stage, we have the genotype training data newtrainfilename = "train_data.QC" and genotype test data newtestfilename = "test_data.QC".

We modified the following variables:

  1. filedirec = "SampleData1" or filedirec = sys.argv[1]

  2. foldnumber = "0" or foldnumber = sys.argv[2] for HPC.

Only these two variables need to be modified to run the code for a specific dataset and a specific fold. Although the code can be executed separately for each fold and for each dataset on HPC, we recommend executing it for multiple diseases and one fold at a time. A sketch of the argument handling follows.
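For HPC execution, a minimal sketch of the argument handling, assuming the script is invoked as python AnnoPredCode.py &lt;dataset&gt; &lt;fold&gt;:

import sys

# Hypothetical wrapper: take the dataset directory and fold number from the
# command line when running on HPC; fall back to the defaults otherwise.
if len(sys.argv) >= 3:
    filedirec = sys.argv[1]   # e.g., "SampleData1"
    foldnumber = sys.argv[2]  # e.g., "0"
else:
    filedirec = "SampleData1"
    foldnumber = "0"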

P-values#

PRS calculation relies on P-values. SNPs with low P-values, indicating a high degree of association with a specific trait, are considered for calculation.

You can modify the code below to consider a specific set of P-values and save the file in the same format.

We considered the following parameters:

  • Minimum P-value: 1e-10

  • Maximum P-value: 1.0

  • Minimum exponent: 10 (the minimum P-value expressed as a negative power of ten)

  • Number of intervals: 100 (the number of P-value thresholds to generate)

The code generates an array of logarithmically spaced P-values:

import numpy as np
import os

minimumpvalue = 10  # Minimum exponent for P-values
numberofintervals = 100  # Number of intervals to be considered

allpvalues = np.logspace(-minimumpvalue, 0, numberofintervals, endpoint=True)  # Generating an array of logarithmically spaced P-values

print("Minimum P-value:", allpvalues[0])
print("Maximum P-value:", allpvalues[-1])

count = 1
# 'folddirec' (the fold-specific directory) is defined in the setup code below.
with open(os.path.join(folddirec, 'range_list'), 'w') as file:
    for value in allpvalues:
        file.write('pv_{} 0 {}\n'.format(value, value))  # Writing range information to the 'range_list' file
        count += 1

pvaluefile = os.path.join(folddirec, 'range_list')

In this code:

  • minimumpvalue defines the minimum exponent for P-values.

  • numberofintervals specifies how many intervals to consider.

  • allpvalues generates an array of P-values spaced logarithmically.

  • The script writes these P-values to a file named range_list in the specified directory.
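Each line of range_list follows plink's --q-score-range format: a label, a lower bound, and an upper bound. As a quick check, the first entries can be read back like this (a sketch; folddirec is the fold-specific directory defined below):

import os

# Peek at the generated range file; each line is "<label> <lower> <upper>".
with open(os.path.join(folddirec, 'range_list')) as f:
    for line in list(f)[:3]:
        print(line.strip())
# The first line should look like: pv_1e-10 0 1e-10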

import os
import subprocess
import sys
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score, confusion_matrix
from statsmodels.stats.contingency_tables import mcnemar

def create_directory(directory):
    """Function to create a directory if it doesn't exist."""
    if not os.path.exists(directory):  # Checking if the directory doesn't exist
        os.makedirs(directory)  # Creating the directory if it doesn't exist
    return directory  # Returning the created or existing directory

 
#foldnumber = sys.argv[1]
foldnumber = "0"  # Setting 'foldnumber' to "0"

folddirec = filedirec + os.sep + "Fold_" + foldnumber  # Creating a directory path for the specific fold
trainfilename = "train_data"  # Setting the name of the training data file
newtrainfilename = "train_data.QC"  # Setting the name of the new training data file

testfilename = "test_data"  # Setting the name of the test data file
newtestfilename = "test_data.QC"  # Setting the name of the new test data file

# Number of PCA to be included as a covariate.
numberofpca = ["6"]  # Setting the number of PCA components to be included

# Clumping parameters.
clump_p1 = [1]  # List containing clump parameter 'p1'
clump_r2 = [0.1]  # List containing clump parameter 'r2'
clump_kb = [200]  # List containing clump parameter 'kb'

# Pruning parameters.
p_window_size = [200]  # List containing pruning parameter 'window_size'
p_slide_size = [50]  # List containing pruning parameter 'slide_size'
p_LD_threshold = [0.25]  # List containing pruning parameter 'LD_threshold'

# Kindly note that the number of p-values to be considered varies, and the actual p-value depends on the dataset as well.
# We will specify the range list here.
#folddirec = "/path/to/your/folder"  # Replace with your actual folder path
from decimal import Decimal, getcontext

# Set precision to a high value (e.g., 50)
getcontext().prec = 50
minimumpvalue = 10  # Minimum p-value in exponent
numberofintervals = 20  # Number of intervals to be considered
allpvalues = np.logspace(-minimumpvalue, 0, numberofintervals, endpoint=True)  # Generating an array of logarithmically spaced p-values
count = 1
with open(os.path.join(folddirec, 'range_list'), 'w') as file:
    for value in allpvalues:
        
        file.write('pv_{} 0 {}\n'.format(value, value))  # Writing range information to the 'range_list' file
        count = count + 1

pvaluefile = folddirec + os.sep + 'range_list'

# Initializing an empty DataFrame with specified column names
prs_result = pd.DataFrame(columns=["clump_p1", "clump_r2", "clump_kb", "p_window_size", "p_slide_size", "p_LD_threshold",
                                   "pvalue","datafile", "numberofpca","Train_pure_prs", "Train_null_model", "Train_best_model",
                                   "Test_pure_prs", "Test_null_model", "Test_best_model"])

Define Helper Functions#

  1. Perform Clumping and Pruning

  2. Calculate PCA Using Plink

  3. Fit Binary Phenotype and Save Results

  4. Fit Continuous Phenotype and Save Results

import os
import subprocess
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import explained_variance_score
from sklearn.preprocessing import MinMaxScaler

def perform_clumping_and_pruning_on_individual_data(traindirec, newtrainfilename,numberofpca, p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile):
    
    command = [
    "./plink",
    "--bfile", traindirec+os.sep+newtrainfilename,
    "--indep-pairwise", p1_val, p2_val, p3_val,
    "--out", traindirec+os.sep+trainfilename
    ]
    subprocess.call(command)
    # First perform pruning, then clump the pruned SNPs.

    command = [
    "./plink",
    "--bfile", traindirec+os.sep+newtrainfilename,
    "--clump-p1", c1_val,
    "--extract", traindirec+os.sep+trainfilename+".prune.in",
    "--clump-r2", c2_val,
    "--clump-kb", c3_val,
    "--clump", filedirec+os.sep+filedirec+".txt",
    "--clump-snp-field", "SNP",
    "--clump-field", "P",
    "--out", traindirec+os.sep+trainfilename
    ]    
    subprocess.call(command)

    # Extract the valid SNPs from the clumped file.
    # On Windows, gawk is required; on Linux, the awk command is sufficient.
    ### gawk is available from https://sourceforge.net/projects/gnuwin32/
    ### Get it and place it in the same directory.
    #os.system("gawk "+"\""+"NR!=1{print $3}"+"\"  "+ traindirec+os.sep+trainfilename+".clumped >  "+traindirec+os.sep+trainfilename+".valid.snp")
    #print("gawk "+"\""+"NR!=1{print $3}"+"\"  "+ traindirec+os.sep+trainfilename+".clumped >  "+traindirec+os.sep+trainfilename+".valid.snp")

    # Linux:
    command = "awk 'NR!=1{{print $3}}' {}{}{}.clumped > {}{}{}.valid.snp".format(
        traindirec, os.sep, trainfilename, 
        traindirec, os.sep, trainfilename
    )
 
    os.system(command)
    
    
    command = [
    "./plink",
    "--make-bed",
    "--bfile", traindirec+os.sep+newtrainfilename,
    "--indep-pairwise", p1_val, p2_val, p3_val,
    "--extract", traindirec+os.sep+trainfilename+".valid.snp",
    "--out", traindirec+os.sep+newtrainfilename+".clumped.pruned"
    ]
    subprocess.call(command)
    
    command = [
    "./plink",
    "--make-bed",
    "--bfile", traindirec+os.sep+testfilename,
    "--indep-pairwise", p1_val, p2_val, p3_val,
    "--extract", traindirec+os.sep+trainfilename+".valid.snp",
    "--out", traindirec+os.sep+testfilename+".clumped.pruned"
    ]
    subprocess.call(command)    
    
    
 
def calculate_pca_for_traindata_testdata_for_clumped_pruned_snps(traindirec, newtrainfilename,p):
    
    # Calculate PCs for both the training and test data using the same set of SNPs.
    # PCs are calculated after clumping and pruning.
    command = [
        "./plink",
        "--bfile", folddirec+os.sep+testfilename+".clumped.pruned",
        # Select the final variants after clumping and pruning.
        "--extract", traindirec+os.sep+trainfilename+".valid.snp",
        "--pca", p,
        "--out", folddirec+os.sep+testfilename
    ]
    subprocess.call(command)


    command = [
    "./plink",
        "--bfile", traindirec+os.sep+newtrainfilename+".clumped.pruned",
        # Select the final variants after clumping and pruning.        
        "--extract", traindirec+os.sep+trainfilename+".valid.snp",
        "--pca", p,
        "--out", traindirec+os.sep+trainfilename
    ]
    subprocess.call(command)

# This function fits the binary model on the PRS.
def fit_binary_phenotype_on_PRS(traindirec, newtrainfilename,p, t,pp,datafile, p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile):
    threshold_values = allpvalues

    # Merge the covariates, PCA and phenotypes.
    tempphenotype_train = pd.read_table(os.path.join(traindirec, newtrainfilename + ".clumped.pruned.fam"), sep="\s+", header=None)
    phenotype_train = pd.DataFrame()
    phenotype_train["Phenotype"] = tempphenotype_train[5].values
    pcs_train = pd.read_table(os.path.join(traindirec, trainfilename + ".eigenvec"), sep="\s+", header=None, names=["FID", "IID"] + ["PC" + str(i) for i in range(1, int(p) + 1)])
    covariate_train = pd.read_table(os.path.join(traindirec, trainfilename + ".cov"), sep="\s+")
    covariate_train.fillna(0, inplace=True)
    covariate_train = covariate_train[covariate_train["FID"].isin(pcs_train["FID"].values) & covariate_train["IID"].isin(pcs_train["IID"].values)]
    covariate_train['FID'] = covariate_train['FID'].astype(str)
    pcs_train['FID'] = pcs_train['FID'].astype(str)
    covariate_train['IID'] = covariate_train['IID'].astype(str)
    pcs_train['IID'] = pcs_train['IID'].astype(str)
    covandpcs_train = pd.merge(covariate_train, pcs_train, on=["FID", "IID"])
    covandpcs_train.fillna(0, inplace=True)

    # Scale the covariates
    scaler = MinMaxScaler()
    normalized_values_train = scaler.fit_transform(covandpcs_train.iloc[:, 2:])

    tempphenotype_test = pd.read_table(os.path.join(traindirec, testfilename + ".clumped.pruned.fam"), sep="\s+", header=None)
    phenotype_test = pd.DataFrame()
    phenotype_test["Phenotype"] = tempphenotype_test[5].values
    pcs_test = pd.read_table(os.path.join(traindirec, testfilename + ".eigenvec"), sep="\s+", header=None, names=["FID", "IID"] + ["PC" + str(i) for i in range(1, int(p) + 1)])
    covariate_test = pd.read_table(os.path.join(traindirec, testfilename + ".cov"), sep="\s+")
    covariate_test.fillna(0, inplace=True)
    covariate_test = covariate_test[covariate_test["FID"].isin(pcs_test["FID"].values) & covariate_test["IID"].isin(pcs_test["IID"].values)]
    covariate_test['FID'] = covariate_test['FID'].astype(str)
    pcs_test['FID'] = pcs_test['FID'].astype(str)
    covariate_test['IID'] = covariate_test['IID'].astype(str)
    pcs_test['IID'] = pcs_test['IID'].astype(str)
    covandpcs_test = pd.merge(covariate_test, pcs_test, on=["FID", "IID"])
    covandpcs_test.fillna(0, inplace=True)
    normalized_values_test = scaler.transform(covandpcs_test.iloc[:, 2:])
    

    tempalphas = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
    l1weights = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]

    tempalphas = [0.1]
    l1weights = [0.1]

    phenotype_train["Phenotype"] = phenotype_train["Phenotype"].replace({1: 0, 2: 1})
    phenotype_test["Phenotype"] = phenotype_test["Phenotype"].replace({1: 0, 2: 1})

    for tempalpha in tempalphas:
        for l1weight in l1weights:
            try:
                null_model = sm.Logit(phenotype_train["Phenotype"], sm.add_constant(covandpcs_train.iloc[:, 2:])).fit_regularized(alpha=tempalpha, L1_wt=l1weight)
            except:
                # Flag a failed null-model fit and move on to the next combination.
                print "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
                continue

            train_null_predicted = null_model.predict(sm.add_constant(covandpcs_train.iloc[:, 2:]))

            test_null_predicted = null_model.predict(sm.add_constant(covandpcs_test.iloc[:, 2:]))

            global prs_result
            for i in threshold_values:
                try:
                    prs_train = pd.read_table(
                        traindirec + os.sep + Name + os.sep + "train_data.pv_{}.profile".format(i),
                        sep="\s+",
                        usecols=["FID", "IID", "SCORE"]
                    )
                except:
                    continue

                prs_train['FID'] = prs_train['FID'].astype(str)
                prs_train['IID'] = prs_train['IID'].astype(str)
                try:
                    prs_test = pd.read_table(
                        traindirec + os.sep + Name + os.sep + "test_data.pv_{}.profile".format(i),
                        sep="\s+",
                        usecols=["FID", "IID", "SCORE"]
                    )
                
                except:
                    continue
                prs_test['FID'] = prs_test['FID'].astype(str)
                prs_test['IID'] = prs_test['IID'].astype(str)
                pheno_prs_train = pd.merge(covandpcs_train, prs_train, on=["FID", "IID"])
                pheno_prs_test = pd.merge(covandpcs_test, prs_test, on=["FID", "IID"])

                try:
                    model = sm.Logit(phenotype_train["Phenotype"], sm.add_constant(pheno_prs_train.iloc[:, 2:])).fit_regularized(alpha=tempalpha, L1_wt=l1weight)
                except:
                    continue

                train_best_predicted = model.predict(sm.add_constant(pheno_prs_train.iloc[:, 2:]))
                test_best_predicted = model.predict(sm.add_constant(pheno_prs_test.iloc[:, 2:]))

                prs_result = prs_result.append({
                    "clump_p1": c1_val,
                    "clump_r2": c2_val,
                    "clump_kb": c3_val,
                    "p_window_size": p1_val,
                    "p_slide_size": p2_val,
                    "p_LD_threshold": p3_val,
                    "pvalue": i,
                    "numberofpca": p,

                    "tempalpha": str(tempalpha),
                    "l1weight": str(l1weight),
                    
                    "Tier":t,
                    "pvalue_AnnoPred":pp,
                    "datafile":datafile,
                    
                    "Train_pure_prs": roc_auc_score(phenotype_train["Phenotype"].values, prs_train['SCORE'].values),
                    "Train_null_model": roc_auc_score(phenotype_train["Phenotype"].values, train_null_predicted),
                    "Train_best_model": roc_auc_score(phenotype_train["Phenotype"].values, train_best_predicted),

                    "Test_pure_prs": roc_auc_score(phenotype_test["Phenotype"].values, prs_test['SCORE'].values),
                    "Test_null_model": roc_auc_score(phenotype_test["Phenotype"].values, test_null_predicted),
                    "Test_best_model": roc_auc_score(phenotype_test["Phenotype"].values, test_best_predicted),

                }, ignore_index=True)

                prs_result.to_csv(os.path.join(traindirec, Name, "Results.csv"), index=False)

    return

# This function fits the continuous model on the PRS.
def fit_continous_phenotype_on_PRS(traindirec, newtrainfilename,p, t,pp,datafile, p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile):
    threshold_values = allpvalues
    from sklearn.preprocessing import MinMaxScaler
    # Merge the covariates, PCA and phenotypes.
    tempphenotype_train = pd.read_table(os.path.join(traindirec, newtrainfilename + ".clumped.pruned.fam"), sep="\s+", header=None)
    phenotype_train = pd.DataFrame()
    phenotype_train["Phenotype"] = tempphenotype_train[5].values
    pcs_train = pd.read_table(os.path.join(traindirec, trainfilename + ".eigenvec"),sep="\s+", header=None,names=["FID", "IID"] + ["PC{}".format(i) for i in range(1, int(p) + 1)])

    covariate_train = pd.read_table(os.path.join(traindirec, trainfilename + ".cov"), sep="\s+")
    covariate_train.fillna(0, inplace=True)
    covariate_train = covariate_train[covariate_train["FID"].isin(pcs_train["FID"].values) & covariate_train["IID"].isin(pcs_train["IID"].values)]
    covariate_train['FID'] = covariate_train['FID'].astype(str)
    pcs_train['FID'] = pcs_train['FID'].astype(str)
    covariate_train['IID'] = covariate_train['IID'].astype(str)
    pcs_train['IID'] = pcs_train['IID'].astype(str)
    covandpcs_train = pd.merge(covariate_train, pcs_train, on=["FID", "IID"])
    covandpcs_train.fillna(0, inplace=True)

    # Scale the covariates
    scaler = MinMaxScaler()
    normalized_values_train = scaler.fit_transform(covandpcs_train.iloc[:, 2:])

    tempphenotype_test = pd.read_table(os.path.join(traindirec, testfilename + ".clumped.pruned.fam"), sep="\s+", header=None)
    phenotype_test = pd.DataFrame()
    phenotype_test["Phenotype"] = tempphenotype_test[5].values
    pcs_test = pd.read_table(os.path.join(traindirec, testfilename + ".eigenvec"),sep="\s+", header=None,names=["FID", "IID"] + ["PC{}".format(i) for i in range(1, int(p) + 1)])
    covariate_test = pd.read_table(os.path.join(traindirec, testfilename + ".cov"), sep="\s+")
    covariate_test.fillna(0, inplace=True)
    covariate_test = covariate_test[covariate_test["FID"].isin(pcs_test["FID"].values) & covariate_test["IID"].isin(pcs_test["IID"].values)]
    covariate_test['FID'] = covariate_test['FID'].astype(str)
    pcs_test['FID'] = pcs_test['FID'].astype(str)
    covariate_test['IID'] = covariate_test['IID'].astype(str)
    pcs_test['IID'] = pcs_test['IID'].astype(str)
    covandpcs_test = pd.merge(covariate_test, pcs_test, on=["FID", "IID"])
    covandpcs_test.fillna(0, inplace=True)
    normalized_values_test = scaler.transform(covandpcs_test.iloc[:, 2:])
    

    tempalphas = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
    l1weights = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]

    tempalphas = [0.1]
    l1weights = [0.1]

    for tempalpha in tempalphas:
        for l1weight in l1weights:
            try:
                #null_model = sm.OLS(phenotype_train["Phenotype"], sm.add_constant(covandpcs_train.iloc[:, 2:])).fit_regularized(alpha=tempalpha, L1_wt=l1weight)
                null_model = sm.OLS(phenotype_train["Phenotype"], sm.add_constant(covandpcs_train.iloc[:, 2:])).fit()
            
            except:
                # Flag a failed null-model fit and move on to the next combination.
                print "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
                continue

            train_null_predicted = null_model.predict(sm.add_constant(covandpcs_train.iloc[:, 2:]))

            test_null_predicted = null_model.predict(sm.add_constant(covandpcs_test.iloc[:, 2:]))

            global prs_result
            for i in threshold_values:
                try:
                    prs_train = pd.read_table(
                        traindirec + os.sep + Name + os.sep + "train_data.pv_{}.profile".format(i),
                        sep="\s+",
                        usecols=["FID", "IID", "SCORE"]
                    )
                except:
                    continue

                prs_train['FID'] = prs_train['FID'].astype(str)
                prs_train['IID'] = prs_train['IID'].astype(str)
                try:
                    prs_test = pd.read_table(
                        traindirec + os.sep + Name + os.sep + "test_data.pv_{}.profile".format(i),
                        sep="\s+",
                        usecols=["FID", "IID", "SCORE"]
                    )

                except:
                    continue
                prs_test['FID'] = prs_test['FID'].astype(str)
                prs_test['IID'] = prs_test['IID'].astype(str)
                pheno_prs_train = pd.merge(covandpcs_train, prs_train, on=["FID", "IID"])
                pheno_prs_test = pd.merge(covandpcs_test, prs_test, on=["FID", "IID"])

                try:
                    #model = sm.OLS(phenotype_train["Phenotype"], sm.add_constant(pheno_prs_train.iloc[:, 2:])).fit_regularized(alpha=tempalpha, L1_wt=l1weight)
                    model = sm.OLS(phenotype_train["Phenotype"], sm.add_constant(pheno_prs_train.iloc[:, 2:])).fit()
                
                except:
                    continue

                train_best_predicted = model.predict(sm.add_constant(pheno_prs_train.iloc[:, 2:]))
                test_best_predicted = model.predict(sm.add_constant(pheno_prs_test.iloc[:, 2:]))

                prs_result = prs_result.append({
                    "clump_p1": c1_val,
                    "clump_r2": c2_val,
                    "clump_kb": c3_val,
                    "p_window_size": p1_val,
                    "p_slide_size": p2_val,
                    "p_LD_threshold": p3_val,
                    "pvalue": i,
                    "numberofpca": p,
                    
                    "Tier":t,
                    "pvalue_AnnoPred":pp,
                    "datafile":datafile,
                    
                    "tempalpha": str(tempalpha),
                    "l1weight": str(l1weight),
                    
                    "Train_pure_prs": explained_variance_score(phenotype_train["Phenotype"].values, prs_train['SCORE'].values),
                    "Train_null_model": explained_variance_score(phenotype_train["Phenotype"].values, train_null_predicted),
                    "Train_best_model": explained_variance_score(phenotype_train["Phenotype"].values, train_best_predicted),

                    "Test_pure_prs": explained_variance_score(phenotype_test["Phenotype"].values, prs_test['SCORE'].values),
                    "Test_null_model": explained_variance_score(phenotype_test["Phenotype"].values, test_null_predicted),
                    "Test_best_model": explained_variance_score(phenotype_test["Phenotype"].values, test_best_predicted),

                }, ignore_index=True)

                prs_result.to_csv(os.path.join(traindirec, Name, "Results.csv"), index=False)

    return

Execute AnnoPred#

import os
import subprocess
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import explained_variance_score
from sklearn.preprocessing import MinMaxScaler

def transform_annopred_data(traindirec, newtrainfilename,numberofpca, tier,pvalue,p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile):
    import shutil
    import os

    def remove_all_in_directory(directory_path):
        if not os.path.exists(directory_path):
            print "The directory {} does not exist.".format(directory_path)
            return

        for item in os.listdir(directory_path):
            item_path = os.path.join(directory_path, item)

            try:
                if os.path.isfile(item_path):
                    os.remove(item_path)
                elif os.path.isdir(item_path):
                    shutil.rmtree(item_path)
            except Exception as e:
                print "Failed to remove {}. Reason: {}".format(item_path, e)

        print "All files and directories in {} have been removed.".format(directory_path)


    ### First perform clumping on the file and save the clumped file.
    #perform_clumping_and_pruning_on_individual_data(traindirec, newtrainfilename,p, p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile)
    
    #newtrainfilename = newtrainfilename+".clumped.pruned"
    #testfilename = testfilename+".clumped.pruned"
    
    
    #clupmedfile = traindirec+os.sep+newtrainfilename+".clump"
    #prunedfile = traindirec+os.sep+newtrainfilename+".clumped.pruned"

        
    # Also extract the PCA at this point for both test and training data.
    #calculate_pca_for_traindata_testdata_for_clumped_pruned_snps(traindirec, newtrainfilename,p)

    #Extract p-values from the GWAS file.
    # Command for Linux.
    os.system("awk "+"\'"+"{print $3,$8}"+"\'"+" ./"+filedirec+os.sep+filedirec+".txt >  ./"+traindirec+os.sep+"SNP.pvalue")
  
    # Remove all files in the AnnoPred output directories.
    remove_all_in_directory(traindirec+os.sep+"AnnoPred_test_output")
    remove_all_in_directory(traindirec+os.sep+"AnnoPred_tmp_test") 

    create_directory(traindirec+os.sep+"AnnoPred_test_output")
    create_directory(traindirec+os.sep+"AnnoPred_tmp_test")
 

    ## AnnoPred overwrites the files in the ref directory.
    ## So we first calculate the heritability using LDSC ourselves and produce the
    ## LDSC results file for each tier (e.g., tier0_ldsc_results), which is then
    ## passed to AnnoPred.
    
    munge_command = [
        './munge_sumstats.py',
        '--out', traindirec+os.sep+"AnnoPred_tmp_test"+os.sep+"Curated_GWAS",
        '--merge-alleles', '/data/ascher01/uqmmune1/BenchmarkingPGSTools/ref/Misc/w_hm3.snplist',
        '--N', str(Numberofsamples),
        '--sumstats', filedirec+os.sep+filedirec+'.AnnoPred'
    ]
    
    subprocess.call(munge_command)
    # Step 2: Run ldsc.py
    ldsc_command = [
        './ldsc.py',
        '--h2', traindirec+os.sep+"AnnoPred_tmp_test"+os.sep+"Curated_GWAS.sumstats.gz",
        '--ref-ld-chr', '/data/ascher01/uqmmune1/BenchmarkingPGSTools/ref/Annotations/Baseline/baseline.,'
                       '/data/ascher01/uqmmune1/BenchmarkingPGSTools/ref/Annotations/GenoCanyon/GenoCanyon_Func.,'
                       '/data/ascher01/uqmmune1/BenchmarkingPGSTools/ref/Annotations/GenoSkyline/Brain.,'
                       '/data/ascher01/uqmmune1/BenchmarkingPGSTools/ref/Annotations/GenoSkyline/GI.,'
                       '/data/ascher01/uqmmune1/BenchmarkingPGSTools/ref/Annotations/GenoSkyline/Lung.,'
                       '/data/ascher01/uqmmune1/BenchmarkingPGSTools/ref/Annotations/GenoSkyline/Heart.,'
                       '/data/ascher01/uqmmune1/BenchmarkingPGSTools/ref/Annotations/GenoSkyline/Blood.,'
                       '/data/ascher01/uqmmune1/BenchmarkingPGSTools/ref/Annotations/GenoSkyline/Muscle.,'
                       '/data/ascher01/uqmmune1/BenchmarkingPGSTools/ref/Annotations/GenoSkyline/Epithelial.',
        '--out', traindirec+os.sep+'AnnoPred_tmp_test/tier0_ldsc',
        '--overlap-annot',
        # This is the AnnoPred reference set
        '--frqfile-chr', '/data/ascher01/uqmmune1/BenchmarkingPGSTools/ref/Misc/1000G.mac5eur.',
        '--w-ld-chr', '/data/ascher01/uqmmune1/BenchmarkingPGSTools/ref/Misc/weights.',
        '--print-coefficients'
    ]
    subprocess.call(ldsc_command)    
    
    
 
    
    command = [
        "python",
        "AnnoPred.py",
        "--sumstats",filedirec + os.sep + filedirec+".AnnoPred",
        "--ref_gt",traindirec+os.sep+newtrainfilename+".clumped.pruned",
        "--val_gt",traindirec+os.sep+newtrainfilename+".clumped.pruned",
        "--coord_out",traindirec+os.sep+"AnnoPred_test_output"+os.sep+"coord_out",
    
        "--N_sample",str(int(Numberofsamples)),
        "--annotation_flag",tier,
        "--P",str(pvalue),
        "--local_ld_prefix",traindirec+os.sep+"AnnoPred_tmp_test"+os.sep+"local_ld",
        "--out",traindirec+os.sep+"AnnoPred_test_output"+os.sep+"test",
        "--temp_dir",traindirec+os.sep+"AnnoPred_tmp_test"
    ]
    print(" ".join(command))
    subprocess.call(command)        
 
    
    data1 = traindirec+os.sep+"AnnoPred_test_output"+os.sep+"test_h2_inf_betas_"+str(pvalue)+".txt"
    data2 = traindirec+os.sep+"AnnoPred_test_output"+os.sep+"test_h2_non_inf_betas_"+str(pvalue)+".txt"
    data3 = traindirec+os.sep+"AnnoPred_test_output"+os.sep+"test_pT_inf_betas_"+str(pvalue)+".txt"
    data4 = traindirec+os.sep+"AnnoPred_test_output"+os.sep+"test_pT_non_inf_betas_"+str(pvalue)+".txt"
 
    
    datafiles = [data1,data2,data3,data4]
    for datafile in datafiles: 
        # Calculate the plink score for each of the four AnnoPred output files.
        try:
            # Read the betas for this particular output file. (The beta column
            # name below assumes the *_inf_betas_* files; the non-inf/pT files
            # may use a different column name.)
            tempgwas = pd.read_csv(datafile, sep="\s+")
        except:
            print("GWAS not generated!")
            return
        
        
        if check_phenotype_is_binary_or_continous(filedirec)=="Binary":
            # For binary phenotypes, convert the AnnoPred betas back to OR.
            tempgwas["AnnoPred_inf_beta"] = np.exp(tempgwas["AnnoPred_inf_beta"])
        else:
            pass
            

        tempgwas = tempgwas.rename(columns={"sid": "SNP", "nt1": "A1", "AnnoPred_inf_beta": "BETA"})
        tempgwas[["SNP","A1","BETA"]].to_csv(traindirec+os.sep+"AnnoPred_GWAS.txt",sep="\t",index=False)        
        
        
        command = [
            "./plink",
             "--bfile", traindirec+os.sep+newtrainfilename+".clumped.pruned",
            ### SNP column = 1, effect allele column = 2, score (BETA/OR) column = 3
            "--score", traindirec+os.sep+"AnnoPred_GWAS.txt", "1", "2", "3", "header",
            "--q-score-range", traindirec+os.sep+"range_list",traindirec+os.sep+"SNP.pvalue",
            "--extract", traindirec+os.sep+trainfilename+".valid.snp",
            "--out", traindirec+os.sep+Name+os.sep+trainfilename
        ]
 
        subprocess.call(command)
 
        # Calculate the PRS for the test data using the same set of SNPs.
    
 
    
        command = [
            "./plink",
            "--bfile", folddirec+os.sep+testfilename+".clumped.pruned",
            ### SNP column = 1, effect allele column = 2, score (BETA/OR) column = 3
            "--score", traindirec+os.sep+"AnnoPred_GWAS.txt", "1", "2", "3", "header",
            "--q-score-range", traindirec+os.sep+"range_list",traindirec+os.sep+"SNP.pvalue",
            "--extract", traindirec+os.sep+trainfilename+".valid.snp",
            "--out", folddirec+os.sep+Name+os.sep+testfilename
        ]
        subprocess.call(command)
    
    
        if check_phenotype_is_binary_or_continous(filedirec)=="Binary":
            print("Binary Phenotype!")
            fit_binary_phenotype_on_PRS(traindirec, newtrainfilename,p,t,pvalue,os.path.basename(datafile), p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile)
        else:
            print("Continous Phenotype!")
            fit_continous_phenotype_on_PRS(traindirec, newtrainfilename,p, t,pvalue,os.path.basename(datafile), p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile)

 
   
    
# AnnoPred offers 4 tiers of annotations for calculating the priors:
# tier0: baseline + GenoCanyon + 7 GenoSkyline (Brain, GI, Lung, Heart, Blood, Muscle, Epithelial)
# tier1: baseline + GenoCanyon
# tier2: baseline + GenoCanyon + 7 GenoSkyline_Plus (Immune, Brain, CV, Muscle, GI, Epithelial)
# tier3: baseline + GenoCanyon + 66 GenoSkyline
 

tiers = ['tier0','tier1','tier2','tier3']
tiers = ['tier0']
tempallpvalues = [allpvalues[-1]]
result_directory = "AnnoPred"
# Nested loops to iterate over different parameter values
create_directory(folddirec+os.sep+result_directory)
for p1_val in p_window_size:
 for p2_val in p_slide_size:
  for p3_val in p_LD_threshold:
   for c1_val in clump_p1:
    for c2_val in clump_r2:
     for c3_val in clump_kb:
      for p in numberofpca:
       for t in tiers:
        for pvalue in tempallpvalues:
         transform_annopred_data(folddirec, newtrainfilename, p,t,pvalue, str(p1_val), str(p2_val), str(p3_val), str(c1_val), str(c2_val), str(c3_val), result_directory, pvaluefile)
python AnnoPred.py --sumstats SampleData1/SampleData1.AnnoPred --ref_gt SampleData1/Fold_0/train_data.QC.clumped.pruned --val_gt SampleData1/Fold_0/train_data.QC.clumped.pruned --coord_out SampleData1/Fold_0/AnnoPred_test_output/coord_out --N_sample 388028 --annotation_flag tier0 --P 1.0 --local_ld_prefix SampleData1/Fold_0/AnnoPred_tmp_test/local_ld --out SampleData1/Fold_0/AnnoPred_test_output/test --temp_dir SampleData1/Fold_0/AnnoPred_tmp_test
Continous Phenotype!
/data/ascher01/uqmmune1/miniconda3/envs/ldscc/lib/python2.7/site-packages/ipykernel_launcher.py:233: FutureWarning: read_table is deprecated, use read_csv instead.
/data/ascher01/uqmmune1/miniconda3/envs/ldscc/lib/python2.7/site-packages/ipykernel_launcher.py:236: FutureWarning: read_table is deprecated, use read_csv instead.
/data/ascher01/uqmmune1/miniconda3/envs/ldscc/lib/python2.7/site-packages/ipykernel_launcher.py:238: FutureWarning: read_table is deprecated, use read_csv instead.
/data/ascher01/uqmmune1/miniconda3/envs/ldscc/lib/python2.7/site-packages/ipykernel_launcher.py:239: FutureWarning: read_table is deprecated, use read_csv instead.
/data/ascher01/uqmmune1/miniconda3/envs/ldscc/lib/python2.7/site-packages/ipykernel_launcher.py:253: FutureWarning: read_table is deprecated, use read_csv instead.
/data/ascher01/uqmmune1/miniconda3/envs/ldscc/lib/python2.7/site-packages/ipykernel_launcher.py:256: FutureWarning: read_table is deprecated, use read_csv instead.
/data/ascher01/uqmmune1/miniconda3/envs/ldscc/lib/python2.7/site-packages/ipykernel_launcher.py:257: FutureWarning: read_table is deprecated, use read_csv instead.
/data/ascher01/uqmmune1/miniconda3/envs/ldscc/lib/python2.7/site-packages/ipykernel_launcher.py:298: FutureWarning: read_table is deprecated, use read_csv instead.
/data/ascher01/uqmmune1/miniconda3/envs/ldscc/lib/python2.7/site-packages/ipykernel_launcher.py:309: FutureWarning: read_table is deprecated, use read_csv instead.
Continous Phenotype!
Continous Phenotype!
Continous Phenotype!

Repeat the process for each fold.#

Change the foldnumber variable.

#foldnumber = sys.argv[1]
foldnumber = "0"  # Setting 'foldnumber' to "0"

Or uncomment the following line and pass the fold number as a command-line argument:

# foldnumber = sys.argv[1]
python AnnoPredCode.py 0
python AnnoPredCode.py 1
python AnnoPredCode.py 2
python AnnoPredCode.py 3
python AnnoPredCode.py 4
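To run all five folds sequentially from a single driver script, a minimal sketch (assuming the per-fold script is named AnnoPredCode.py, as above):

import subprocess

# Hypothetical driver: run the per-fold script for folds 0-4 in sequence.
for fold in range(5):
    subprocess.call(["python", "AnnoPredCode.py", str(fold)])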

The following files should exist after the execution:

  1. SampleData1/Fold_0/AnnoPred/Results.csv

  2. SampleData1/Fold_1/AnnoPred/Results.csv

  3. SampleData1/Fold_2/AnnoPred/Results.csv

  4. SampleData1/Fold_3/AnnoPred/Results.csv

  5. SampleData1/Fold_4/AnnoPred/Results.csv

Check the results file for each fold.#

import os
import pandas as pd

  
# List of file names to check for existence
f = [
    "./"+filedirec+"/Fold_0"+os.sep+result_directory+os.sep+"Results.csv",
    "./"+filedirec+"/Fold_1"+os.sep+result_directory+os.sep+"Results.csv",
    "./"+filedirec+"/Fold_2"+os.sep+result_directory+os.sep+"Results.csv",
    "./"+filedirec+"/Fold_3"+os.sep+result_directory+os.sep+"Results.csv",
    "./"+filedirec+"/Fold_4"+os.sep+result_directory+os.sep+"Results.csv",
]

 

# Loop through each file name in the list
for loop in range(0,5):
    # Check if the file exists in the specified directory for the given fold
    if os.path.exists(filedirec+os.sep+"Fold_"+str(loop)+os.sep+result_directory+os.sep+"Results.csv"):
        temp = pd.read_csv(filedirec+os.sep+"Fold_"+str(loop)+os.sep+result_directory+os.sep+"Results.csv")
        print("Fold_",loop, "Yes, the file exists.")
        #print(temp.head())
        print("Number of P-values processed: ",len(temp))
        # Print a message indicating that the file exists
    
    else:
        # Print a message indicating that the file does not exist
        print("Fold_",loop, "No, the file does not exist.")
('Fold_', 0, 'Yes, the file exists.')
('Number of P-values processed: ', 80)
('Fold_', 1, 'Yes, the file exists.')
('Number of P-values processed: ', 80)
('Fold_', 2, 'Yes, the file exists.')
('Number of P-values processed: ', 80)
('Fold_', 3, 'Yes, the file exists.')
('Number of P-values processed: ', 80)
('Fold_', 4, 'Yes, the file exists.')
('Number of P-values processed: ', 80)

Sum the results for each fold.#

print("We have to ensure when we sum the entries across all Folds, the same rows are merged!")

def sum_and_average_columns(data_frames):
    """Sum and average numerical columns across multiple DataFrames, and keep non-numerical columns unchanged."""
    # Initialize DataFrame to store the summed results for numerical columns
    summed_df = pd.DataFrame()
    non_numerical_df = pd.DataFrame()
    
    for df in data_frames:
        # Identify numerical and non-numerical columns
        numerical_cols = df.select_dtypes(include=[np.number]).columns
        non_numerical_cols = df.select_dtypes(exclude=[np.number]).columns
        
        # Sum numerical columns
        if summed_df.empty:
            summed_df = pd.DataFrame(0, index=range(len(df)), columns=numerical_cols)
        
        summed_df[numerical_cols] = summed_df[numerical_cols].add(df[numerical_cols], fill_value=0)
        
        # Keep non-numerical columns (take the first non-numerical entry for each column)
        if non_numerical_df.empty:
            non_numerical_df = df[non_numerical_cols]
        else:
            non_numerical_df[non_numerical_cols] = non_numerical_df[non_numerical_cols].combine_first(df[non_numerical_cols])
    
    # Divide the summed values by the number of dataframes to get the average
    averaged_df = summed_df / len(data_frames)
    
    # Combine numerical and non-numerical DataFrames
    result_df = pd.concat([averaged_df, non_numerical_df], axis=1)
    
    return result_df

from functools import reduce
import numpy as np
import os
import pandas as pd
def dataframe_to_markdown(df):
    # Create the header
    header = "| " + " | ".join(df.columns) + " |"
    separator = "| " + " | ".join(['---'] * len(df.columns)) + " |"
    
    # Create the rows
    rows = []
    for index, row in df.iterrows():
        row_string = "| " + " | ".join([str(item) for item in row]) + " |"
        rows.append(row_string)
    
    # Combine all parts into the final markdown string
    markdown = header + "\n" + separator + "\n" + "\n".join(rows)
    return markdown

def find_common_rows(allfoldsframe):
    # Define the performance columns that need to be excluded
    performance_columns = [
        'Train_null_model', 'Train_pure_prs', 'Train_best_model',
        'Test_pure_prs', 'Test_null_model', 'Test_best_model'
    ]
    
    important_columns = [
        'clump_p1',
        'clump_r2',
        'clump_kb',
        'p_window_size',
        'p_slide_size',
        'p_LD_threshold',
        'pvalue',
        'referencepanel',
        'PRSice-2_Model',
        'effectsizes',
        'h2model',
                    
        "Tier",
        "pvalue_AnnoPred",
        "datafile",
              
    ]
    # Function to remove performance columns from a DataFrame
    def drop_performance_columns(df):
        return df.drop(columns=performance_columns, errors='ignore')
    
    def get_important_columns(df ):
        existing_columns = [col for col in important_columns if col in df.columns]
        if existing_columns:
            return df[existing_columns].copy()
        else:
            return pd.DataFrame()

    # Drop performance columns from all DataFrames in the list
    allfoldsframe_dropped = [drop_performance_columns(df) for df in allfoldsframe]
    
    # Get the important columns.
    allfoldsframe_dropped = [get_important_columns(df) for df in allfoldsframe_dropped]    
    
    common_rows = allfoldsframe_dropped[0]
    print(dataframe_to_markdown(common_rows.head()))
    
    
    
    for i in range(1, len(allfoldsframe_dropped)):
        # Get the next DataFrame
        next_df = allfoldsframe_dropped[i]

        # Count unique rows in the current DataFrame and the next DataFrame
        unique_in_common = common_rows.shape[0]
        unique_in_next = next_df.shape[0]

        # Find common rows between the current common_rows and the next DataFrame
        common_rows = pd.merge(common_rows, next_df, how='inner')
    
        # Count the common rows after merging
        common_count = common_rows.shape[0]
        print(dataframe_to_markdown(common_rows.head()))
    
        # Print the unique and common row counts
        print("Iteration {}:".format(i))
        print("Unique rows in current common DataFrame: {}".format(unique_in_common))
        print("Unique rows in next DataFrame: {}".format(unique_in_next))
        print("Common rows after merge: {}\n".format(common_count))
    
    # Now that we have the common rows, extract these from the original DataFrames
    extracted_common_rows_frames = []
    for original_df in allfoldsframe:
        # Merge the common rows with the original DataFrame, keeping only the rows that match the common rows
        extracted_common_rows = pd.merge(common_rows, original_df, how='inner', on=common_rows.columns.tolist())
        
        # Add the DataFrame with the extracted common rows to the list
        extracted_common_rows_frames.append(extracted_common_rows)

    # Print the number of rows in the common DataFrames
    for i, df in enumerate(extracted_common_rows_frames):
        print("DataFrame {} with extracted common rows has {} rows.".format(i + 1, df.shape[0]))

    # Return the list of DataFrames with extracted common rows
    return extracted_common_rows_frames


# Example usage (assuming allfoldsframe is populated as shown earlier):
allfoldsframe = []

# Loop through each file name in the list
for loop in range(0, 5):
    # Check if the file exists in the specified directory for the given fold
    file_path = os.path.join(filedirec, "Fold_" + str(loop), result_directory, "Results.csv")
    if os.path.exists(file_path):
        allfoldsframe.append(pd.read_csv(file_path))
        # Print a message indicating that the file exists
        print("Fold_", loop, "Yes, the file exists.")
    else:
        # Print a message indicating that the file does not exist
        print("Fold_", loop, "No, the file does not exist.")

# Find the common rows across all folds and return the list of extracted common rows
extracted_common_rows_list = find_common_rows(allfoldsframe)
 
# Sum the values column-wise
# For string values, do not sum; the values are the same for each fold.
# Only sum the numeric values.

divided_result = sum_and_average_columns(extracted_common_rows_list)
  
print(divided_result)

 
We have to ensure when we sum the entries across all Folds, the same rows are merged!
('Fold_', 0, 'Yes, the file exists.')
('Fold_', 1, 'Yes, the file exists.')
('Fold_', 2, 'Yes, the file exists.')
('Fold_', 3, 'Yes, the file exists.')
('Fold_', 4, 'Yes, the file exists.')
| clump_p1 | clump_r2 | clump_kb | p_window_size | p_slide_size | p_LD_threshold | pvalue | Tier | pvalue_AnnoPred | datafile |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 1e-10 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 3.35981828628e-10 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 1.12883789168e-09 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 3.79269019073e-09 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 1.2742749857e-08 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
| clump_p1 | clump_r2 | clump_kb | p_window_size | p_slide_size | p_LD_threshold | pvalue | Tier | pvalue_AnnoPred | datafile |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 1e-10 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 3.35981828628e-10 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 1.12883789168e-09 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 3.79269019073e-09 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 1.2742749857e-08 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
Iteration 1:
Unique rows in current common DataFrame: 80
Unique rows in next DataFrame: 80
Common rows after merge: 80

| clump_p1 | clump_r2 | clump_kb | p_window_size | p_slide_size | p_LD_threshold | pvalue | Tier | pvalue_AnnoPred | datafile |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 1e-10 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 3.35981828628e-10 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 1.12883789168e-09 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 3.79269019073e-09 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 1.2742749857e-08 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
Iteration 2:
Unique rows in current common DataFrame: 80
Unique rows in next DataFrame: 80
Common rows after merge: 80

| clump_p1 | clump_r2 | clump_kb | p_window_size | p_slide_size | p_LD_threshold | pvalue | Tier | pvalue_AnnoPred | datafile |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 1e-10 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 3.35981828628e-10 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 1.12883789168e-09 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 3.79269019073e-09 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 1.2742749857e-08 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
Iteration 3:
Unique rows in current common DataFrame: 80
Unique rows in next DataFrame: 80
Common rows after merge: 80

| clump_p1 | clump_r2 | clump_kb | p_window_size | p_slide_size | p_LD_threshold | pvalue | Tier | pvalue_AnnoPred | datafile |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 1e-10 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 3.35981828628e-10 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 1.12883789168e-09 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 3.79269019073e-09 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
| 1 | 0.1 | 200 | 200 | 50 | 0.25 | 1.2742749857e-08 | tier0 | 1.0 | test_h2_inf_betas_1.0.txt |
Iteration 4:
Unique rows in current common DataFrame: 80
Unique rows in next DataFrame: 80
Common rows after merge: 80

DataFrame 1 with extracted common rows has 80 rows.
DataFrame 2 with extracted common rows has 80 rows.
DataFrame 3 with extracted common rows has 80 rows.
DataFrame 4 with extracted common rows has 80 rows.
DataFrame 5 with extracted common rows has 80 rows.
    clump_p1  clump_r2  clump_kb  p_window_size  p_slide_size  p_LD_threshold  \
0        1.0       0.1     200.0          200.0          50.0            0.25   
1        1.0       0.1     200.0          200.0          50.0            0.25   
2        1.0       0.1     200.0          200.0          50.0            0.25   
3        1.0       0.1     200.0          200.0          50.0            0.25   
4        1.0       0.1     200.0          200.0          50.0            0.25   
5        1.0       0.1     200.0          200.0          50.0            0.25   
6        1.0       0.1     200.0          200.0          50.0            0.25   
7        1.0       0.1     200.0          200.0          50.0            0.25   
8        1.0       0.1     200.0          200.0          50.0            0.25   
9        1.0       0.1     200.0          200.0          50.0            0.25   
10       1.0       0.1     200.0          200.0          50.0            0.25   
11       1.0       0.1     200.0          200.0          50.0            0.25   
12       1.0       0.1     200.0          200.0          50.0            0.25   
13       1.0       0.1     200.0          200.0          50.0            0.25   
14       1.0       0.1     200.0          200.0          50.0            0.25   
15       1.0       0.1     200.0          200.0          50.0            0.25   
16       1.0       0.1     200.0          200.0          50.0            0.25   
17       1.0       0.1     200.0          200.0          50.0            0.25   
18       1.0       0.1     200.0          200.0          50.0            0.25   
19       1.0       0.1     200.0          200.0          50.0            0.25   
20       1.0       0.1     200.0          200.0          50.0            0.25   
21       1.0       0.1     200.0          200.0          50.0            0.25   
22       1.0       0.1     200.0          200.0          50.0            0.25   
23       1.0       0.1     200.0          200.0          50.0            0.25   
24       1.0       0.1     200.0          200.0          50.0            0.25   
25       1.0       0.1     200.0          200.0          50.0            0.25   
26       1.0       0.1     200.0          200.0          50.0            0.25   
27       1.0       0.1     200.0          200.0          50.0            0.25   
28       1.0       0.1     200.0          200.0          50.0            0.25   
29       1.0       0.1     200.0          200.0          50.0            0.25   
..       ...       ...       ...            ...           ...             ...   
50       1.0       0.1     200.0          200.0          50.0            0.25   
51       1.0       0.1     200.0          200.0          50.0            0.25   
52       1.0       0.1     200.0          200.0          50.0            0.25   
53       1.0       0.1     200.0          200.0          50.0            0.25   
54       1.0       0.1     200.0          200.0          50.0            0.25   
55       1.0       0.1     200.0          200.0          50.0            0.25   
56       1.0       0.1     200.0          200.0          50.0            0.25   
57       1.0       0.1     200.0          200.0          50.0            0.25   
58       1.0       0.1     200.0          200.0          50.0            0.25   
59       1.0       0.1     200.0          200.0          50.0            0.25   
60       1.0       0.1     200.0          200.0          50.0            0.25   
61       1.0       0.1     200.0          200.0          50.0            0.25   
62       1.0       0.1     200.0          200.0          50.0            0.25   
63       1.0       0.1     200.0          200.0          50.0            0.25   
64       1.0       0.1     200.0          200.0          50.0            0.25   
65       1.0       0.1     200.0          200.0          50.0            0.25   
66       1.0       0.1     200.0          200.0          50.0            0.25   
67       1.0       0.1     200.0          200.0          50.0            0.25   
68       1.0       0.1     200.0          200.0          50.0            0.25   
69       1.0       0.1     200.0          200.0          50.0            0.25   
70       1.0       0.1     200.0          200.0          50.0            0.25   
71       1.0       0.1     200.0          200.0          50.0            0.25   
72       1.0       0.1     200.0          200.0          50.0            0.25   
73       1.0       0.1     200.0          200.0          50.0            0.25   
74       1.0       0.1     200.0          200.0          50.0            0.25   
75       1.0       0.1     200.0          200.0          50.0            0.25   
76       1.0       0.1     200.0          200.0          50.0            0.25   
77       1.0       0.1     200.0          200.0          50.0            0.25   
78       1.0       0.1     200.0          200.0          50.0            0.25   
79       1.0       0.1     200.0          200.0          50.0            0.25   

          pvalue  pvalue_AnnoPred  numberofpca  Train_pure_prs  \
0   1.000000e-10              1.0          6.0        0.000017   
1   3.359818e-10              1.0          6.0        0.000018   
2   1.128838e-09              1.0          6.0        0.000027   
3   3.792690e-09              1.0          6.0        0.000032   
4   1.274275e-08              1.0          6.0        0.000029   
5   4.281332e-08              1.0          6.0        0.000023   
6   1.438450e-07              1.0          6.0        0.000024   
7   4.832930e-07              1.0          6.0        0.000018   
8   1.623777e-06              1.0          6.0        0.000019   
9   5.455595e-06              1.0          6.0        0.000019   
10  1.832981e-05              1.0          6.0        0.000021   
11  6.158482e-05              1.0          6.0        0.000019   
12  2.069138e-04              1.0          6.0        0.000016   
13  6.951928e-04              1.0          6.0        0.000013   
14  2.335721e-03              1.0          6.0        0.000011   
15  7.847600e-03              1.0          6.0        0.000009   
16  2.636651e-02              1.0          6.0        0.000007   
17  8.858668e-02              1.0          6.0        0.000004   
18  2.976351e-01              1.0          6.0        0.000003   
19  1.000000e+00              1.0          6.0        0.000001   
20  1.000000e-10              1.0          6.0        0.000017   
21  3.359818e-10              1.0          6.0        0.000018   
22  1.128838e-09              1.0          6.0        0.000027   
23  3.792690e-09              1.0          6.0        0.000032   
24  1.274275e-08              1.0          6.0        0.000029   
25  4.281332e-08              1.0          6.0        0.000023   
26  1.438450e-07              1.0          6.0        0.000024   
27  4.832930e-07              1.0          6.0        0.000018   
28  1.623777e-06              1.0          6.0        0.000019   
29  5.455595e-06              1.0          6.0        0.000019   
..           ...              ...          ...             ...   
50  1.832981e-05              1.0          6.0        0.000021   
51  6.158482e-05              1.0          6.0        0.000019   
52  2.069138e-04              1.0          6.0        0.000016   
53  6.951928e-04              1.0          6.0        0.000013   
54  2.335721e-03              1.0          6.0        0.000011   
55  7.847600e-03              1.0          6.0        0.000009   
56  2.636651e-02              1.0          6.0        0.000007   
57  8.858668e-02              1.0          6.0        0.000004   
58  2.976351e-01              1.0          6.0        0.000003   
59  1.000000e+00              1.0          6.0        0.000001   
60  1.000000e-10              1.0          6.0        0.000017   
61  3.359818e-10              1.0          6.0        0.000018   
62  1.128838e-09              1.0          6.0        0.000027   
63  3.792690e-09              1.0          6.0        0.000032   
64  1.274275e-08              1.0          6.0        0.000029   
65  4.281332e-08              1.0          6.0        0.000023   
66  1.438450e-07              1.0          6.0        0.000024   
67  4.832930e-07              1.0          6.0        0.000018   
68  1.623777e-06              1.0          6.0        0.000019   
69  5.455595e-06              1.0          6.0        0.000019   
70  1.832981e-05              1.0          6.0        0.000021   
71  6.158482e-05              1.0          6.0        0.000019   
72  2.069138e-04              1.0          6.0        0.000016   
73  6.951928e-04              1.0          6.0        0.000013   
74  2.335721e-03              1.0          6.0        0.000011   
75  7.847600e-03              1.0          6.0        0.000009   
76  2.636651e-02              1.0          6.0        0.000007   
77  8.858668e-02              1.0          6.0        0.000004   
78  2.976351e-01              1.0          6.0        0.000003   
79  1.000000e+00              1.0          6.0        0.000001   

    Train_null_model  Train_best_model  Test_pure_prs  Test_null_model  \
0            0.23001          0.232415       0.000009         0.118692   
1            0.23001          0.232588       0.000017         0.118692   
2            0.23001          0.236427       0.000031         0.118692   
3            0.23001          0.241038       0.000037         0.118692   
4            0.23001          0.243177       0.000033         0.118692   
5            0.23001          0.242658       0.000027         0.118692   
6            0.23001          0.246455       0.000029         0.118692   
7            0.23001          0.246647       0.000023         0.118692   
8            0.23001          0.253509       0.000024         0.118692   
9            0.23001          0.266624       0.000021         0.118692   
10           0.23001          0.291437       0.000023         0.118692   
11           0.23001          0.301691       0.000020         0.118692   
12           0.23001          0.311315       0.000017         0.118692   
13           0.23001          0.313856       0.000014         0.118692   
14           0.23001          0.322171       0.000011         0.118692   
15           0.23001          0.343210       0.000010         0.118692   
16           0.23001          0.360453       0.000008         0.118692   
17           0.23001          0.352818       0.000005         0.118692   
18           0.23001          0.361058       0.000003         0.118692   
19           0.23001          0.362624       0.000002         0.118692   
20           0.23001          0.232415       0.000009         0.118692   
21           0.23001          0.232588       0.000017         0.118692   
22           0.23001          0.236427       0.000031         0.118692   
23           0.23001          0.241038       0.000037         0.118692   
24           0.23001          0.243177       0.000033         0.118692   
25           0.23001          0.242658       0.000027         0.118692   
26           0.23001          0.246455       0.000029         0.118692   
27           0.23001          0.246647       0.000023         0.118692   
28           0.23001          0.253509       0.000024         0.118692   
29           0.23001          0.266624       0.000021         0.118692   
..               ...               ...            ...              ...   
50           0.23001          0.291437       0.000023         0.118692   
51           0.23001          0.301691       0.000020         0.118692   
52           0.23001          0.311315       0.000017         0.118692   
53           0.23001          0.313856       0.000014         0.118692   
54           0.23001          0.322171       0.000011         0.118692   
55           0.23001          0.343210       0.000010         0.118692   
56           0.23001          0.360453       0.000008         0.118692   
57           0.23001          0.352818       0.000005         0.118692   
58           0.23001          0.361058       0.000003         0.118692   
59           0.23001          0.362624       0.000002         0.118692   
60           0.23001          0.232415       0.000009         0.118692   
61           0.23001          0.232588       0.000017         0.118692   
62           0.23001          0.236427       0.000031         0.118692   
63           0.23001          0.241038       0.000037         0.118692   
64           0.23001          0.243177       0.000033         0.118692   
65           0.23001          0.242658       0.000027         0.118692   
66           0.23001          0.246455       0.000029         0.118692   
67           0.23001          0.246647       0.000023         0.118692   
68           0.23001          0.253509       0.000024         0.118692   
69           0.23001          0.266624       0.000021         0.118692   
70           0.23001          0.291437       0.000023         0.118692   
71           0.23001          0.301691       0.000020         0.118692   
72           0.23001          0.311315       0.000017         0.118692   
73           0.23001          0.313856       0.000014         0.118692   
74           0.23001          0.322171       0.000011         0.118692   
75           0.23001          0.343210       0.000010         0.118692   
76           0.23001          0.360453       0.000008         0.118692   
77           0.23001          0.352818       0.000005         0.118692   
78           0.23001          0.361058       0.000003         0.118692   
79           0.23001          0.362624       0.000002         0.118692   

    Test_best_model  l1weight  tempalpha   Tier                       datafile  
0          0.124464       0.1        0.1  tier0      test_h2_inf_betas_1.0.txt  
1          0.122845       0.1        0.1  tier0      test_h2_inf_betas_1.0.txt  
2          0.135947       0.1        0.1  tier0      test_h2_inf_betas_1.0.txt  
3          0.141388       0.1        0.1  tier0      test_h2_inf_betas_1.0.txt  
4          0.146849       0.1        0.1  tier0      test_h2_inf_betas_1.0.txt  
5          0.147139       0.1        0.1  tier0      test_h2_inf_betas_1.0.txt  
6          0.156848       0.1        0.1  tier0      test_h2_inf_betas_1.0.txt  
7          0.155471       0.1        0.1  tier0      test_h2_inf_betas_1.0.txt  
8          0.163486       0.1        0.1  tier0      test_h2_inf_betas_1.0.txt  
9          0.182181       0.1        0.1  tier0      test_h2_inf_betas_1.0.txt  
10         0.221274       0.1        0.1  tier0      test_h2_inf_betas_1.0.txt  
11         0.233316       0.1        0.1  tier0      test_h2_inf_betas_1.0.txt  
12         0.233093       0.1        0.1  tier0      test_h2_inf_betas_1.0.txt  
13         0.241143       0.1        0.1  tier0      test_h2_inf_betas_1.0.txt  
14         0.264164       0.1        0.1  tier0      test_h2_inf_betas_1.0.txt  
15         0.283059       0.1        0.1  tier0      test_h2_inf_betas_1.0.txt  
16         0.315101       0.1        0.1  tier0      test_h2_inf_betas_1.0.txt  
17         0.306749       0.1        0.1  tier0      test_h2_inf_betas_1.0.txt  
18         0.318931       0.1        0.1  tier0      test_h2_inf_betas_1.0.txt  
19         0.328600       0.1        0.1  tier0      test_h2_inf_betas_1.0.txt  
20         0.124464       0.1        0.1  tier0  test_h2_non_inf_betas_1.0.txt  
21         0.122845       0.1        0.1  tier0  test_h2_non_inf_betas_1.0.txt  
22         0.135947       0.1        0.1  tier0  test_h2_non_inf_betas_1.0.txt  
23         0.141388       0.1        0.1  tier0  test_h2_non_inf_betas_1.0.txt  
24         0.146849       0.1        0.1  tier0  test_h2_non_inf_betas_1.0.txt  
25         0.147139       0.1        0.1  tier0  test_h2_non_inf_betas_1.0.txt  
26         0.156848       0.1        0.1  tier0  test_h2_non_inf_betas_1.0.txt  
27         0.155471       0.1        0.1  tier0  test_h2_non_inf_betas_1.0.txt  
28         0.163486       0.1        0.1  tier0  test_h2_non_inf_betas_1.0.txt  
29         0.182181       0.1        0.1  tier0  test_h2_non_inf_betas_1.0.txt  
..              ...       ...        ...    ...                            ...  
50         0.221274       0.1        0.1  tier0      test_pT_inf_betas_1.0.txt  
51         0.233316       0.1        0.1  tier0      test_pT_inf_betas_1.0.txt  
52         0.233093       0.1        0.1  tier0      test_pT_inf_betas_1.0.txt  
53         0.241143       0.1        0.1  tier0      test_pT_inf_betas_1.0.txt  
54         0.264164       0.1        0.1  tier0      test_pT_inf_betas_1.0.txt  
55         0.283059       0.1        0.1  tier0      test_pT_inf_betas_1.0.txt  
56         0.315101       0.1        0.1  tier0      test_pT_inf_betas_1.0.txt  
57         0.306749       0.1        0.1  tier0      test_pT_inf_betas_1.0.txt  
58         0.318931       0.1        0.1  tier0      test_pT_inf_betas_1.0.txt  
59         0.328600       0.1        0.1  tier0      test_pT_inf_betas_1.0.txt  
60         0.124464       0.1        0.1  tier0  test_pT_non_inf_betas_1.0.txt  
61         0.122845       0.1        0.1  tier0  test_pT_non_inf_betas_1.0.txt  
62         0.135947       0.1        0.1  tier0  test_pT_non_inf_betas_1.0.txt  
63         0.141388       0.1        0.1  tier0  test_pT_non_inf_betas_1.0.txt  
64         0.146849       0.1        0.1  tier0  test_pT_non_inf_betas_1.0.txt  
65         0.147139       0.1        0.1  tier0  test_pT_non_inf_betas_1.0.txt  
66         0.156848       0.1        0.1  tier0  test_pT_non_inf_betas_1.0.txt  
67         0.155471       0.1        0.1  tier0  test_pT_non_inf_betas_1.0.txt  
68         0.163486       0.1        0.1  tier0  test_pT_non_inf_betas_1.0.txt  
69         0.182181       0.1        0.1  tier0  test_pT_non_inf_betas_1.0.txt  
70         0.221274       0.1        0.1  tier0  test_pT_non_inf_betas_1.0.txt  
71         0.233316       0.1        0.1  tier0  test_pT_non_inf_betas_1.0.txt  
72         0.233093       0.1        0.1  tier0  test_pT_non_inf_betas_1.0.txt  
73         0.241143       0.1        0.1  tier0  test_pT_non_inf_betas_1.0.txt  
74         0.264164       0.1        0.1  tier0  test_pT_non_inf_betas_1.0.txt  
75         0.283059       0.1        0.1  tier0  test_pT_non_inf_betas_1.0.txt  
76         0.315101       0.1        0.1  tier0  test_pT_non_inf_betas_1.0.txt  
77         0.306749       0.1        0.1  tier0  test_pT_non_inf_betas_1.0.txt  
78         0.318931       0.1        0.1  tier0  test_pT_non_inf_betas_1.0.txt  
79         0.328600       0.1        0.1  tier0  test_pT_non_inf_betas_1.0.txt  

[80 rows x 19 columns]

Results#

1. Reporting Based on Best Training Performance:#

  • One can report the results based on the best performance on the training data. For example, if a specific combination of hyperparameters yields the highest training performance, report the corresponding test performance.

  • Example code:

    df = divided_result.sort_values(by='Train_best_model', ascending=False)
    print(df.iloc[0].to_markdown())
    

Binary Phenotypes Result Analysis#

You can find the performance quality for binary phenotypes using the following template:

PerformanceBinary

This figure shows the 8 different scenarios that can exist in the results, and the following table explains each scenario.

We classified performance based on the following table:

| Performance Level | Range |
|---|---|
| Low Performance | 0 to 0.5 |
| Moderate Performance | 0.6 to 0.7 |
| High Performance | 0.8 to 1 |
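
As a minimal sketch, one can bin a metric value (for example, AUC for a binary phenotype) into these levels. The classify_performance helper below is hypothetical, and assigning values that fall in the gaps between the published bands (e.g., 0.5 to 0.6) to the lower band is an assumption:

    def classify_performance(value, moderate=0.6, high=0.8):
        # Hypothetical helper: bin a metric into the levels above.
        # Gap values (e.g., 0.5-0.6) go to the lower band here -- an
        # assumption, not part of the published table.
        if value >= high:
            return "High Performance"
        elif value >= moderate:
            return "Moderate Performance"
        else:
            return "Low Performance"

    divided_result["Test_level"] = divided_result["Test_best_model"].apply(classify_performance)
    print(divided_result["Test_level"].value_counts())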

You can match the performance based on the following scenarios:

| Scenario | What's Happening | Implication |
|---|---|---|
| High Test, High Train | The model performs well on both training and test datasets, effectively learning the underlying patterns. | The model is well-tuned, generalizes well, and makes accurate predictions on both datasets. |
| High Test, Moderate Train | The model generalizes well but may not be fully optimized on training data, missing some underlying patterns. | The model is fairly robust but may benefit from further tuning or more training to improve its learning. |
| High Test, Low Train | An unusual scenario, potentially indicating data leakage or overestimation of test performance. | The model’s performance is likely unreliable; investigate potential data issues or random noise. |
| Moderate Test, High Train | The model fits the training data well but doesn’t generalize as effectively, capturing only some test patterns. | The model is slightly overfitting; adjustments may be needed to improve generalization on unseen data. |
| Moderate Test, Moderate Train | The model shows balanced but moderate performance on both datasets, capturing some patterns but missing others. | The model is moderately fitting; further improvements could be made in both training and generalization. |
| Moderate Test, Low Train | The model underperforms on training data and doesn’t generalize well, leading to moderate test performance. | The model may need more complexity, additional features, or better training to improve on both datasets. |
| Low Test, High Train | The model overfits the training data, performing poorly on the test set. | The model doesn’t generalize well; simplifying the model or using regularization may help reduce overfitting. |
| Low Test, Low Train | The model performs poorly on both training and test datasets, failing to learn the data patterns effectively. | The model is underfitting; it may need more complexity, additional features, or more data to improve performance. |
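
The scenario labels above can be attached to each result row with a small sketch, reusing the hypothetical thresholds from classify_performance (0.6 and 0.8 are assumptions matching the binary bands):

    def label_scenario(train, test, moderate=0.6, high=0.8):
        # Hypothetical helper: name the scenario for a train/test pair,
        # matching the rows of the table above.
        def level(v):
            if v >= high:
                return "High"
            elif v >= moderate:
                return "Moderate"
            return "Low"
        return "%s Test, %s Train" % (level(test), level(train))

    divided_result["Scenario"] = [
        label_scenario(tr, te)
        for tr, te in zip(divided_result["Train_best_model"],
                          divided_result["Test_best_model"])
    ]
    print(divided_result["Scenario"].value_counts())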

Recommendations for Publishing Results#

When publishing results, scenarios with moderate training and moderate test performance can be reported for complex phenotypes or diseases. However, results showing high training with moderate test performance, high training with high test performance, or moderate training with high test performance are recommended.

For most phenotypes, results typically fall into the moderate training and moderate test performance category.

Continuous Phenotypes Result Analysis#

You can find the performance quality for continuous phenotypes using the following template:

PerformanceContinous

This figure shows the 8 different scenarios that can exist in the results, and the following table explains each scenario.

We classified performance based on the following table:

| Performance Level | Range |
|---|---|
| Low Performance | 0 to 0.2 |
| Moderate Performance | 0.3 to 0.7 |
| High Performance | 0.8 to 1 |
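
The hypothetical classify_performance helper from the binary section can be reused here by passing the continuous thresholds above (gap values again go to the lower band by assumption):

    # Reuse the hypothetical helper with the continuous bands (0.3 and 0.8).
    divided_result["Test_level"] = divided_result["Test_best_model"].apply(
        lambda v: classify_performance(v, moderate=0.3, high=0.8))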

You can match the performance based on the following scenarios:

| Scenario | What's Happening | Implication |
|---|---|---|
| High Test, High Train | The model performs well on both training and test datasets, effectively learning the underlying patterns. | The model is well-tuned, generalizes well, and makes accurate predictions on both datasets. |
| High Test, Moderate Train | The model generalizes well but may not be fully optimized on training data, missing some underlying patterns. | The model is fairly robust but may benefit from further tuning or more training to improve its learning. |
| High Test, Low Train | An unusual scenario, potentially indicating data leakage or overestimation of test performance. | The model’s performance is likely unreliable; investigate potential data issues or random noise. |
| Moderate Test, High Train | The model fits the training data well but doesn’t generalize as effectively, capturing only some test patterns. | The model is slightly overfitting; adjustments may be needed to improve generalization on unseen data. |
| Moderate Test, Moderate Train | The model shows balanced but moderate performance on both datasets, capturing some patterns but missing others. | The model is moderately fitting; further improvements could be made in both training and generalization. |
| Moderate Test, Low Train | The model underperforms on training data and doesn’t generalize well, leading to moderate test performance. | The model may need more complexity, additional features, or better training to improve on both datasets. |
| Low Test, High Train | The model overfits the training data, performing poorly on the test set. | The model doesn’t generalize well; simplifying the model or using regularization may help reduce overfitting. |
| Low Test, Low Train | The model performs poorly on both training and test datasets, failing to learn the data patterns effectively. | The model is underfitting; it may need more complexity, additional features, or more data to improve performance. |

Recommendations for Publishing Results#

When publishing results, scenarios with moderate training and moderate test performance can be reported for complex phenotypes or diseases. However, results showing high training with moderate test performance, high training with high test performance, or moderate training with high test performance are recommended.

For most continuous phenotypes, results typically fall into the moderate training and moderate test performance category.

2. Reporting Generalized Performance:#

  • One can also report the generalized performance by calculating the difference between the training and test performance and the sum of the training and test performance. Report the hyperparameter combination for which the sum is highest and the difference is minimal.

  • Example code:

    df = divided_result.copy()
    df['Difference'] = abs(df['Train_best_model'] - df['Test_best_model'])
    df['Sum'] = df['Train_best_model'] + df['Test_best_model']
    
    sorted_df = df.sort_values(by=['Sum', 'Difference'], ascending=[False, True])
    print(sorted_df.iloc[0].to_markdown())
    

3. Reporting Hyperparameters Affecting Test and Train Performance:#

  • Find the hyperparameters that have more than one unique value and calculate their correlation with the following columns to understand how they affect performance on the training and test sets (a minimal sketch follows this list):

    • Train_null_model

    • Train_pure_prs

    • Train_best_model

    • Test_pure_prs

    • Test_null_model

    • Test_best_model
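
As a minimal sketch, assuming divided_result is the results DataFrame built earlier (the full version, including one-hot encoding of string hyperparameters, appears in the code below):

    # Correlate the numeric hyperparameters that vary with the performance columns.
    perf_cols = ['Train_null_model', 'Train_pure_prs', 'Train_best_model',
                 'Test_pure_prs', 'Test_null_model', 'Test_best_model']
    varying = [c for c in divided_result.columns
               if c not in perf_cols
               and divided_result[c].nunique() > 1
               and pd.api.types.is_numeric_dtype(divided_result[c])]
    print(divided_result[varying + perf_cols].corr().loc[varying, perf_cols])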

4. Other Analysis#

  1. Once you have the results, you can examine how the hyperparameters affect model performance.

  2. Analyses such as checking for overfitting and underfitting can be performed as well.

  3. The way you report the results can vary.

  4. Results can be visualized, and other patterns in the data can be explored.

import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
%matplotlib notebook

# Enable interactive plotting so figures render inside the notebook.
plt.ion()

df = divided_result.sort_values(by='Train_best_model', ascending=False)
print("1. Reporting Based on Best Training Performance:\n")
print(df.iloc[0])

df = divided_result.copy()

# Plot Train and Test best models against p-values
plt.figure(figsize=(10, 6))
plt.plot(df['pvalue'], df['Train_best_model'], label='Train_best_model', marker='o', color='royalblue')
plt.plot(df['pvalue'], df['Test_best_model'], label='Test_best_model', marker='o', color='darkorange')

# Highlight the p-value with the best training performance
best_index = df['Train_best_model'].idxmax()
best_pvalue = df.loc[best_index, 'pvalue']
best_train = df.loc[best_index, 'Train_best_model']
best_test = df.loc[best_index, 'Test_best_model']

# Use dark colors for the circles
plt.scatter(best_pvalue, best_train, color='darkred', s=100, label='Best Performance (Train)', edgecolor='black', zorder=5)
plt.scatter(best_pvalue, best_test, color='darkblue', s=100, label='Best Performance (Test)', edgecolor='black', zorder=5)

# Annotate the best performance with p-value, train, and test values
plt.text(best_pvalue, best_train, 'p=%0.4g\nTrain=%0.4g' % (best_pvalue, best_train), ha='right', va='bottom', fontsize=9, color='darkred')
plt.text(best_pvalue, best_test, 'p=%0.4g\nTest=%0.4g' % (best_pvalue, best_test), ha='right', va='top', fontsize=9, color='darkblue')

# Calculate Difference and Sum
df['Difference'] = abs(df['Train_best_model'] - df['Test_best_model'])
df['Sum'] = df['Train_best_model'] + df['Test_best_model']

# Sort the DataFrame
sorted_df = df.sort_values(by=['Sum', 'Difference'], ascending=[False, True])

# Highlight the general performance
general_index = sorted_df.index[0]
general_pvalue = sorted_df.loc[general_index, 'pvalue']
general_train = sorted_df.loc[general_index, 'Train_best_model']
general_test = sorted_df.loc[general_index, 'Test_best_model']

plt.scatter(general_pvalue, general_train, color='darkgreen', s=150, label='General Performance (Train)', edgecolor='black', zorder=6)
plt.scatter(general_pvalue, general_test, color='darkorange', s=150, label='General Performance (Test)', edgecolor='black', zorder=6)

# Annotate the general performance with p-value, train, and test values
plt.text(general_pvalue, general_train, 'p=%0.4g\nTrain=%0.4g' % (general_pvalue, general_train), ha='right', va='bottom', fontsize=9, color='darkgreen')
plt.text(general_pvalue, general_test, 'p=%0.4g\nTest=%0.4g' % (general_pvalue, general_test), ha='right', va='top', fontsize=9, color='darkorange')

# Add labels and legend
plt.xlabel('p-value')
plt.ylabel('Model Performance')
plt.title('Train vs Test Best Models')
plt.legend()
plt.show()

print("2. Reporting Generalized Performance:\n")
df = divided_result.copy()
df['Difference'] = abs(df['Train_best_model'] - df['Test_best_model'])
df['Sum'] = df['Train_best_model'] + df['Test_best_model']
sorted_df = df.sort_values(by=['Sum', 'Difference'], ascending=[False, True])
print(sorted_df.iloc[0])

print("3. Reporting the correlation of hyperparameters and the performance of 'Train_null_model', 'Train_pure_prs', 'Train_best_model', 'Test_pure_prs', 'Test_null_model', and 'Test_best_model':\n")

print("3. For string hyperparameters, we used one-hot encoding to find the correlation between string hyperparameters and 'Train_null_model', 'Train_pure_prs', 'Train_best_model', 'Test_pure_prs', 'Test_null_model', and 'Test_best_model'.")

print("3. We performed this analysis for those hyperparameters that have more than one unique value.")

correlation_columns = [
    'Train_null_model', 'Train_pure_prs', 'Train_best_model',
    'Test_pure_prs', 'Test_null_model', 'Test_best_model'
]

hyperparams = [col for col in divided_result.columns if len(divided_result[col].unique()) > 1]
hyperparams = list(set(hyperparams + correlation_columns))

# Separate numeric and string columns
numeric_hyperparams = [col for col in hyperparams if pd.api.types.is_numeric_dtype(divided_result[col])]
string_hyperparams = [col for col in hyperparams if pd.api.types.is_string_dtype(divided_result[col])]

# Encode string columns using one-hot encoding
divided_result_encoded = pd.get_dummies(divided_result, columns=string_hyperparams)

# Combine numeric hyperparams with the new one-hot encoded columns
encoded_columns = [col for col in divided_result_encoded.columns if col.startswith(tuple(string_hyperparams))]
hyperparams = numeric_hyperparams + encoded_columns

# Calculate correlations
correlations = divided_result_encoded[hyperparams].corr()

# Display correlation of hyperparameters with train/test performance columns
hyperparam_correlations = correlations.loc[hyperparams, correlation_columns]
hyperparam_correlations = hyperparam_correlations.fillna(0)

# Plotting the correlation heatmap
plt.figure(figsize=(12, 8))
ax = sns.heatmap(hyperparam_correlations, annot=True, cmap='viridis', fmt='.2f', cbar=True)
ax.set_xticklabels(ax.get_xticklabels(), rotation=90, ha='right')

# Rotate y-axis labels to horizontal
#ax.set_yticklabels(ax.get_yticklabels(), rotation=0, va='center')

plt.title('Correlation of Hyperparameters with Train/Test Performance')
plt.show()

sns.set_style("whitegrid")  # Choose your preferred style
pairplot = sns.pairplot(divided_result_encoded[hyperparams], hue='Test_best_model', palette='viridis')

# Adjust the figure size
pairplot.fig.set_size_inches(15, 15)  # You can adjust the size as needed

for ax in pairplot.axes.flatten():
    ax.set_xlabel(ax.get_xlabel(), rotation=90, ha='right')  # X-axis labels vertical
    #ax.set_ylabel(ax.get_ylabel(), rotation=0, va='bottom')  # Y-axis labels horizontal

# Show the plot
plt.show()
1. Reporting Based on Best Training Performance:

clump_p1                                        1
clump_r2                                      0.1
clump_kb                                      200
p_window_size                                 200
p_slide_size                                   50
p_LD_threshold                               0.25
pvalue                                          1
pvalue_AnnoPred                                 1
numberofpca                                     6
Train_pure_prs                        1.47507e-06
Train_null_model                          0.23001
Train_best_model                         0.362624
Test_pure_prs                         1.81997e-06
Test_null_model                          0.118692
Test_best_model                            0.3286
l1weight                                      0.1
tempalpha                                     0.1
Tier                                        tier0
datafile            test_pT_non_inf_betas_1.0.txt
Name: 79, dtype: object
2. Reporting Generalized Performance:

clump_p1                                    1
clump_r2                                  0.1
clump_kb                                  200
p_window_size                             200
p_slide_size                               50
p_LD_threshold                           0.25
pvalue                                      1
pvalue_AnnoPred                             1
numberofpca                                 6
Train_pure_prs                    1.47507e-06
Train_null_model                      0.23001
Train_best_model                     0.362624
Test_pure_prs                     1.81997e-06
Test_null_model                      0.118692
Test_best_model                        0.3286
l1weight                                  0.1
tempalpha                                 0.1
Tier                                    tier0
datafile            test_h2_inf_betas_1.0.txt
Difference                          0.0340243
Sum                                  0.691225
Name: 19, dtype: object
3. Reporting the correlation of hyperparameters and the performance of 'Train_null_model', 'Train_pure_prs', 'Train_best_model', 'Test_pure_prs', 'Test_null_model', and 'Test_best_model':

3. For string hyperparameters, we used one-hot encoding to find the correlation between string hyperparameters and 'Train_null_model', 'Train_pure_prs', 'Train_best_model', 'Test_pure_prs', 'Test_null_model', and 'Test_best_model'.
3. We performed this analysis for those hyperparameters that have more than one unique value.