BOLT-LMM#

In this notebook, we will use BOLT-LMM to calculate the Polygenic Risk Score (PRS).

Installation#

Note: BOLT needs to be installed or placed in the same directory as this notebook.

  1. Download BOLT and extract the files:

    wget https://storage.googleapis.com/broad-alkesgroup-public/BOLT-LMM/downloads/BOLT-LMM_v2.4.1.tar.gz
    tar -xvf BOLT-LMM_v2.4.1.tar.gz
    
  2. Copy all BOLT files to the current working directory:

    cd BOLT-LMM_v2.4.1/
    cp -r * ../
    

Documentation is available in the BOLT-LMM Manual.

GWAS for BOLT#

BOLT does not accept a GWAS summary file directly. Instead, it computes the association statistics itself from the individual-level genotype data.

Note: When using BOLT, make sure the covariate columns share a specific prefix (here, COV_), so that they can be selected with BOLT's array syntax. If you do not specify a covariate file, the code still works; simply comment out the covariate line. We renamed the covariates to make them consistent with the prefix required by BOLT.

Another important point is that BOLT does not use a GWAS summary file; it estimates effect sizes from the individual-level genotype data. These effect sizes are then used to calculate the PRS with Plink for both the training and test data.
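
After renaming, the covariate file passed to BOLT (written as train_data.COV_PCA later in this notebook) starts with FID and IID, followed by sex and the principal components under the common COV_ prefix. The first rows look like this (tab-separated in the actual file; values taken from the run below):

    FID      IID      COV_1  COV_2      COV_3     ...
    HG00097  HG00097  2      -0.001453  0.084820  ...
    HG00099  HG00099  2      -0.002017  0.089514  ...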

GWAS file processing for BOLT for Binary Phenotypes#

When the effect size relates to disease risk and is thus given as an odds ratio (OR) rather than BETA (for continuous traits), the PRS is computed as a product of ORs. To simplify this calculation, take the natural logarithm of each OR so that the PRS can be computed as a sum instead.
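
As a minimal sketch of this equivalence (with made-up ORs and allele dosages, purely for illustration), the product of ORs and the sum of log-ORs give the same score on the log scale:

    import numpy as np

    # Toy values, assumed purely for illustration.
    odds_ratios = np.array([1.20, 0.85, 1.05])  # per-SNP odds ratios
    dosages = np.array([2, 1, 0])               # effect-allele counts

    # Multiplicative PRS: product of ORs raised to the allele dosage.
    prs_multiplicative = np.prod(odds_ratios ** dosages)

    # Additive PRS on the log scale: sum of dosage * ln(OR).
    prs_additive = np.sum(dosages * np.log(odds_ratios))

    assert np.isclose(np.log(prs_multiplicative), prs_additive)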

Sample#

When executing BOLT, point LD_LIBRARY_PATH at the conda environment's lib directory (as in the os.system call below):

bolt_command = [
    './bolt',
    '--bfile=' + traindirec + os.sep + newtrainfilename + ".clumped.pruned",
    '--phenoFile=' + traindirec + os.sep + trainfilename + ".PHENO",
    '--phenoCol=PHENO',
    mm,
    '--LDscoresFile=tables/LDSCORE.1000G_EUR.tab.gz',
    '--covarFile=' + traindirec + os.sep + trainfilename + ".COV_PCA",
    #'--covarCol=' + "Sex",
    '--qCovarCol=COV_{1:' + str(len(columns) - 2) + '}',
    #'--statsFile=' + filedirec + os.sep + filedirec2 + "." + mm.replace("-", ""),
    '--statsFile=' + traindirec + os.sep + filedirec + "." + mm.replace("-", "") + "_stat",
    #'--predBetasFile=' + filedirec + os.sep + filedirec + "." + mm.replace("-", "") + "_pred"
]

print(" ".join(bolt_command))

os.system("LD_LIBRARY_PATH=/data/ascher01/uqmmune1/miniconda3/envs/genetics/lib/ " + " ".join(bolt_command))

Possible error#

ERROR: Heritability estimate is close to 0; LMM may not correct confounding
       Instead, use PC-corrected linear/logistic regression on unrelateds

Possible solution#

Include more samples and remove related individuals using Plink.
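As a sketch, related individuals can be removed with Plink's --rel-cutoff filter before rerunning BOLT (the 0.125 threshold and the output names below are assumptions, not part of the original pipeline):

    import subprocess

    # Keep a maximal subset of individuals whose pairwise relatedness is below
    # the cutoff; plink writes the retained IDs to <out>.rel.id.
    subprocess.run(["./plink", "--bfile", "SampleData1/Fold_0/train_data.QC",
                    "--rel-cutoff", "0.125",
                    "--out", "SampleData1/Fold_0/train_data.unrelated"])

    # Rebuild the binary fileset using only the unrelated individuals.
    subprocess.run(["./plink", "--bfile", "SampleData1/Fold_0/train_data.QC",
                    "--keep", "SampleData1/Fold_0/train_data.unrelated.rel.id",
                    "--make-bed",
                    "--out", "SampleData1/Fold_0/train_data.QC.unrelated"])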

print("GWAS file is not required when using BOLt! It generates it's own GWAS file include BETA estimates!")
GWAS file is not required when using BOLt! It generates it's own GWAS file include BETA estimates!

Define Hyperparameters#

Define hyperparameters to be optimized and set initial values.

Extract Valid SNPs from Clumped File#

On Linux, the awk command is sufficient. On Windows, gawk is required; you can download it from GnuWin32 (https://sourceforge.net/projects/gnuwin32/) and place it in the same directory.
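
The extraction itself is a one-line awk call (the same command used in the helper function below, assuming traindirec and trainfilename are set as in the Execution Path section); it writes the SNP IDs from column 3 of the .clumped file, skipping the header:

    command = f"awk 'NR!=1{{print $3}}' {traindirec}{os.sep}{trainfilename}.clumped > {traindirec}{os.sep}{trainfilename}.valid.snp"
    os.system(command)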

Execution Path#

At this stage, we have the genotype training data newtrainfilename = "train_data.QC" and genotype test data newtestfilename = "test_data.QC".

We modified the following variables:

  1. filedirec = "SampleData1" or filedirec = sys.argv[1]

  2. foldnumber = "0" or foldnumber = sys.argv[2] for HPC.

Only these two variables need to be modified to run the code for a specific dataset and a specific fold. Although the code can be executed separately for each fold on an HPC and separately for each dataset, we recommend running it for multiple diseases and one fold at a time.
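For example, on an HPC you might run the code once per dataset and fold (the script name BOLT-LMM.py is a placeholder for however this notebook is exported):

    python BOLT-LMM.py SampleData1 0

Here SampleData1 is read as sys.argv[1] (the dataset directory) and 0 as sys.argv[2] (the fold number).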

P-values#

PRS calculation relies on P-values: SNPs with low P-values, indicating a strong association with the trait of interest, are included in the calculation.

You can modify the code below to consider a specific set of P-values and save the file in the same format.

We considered the following parameters:

  • Minimum P-value: 1e-10

  • Maximum P-value: 1.0

  • Minimum exponent: 10 (so the smallest P-value is 1e-10)

  • Number of intervals: 100 (the number of thresholds to generate)

The code generates an array of logarithmically spaced P-values:

import numpy as np
import os

minimumpvalue = 10  # Minimum exponent for P-values
numberofintervals = 100  # Number of intervals to be considered

allpvalues = np.logspace(-minimumpvalue, 0, numberofintervals, endpoint=True)  # Generating an array of logarithmically spaced P-values

print("Minimum P-value:", allpvalues[0])
print("Maximum P-value:", allpvalues[-1])

count = 1
with open(os.path.join(folddirec, 'range_list'), 'w') as file:
    for value in allpvalues:
        file.write(f'pv_{value} 0 {value}\n')  # Writing range information to the 'range_list' file
        count += 1

pvaluefile = os.path.join(folddirec, 'range_list')

In this code:

  • minimumpvalue defines the minimum exponent for P-values.

  • numberofintervals specifies how many intervals to consider.

  • allpvalues generates an array of P-values spaced logarithmically.

  • The script writes these P-values to a file named range_list in the specified directory.
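
The resulting range_list file follows the three-column format expected by Plink's --q-score-range option (range name, lower P-value bound, upper P-value bound), one threshold per line. With the settings above it begins and ends as follows, with the 98 intermediate thresholds spaced logarithmically in between:

    pv_1e-10 0 1e-10
    ...
    pv_1.0 0 1.0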

import os
import subprocess
import sys

import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score, confusion_matrix
from statsmodels.stats.contingency_tables import mcnemar

def check_phenotype_is_binary_or_continous(filedirec):
    # Read the processed, quality-controlled .fam file for the phenotype.
    df = pd.read_csv(filedirec+os.sep+filedirec+'_QC.fam',sep="\s+",header=None)
    # PLINK stores the phenotype in the 6th column; two unique values => binary.
    column_values = df[5].unique()

    if len(column_values) == 2:
        return "Binary"
    else:
        return "Continuous"
def create_directory(directory):
    """Function to create a directory if it doesn't exist."""
    if not os.path.exists(directory):  # Checking if the directory doesn't exist
        os.makedirs(directory)  # Creating the directory if it doesn't exist
    return directory  # Returning the created or existing directory

#filedirec = sys.argv[1]
filedirec = "SampleData1"

#foldnumber = sys.argv[2]
foldnumber = "0"  # Setting 'foldnumber' to "0"

folddirec = filedirec + os.sep + "Fold_" + foldnumber  # Creating a directory path for the specific fold
trainfilename = "train_data"  # Setting the name of the training data file
newtrainfilename = "train_data.QC"  # Setting the name of the new training data file

testfilename = "test_data"  # Setting the name of the test data file
newtestfilename = "test_data.QC"  # Setting the name of the new test data file

# Number of PCA to be included as a covariate.
numberofpca = ["6"]  # Setting the number of PCA components to be included

# Clumping parameters.
clump_p1 = [1]  # List containing clump parameter 'p1'
clump_r2 = [0.1]  # List containing clump parameter 'r2'
clump_kb = [200]  # List containing clump parameter 'kb'

# Pruning parameters.
p_window_size = [200]  # List containing pruning parameter 'window_size'
p_slide_size = [50]  # List containing pruning parameter 'slide_size'
p_LD_threshold = [0.25]  # List containing pruning parameter 'LD_threshold'

# Kindly note that the number of p-values to be considered varies, and the actual p-value depends on the dataset as well.
# We will specify the range list here.

minimumpvalue = 10  # Minimum p-value in exponent
numberofintervals = 20  # Number of intervals to be considered
allpvalues = np.logspace(-minimumpvalue, 0, numberofintervals, endpoint=True)  # Generating an array of logarithmically spaced p-values



count = 1
with open(folddirec + os.sep + 'range_list', 'w') as file:
    for value in allpvalues:
        file.write(f'pv_{value} 0 {value}\n')  # Writing range information to the 'range_list' file
        count = count + 1

pvaluefile = folddirec + os.sep + 'range_list'

# Initializing an empty DataFrame with specified column names
prs_result = pd.DataFrame(columns=["clump_p1", "clump_r2", "clump_kb", "p_window_size", "p_slide_size", "p_LD_threshold",
                                   "pvalue", "numberofpca","numberofvariants","Train_pure_prs", "Train_null_model", "Train_best_model",
                                   "Test_pure_prs", "Test_null_model", "Test_best_model"])

Define Helper Functions#

  1. Perform Clumping and Pruning

  2. Calculate PCA Using Plink

  3. Fit Binary Phenotype and Save Results

  4. Fit Continuous Phenotype and Save Results

import os
import subprocess
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import explained_variance_score


def perform_clumping_and_pruning_on_individual_data(traindirec, newtrainfilename,numberofpca, p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile):
    
    command = [
    "./plink",
    "--maf", "0.0001",

    "--bfile", traindirec+os.sep+newtrainfilename,
    "--indep-pairwise", p1_val, p2_val, p3_val,
    "--out", traindirec+os.sep+trainfilename
    ]
    subprocess.run(command)
  
    # First perform pruning, and then perform clumping on the pruned SNPs.

    command = [
    "./plink",
    "--bfile", traindirec+os.sep+newtrainfilename,
    "--clump-p1", c1_val,
    "--extract", traindirec+os.sep+trainfilename+".prune.in",
    "--clump-r2", c2_val,
    "--clump-kb", c3_val,
    "--clump", filedirec+os.sep+filedirec+".txt",
    "--clump-snp-field", "SNP",
    "--clump-field", "P",
    "--out", traindirec+os.sep+trainfilename
    ]    
    subprocess.run(command)

    # Extract the valid SNPs from the clumped file.
    # On Linux, the awk command below is sufficient.
    # On Windows, gawk is required: download it from
    # https://sourceforge.net/projects/gnuwin32/ and place it in the same directory.
    #os.system("gawk "+"\""+"NR!=1{print $3}"+"\"  "+ traindirec+os.sep+trainfilename+".clumped >  "+traindirec+os.sep+trainfilename+".valid.snp")
    #print("gawk "+"\""+"NR!=1{print $3}"+"\"  "+ traindirec+os.sep+trainfilename+".clumped >  "+traindirec+os.sep+trainfilename+".valid.snp")

    #Linux:
    command = f"awk 'NR!=1{{print $3}}' {traindirec}{os.sep}{trainfilename}.clumped > {traindirec}{os.sep}{trainfilename}.valid.snp"
    os.system(command)
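
    # Portable alternative (a sketch, assuming pandas parses the whitespace-
    # delimited .clumped output the same way awk does); this avoids the
    # awk/gawk dependency on Windows:
    #clumped = pd.read_csv(traindirec+os.sep+trainfilename+".clumped", sep="\s+")
    #clumped["SNP"].to_csv(traindirec+os.sep+trainfilename+".valid.snp", index=False, header=False)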
    
    command = [
    "./plink",
    "--make-bed",
    "--bfile", traindirec+os.sep+newtrainfilename,
    "--indep-pairwise", p1_val, p2_val, p3_val,
    "--extract", traindirec+os.sep+trainfilename+".valid.snp",
    "--out", traindirec+os.sep+newtrainfilename+".clumped.pruned"
    ]
    subprocess.run(command)
    
    command = [
    "./plink",
    "--make-bed",
    "--bfile", traindirec+os.sep+testfilename,
    "--indep-pairwise", p1_val, p2_val, p3_val,
    "--extract", traindirec+os.sep+trainfilename+".valid.snp",
    "--out", traindirec+os.sep+testfilename+".clumped.pruned"
    ]
    subprocess.run(command)    
    
    
 
def calculate_pca_for_traindata_testdata_for_clumped_pruned_snps(traindirec, newtrainfilename,p):
    
    # Calculate the PCA for both the training and test data using the same
    # final set of SNPs. PCs are calculated after clumping and pruning.
    command = [
        "./plink",
        "--bfile", folddirec+os.sep+testfilename+".clumped.pruned",
        # Select the final variants after clumping and pruning.
        "--extract", traindirec+os.sep+trainfilename+".valid.snp",
        "--pca", p,
        "--out", folddirec+os.sep+testfilename
    ]
    subprocess.run(command)


    command = [
    "./plink",
        "--bfile", traindirec+os.sep+newtrainfilename+".clumped.pruned",
        # Select the final variants after clumping and pruning.        
        "--extract", traindirec+os.sep+trainfilename+".valid.snp",
        "--pca", p,
        "--out", traindirec+os.sep+trainfilename
    ]
    subprocess.run(command)

# This function fits the binary phenotype model on the PRS.
def fit_binary_phenotype_on_PRS(traindirec, newtrainfilename,p,mm, p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile):
    threshold_values = allpvalues

    # Merge the covariates, pca and phenotypes.
    tempphenotype_train = pd.read_table(traindirec+os.sep+newtrainfilename+".clumped.pruned"+".fam", sep="\s+",header=None)
    phenotype_train = pd.DataFrame()
    phenotype_train["Phenotype"] = tempphenotype_train[5].values
    pcs_train = pd.read_table(traindirec+os.sep+trainfilename+".eigenvec", sep="\s+",header=None, names=["FID", "IID"] + [f"PC{str(i)}" for i in range(1, int(p)+1)])
    covariate_train = pd.read_table(traindirec+os.sep+trainfilename+".cov",sep="\s+")
    covariate_train.fillna(0, inplace=True)
    covariate_train = covariate_train[covariate_train["FID"].isin(pcs_train["FID"].values) & covariate_train["IID"].isin(pcs_train["IID"].values)]
    covariate_train['FID'] = covariate_train['FID'].astype(str)
    pcs_train['FID'] = pcs_train['FID'].astype(str)
    covariate_train['IID'] = covariate_train['IID'].astype(str)
    pcs_train['IID'] = pcs_train['IID'].astype(str)
    covandpcs_train = pd.merge(covariate_train, pcs_train, on=["FID","IID"])
    covandpcs_train.fillna(0, inplace=True)


    ## Scale the covariates!
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.metrics import explained_variance_score
    scaler = MinMaxScaler()
    normalized_values_train = scaler.fit_transform(covandpcs_train.iloc[:, 2:])
    #covandpcs_train.iloc[:, 2:] = normalized_values_test 
    
    
    tempphenotype_test = pd.read_table(traindirec+os.sep+testfilename+".clumped.pruned"+".fam", sep="\s+",header=None)
    phenotype_test= pd.DataFrame()
    phenotype_test["Phenotype"] = tempphenotype_test[5].values
    pcs_test = pd.read_table(traindirec+os.sep+testfilename+".eigenvec", sep="\s+",header=None, names=["FID", "IID"] + [f"PC{str(i)}" for i in range(1, int(p)+1)])
    covariate_test = pd.read_table(traindirec+os.sep+testfilename+".cov",sep="\s+")
    covariate_test.fillna(0, inplace=True)
    covariate_test = covariate_test[covariate_test["FID"].isin(pcs_test["FID"].values) & covariate_test["IID"].isin(pcs_test["IID"].values)]
    covariate_test['FID'] = covariate_test['FID'].astype(str)
    pcs_test['FID'] = pcs_test['FID'].astype(str)
    covariate_test['IID'] = covariate_test['IID'].astype(str)
    pcs_test['IID'] = pcs_test['IID'].astype(str)
    covandpcs_test = pd.merge(covariate_test, pcs_test, on=["FID","IID"])
    covandpcs_test.fillna(0, inplace=True)
    normalized_values_test  = scaler.transform(covandpcs_test.iloc[:, 2:])
    #covandpcs_test.iloc[:, 2:] = normalized_values_test     
    
    
    
    
    tempalphas = [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]
    l1weights = [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]

    # For speed, restrict the regularization grid to a single value.
    tempalphas = [0.1]
    l1weights = [0.1]

    # PLINK .fam phenotype coding is 1 = control, 2 = case; recode to 0/1 for logistic regression.
    phenotype_train["Phenotype"] = phenotype_train["Phenotype"].replace({1: 0, 2: 1}) 
    phenotype_test["Phenotype"] = phenotype_test["Phenotype"].replace({1: 0, 2: 1})
      
    for tempalpha in tempalphas:
        for l1weight in l1weights:

            
            try:
                null_model =  sm.Logit(phenotype_train["Phenotype"], sm.add_constant(covandpcs_train.iloc[:, 2:])).fit_regularized(alpha=tempalpha, L1_wt=l1weight)
                #null_model =  sm.Logit(phenotype_train["Phenotype"], sm.add_constant(covandpcs_train.iloc[:, 2:])).fit()
            
            except:
                print("XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX")
                continue

            train_null_predicted = null_model.predict(sm.add_constant(covandpcs_train.iloc[:, 2:]))
            
            from sklearn.metrics import roc_auc_score, confusion_matrix
            from sklearn.metrics import r2_score
            
            test_null_predicted = null_model.predict(sm.add_constant(covandpcs_test.iloc[:, 2:]))
            
           
            
            global prs_result 
            for i in threshold_values:
                try:
                    prs_train = pd.read_table(traindirec+os.sep+Name+os.sep+"train_data.pv_"+f"{i}.profile", sep="\s+", usecols=["FID", "IID", "SCORE"])
                except:
                    continue

                prs_train['FID'] = prs_train['FID'].astype(str)
                prs_train['IID'] = prs_train['IID'].astype(str)
                try:
                    prs_test = pd.read_table(traindirec+os.sep+Name+os.sep+"test_data.pv_"+f"{i}.profile", sep="\s+", usecols=["FID", "IID", "SCORE"])
                except:
                    continue
                prs_test['FID'] = prs_test['FID'].astype(str)
                prs_test['IID'] = prs_test['IID'].astype(str)
                pheno_prs_train = pd.merge(covandpcs_train, prs_train, on=["FID", "IID"])
                pheno_prs_test = pd.merge(covandpcs_test, prs_test, on=["FID", "IID"])
        
                try:
                    model = sm.Logit(phenotype_train["Phenotype"], sm.add_constant(pheno_prs_train.iloc[:, 2:])).fit_regularized(alpha=tempalpha, L1_wt=l1weight)
                    #model = sm.Logit(phenotype_train["Phenotype"], sm.add_constant(pheno_prs_train.iloc[:, 2:])).fit()
                
                except:
                    continue


                
                train_best_predicted = model.predict(sm.add_constant(pheno_prs_train.iloc[:, 2:]))    
 

                test_best_predicted = model.predict(sm.add_constant(pheno_prs_test.iloc[:, 2:])) 
 
        
                from sklearn.metrics import roc_auc_score, confusion_matrix

                prs_result = prs_result._append({
                    "clump_p1": c1_val,
                    "clump_r2": c2_val,
                    "clump_kb": c3_val,
                    "p_window_size": p1_val,
                    "p_slide_size": p2_val,
                    "p_LD_threshold": p3_val,
                    "pvalue": i,
                    "numberofpca":p, 

                    "tempalpha":str(tempalpha),
                    "l1weight":str(l1weight),
                    "numberofvariants": len(pd.read_csv(traindirec+os.sep+newtrainfilename+".clumped.pruned.bim")),
                    
                     "BOLTmodel":mm,

                    "Train_pure_prs":roc_auc_score(phenotype_train["Phenotype"].values,prs_train['SCORE'].values),
                    "Train_null_model":roc_auc_score(phenotype_train["Phenotype"].values,train_null_predicted.values),
                    "Train_best_model":roc_auc_score(phenotype_train["Phenotype"].values,train_best_predicted.values),
                    
                    "Test_pure_prs":roc_auc_score(phenotype_test["Phenotype"].values,prs_test['SCORE'].values),
                    "Test_null_model":roc_auc_score(phenotype_test["Phenotype"].values,test_null_predicted.values),
                    "Test_best_model":roc_auc_score(phenotype_test["Phenotype"].values,test_best_predicted.values),
                    
                }, ignore_index=True)

          
                prs_result.to_csv(traindirec+os.sep+Name+os.sep+"Results.csv",index=False)
     
    return

# This function fits the continuous phenotype model on the PRS.
def fit_continous_phenotype_on_PRS(traindirec, newtrainfilename,p,mm, p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile):
    threshold_values = allpvalues

    # Merge the covariates, pca and phenotypes.
    tempphenotype_train = pd.read_table(traindirec+os.sep+newtrainfilename+".clumped.pruned"+".fam", sep="\s+",header=None)
    phenotype_train = pd.DataFrame()
    phenotype_train["Phenotype"] = tempphenotype_train[5].values
    pcs_train = pd.read_table(traindirec+os.sep+trainfilename+".eigenvec", sep="\s+",header=None, names=["FID", "IID"] + [f"PC{str(i)}" for i in range(1, int(p)+1)])
    covariate_train = pd.read_table(traindirec+os.sep+trainfilename+".cov",sep="\s+")
    covariate_train.fillna(0, inplace=True)
    covariate_train = covariate_train[covariate_train["FID"].isin(pcs_train["FID"].values) & covariate_train["IID"].isin(pcs_train["IID"].values)]
    covariate_train['FID'] = covariate_train['FID'].astype(str)
    pcs_train['FID'] = pcs_train['FID'].astype(str)
    covariate_train['IID'] = covariate_train['IID'].astype(str)
    pcs_train['IID'] = pcs_train['IID'].astype(str)
    covandpcs_train = pd.merge(covariate_train, pcs_train, on=["FID","IID"])
    covandpcs_train.fillna(0, inplace=True)


    ## Scale the covariates!
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.metrics import explained_variance_score
    scaler = MinMaxScaler()
    normalized_values_train = scaler.fit_transform(covandpcs_train.iloc[:, 2:])
    #covandpcs_train.iloc[:, 2:] = normalized_values_test 
    
    tempphenotype_test = pd.read_table(traindirec+os.sep+testfilename+".clumped.pruned"+".fam", sep="\s+",header=None)
    phenotype_test= pd.DataFrame()
    phenotype_test["Phenotype"] = tempphenotype_test[5].values
    pcs_test = pd.read_table(traindirec+os.sep+testfilename+".eigenvec", sep="\s+",header=None, names=["FID", "IID"] + [f"PC{str(i)}" for i in range(1, int(p)+1)])
    covariate_test = pd.read_table(traindirec+os.sep+testfilename+".cov",sep="\s+")
    covariate_test.fillna(0, inplace=True)
    covariate_test = covariate_test[covariate_test["FID"].isin(pcs_test["FID"].values) & covariate_test["IID"].isin(pcs_test["IID"].values)]
    covariate_test['FID'] = covariate_test['FID'].astype(str)
    pcs_test['FID'] = pcs_test['FID'].astype(str)
    covariate_test['IID'] = covariate_test['IID'].astype(str)
    pcs_test['IID'] = pcs_test['IID'].astype(str)
    covandpcs_test = pd.merge(covariate_test, pcs_test, on=["FID","IID"])
    covandpcs_test.fillna(0, inplace=True)
    normalized_values_test  = scaler.transform(covandpcs_test.iloc[:, 2:])
    #covandpcs_test.iloc[:, 2:] = normalized_values_test     
    
    
    
    
    tempalphas = [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]
    l1weights = [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]

    # For speed, restrict the regularization grid to a single value.
    tempalphas = [0.1]
    l1weights = [0.1]

    #phenotype_train["Phenotype"] = phenotype_train["Phenotype"].replace({1: 0, 2: 1}) 
    #phenotype_test["Phenotype"] = phenotype_test["Phenotype"].replace({1: 0, 2: 1})
      
    for tempalpha in tempalphas:
        for l1weight in l1weights:

            
            try:
                #null_model =  sm.OLS(phenotype_train["Phenotype"], sm.add_constant(covandpcs_train.iloc[:, 2:])).fit_regularized(alpha=tempalpha, L1_wt=l1weight)
                null_model =  sm.OLS(phenotype_train["Phenotype"], sm.add_constant(covandpcs_train.iloc[:, 2:])).fit()
                #null_model =  sm.OLS(phenotype_train["Phenotype"], sm.add_constant(covandpcs_train.iloc[:, 2:])).fit()
            except:
                print("XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX")
                continue

            train_null_predicted = null_model.predict(sm.add_constant(covandpcs_train.iloc[:, 2:]))
            
            from sklearn.metrics import roc_auc_score, confusion_matrix
            from sklearn.metrics import r2_score
            
            test_null_predicted = null_model.predict(sm.add_constant(covandpcs_test.iloc[:, 2:]))
            
            
            
            global prs_result 
            for i in threshold_values:
                try:
                    prs_train = pd.read_table(traindirec+os.sep+Name+os.sep+"train_data.pv_"+f"{i}.profile", sep="\s+", usecols=["FID", "IID", "SCORE"])
                except:
                    continue

                prs_train['FID'] = prs_train['FID'].astype(str)
                prs_train['IID'] = prs_train['IID'].astype(str)
                try:
                    prs_test = pd.read_table(traindirec+os.sep+Name+os.sep+"test_data.pv_"+f"{i}.profile", sep="\s+", usecols=["FID", "IID", "SCORE"])
                except:
                    continue
                prs_test['FID'] = prs_test['FID'].astype(str)
                prs_test['IID'] = prs_test['IID'].astype(str)
                pheno_prs_train = pd.merge(covandpcs_train, prs_train, on=["FID", "IID"])
                pheno_prs_test = pd.merge(covandpcs_test, prs_test, on=["FID", "IID"])
        
                try:
                    #model = sm.OLS(phenotype_train["Phenotype"], sm.add_constant(pheno_prs_train.iloc[:, 2:])).fit_regularized(alpha=tempalpha, L1_wt=l1weight)
                    model = sm.OLS(phenotype_train["Phenotype"], sm.add_constant(pheno_prs_train.iloc[:, 2:])).fit()
                
                except:
                    continue


                
                train_best_predicted = model.predict(sm.add_constant(pheno_prs_train.iloc[:, 2:]))    
                test_best_predicted = model.predict(sm.add_constant(pheno_prs_test.iloc[:, 2:])) 
 
        
                from sklearn.metrics import roc_auc_score, confusion_matrix

                prs_result = prs_result._append({
                    "clump_p1": c1_val,
                    "clump_r2": c2_val,
                    "clump_kb": c3_val,
                    "p_window_size": p1_val,
                    "p_slide_size": p2_val,
                    "p_LD_threshold": p3_val,
                    "pvalue": i,
                    "numberofpca":p, 

                    "tempalpha":str(tempalpha),
                    "l1weight":str(l1weight),
                     
                    
                    "BOLTmodel":mm,
                    "Train_pure_prs":explained_variance_score(phenotype_train["Phenotype"],prs_train['SCORE'].values),
                    "Train_null_model":explained_variance_score(phenotype_train["Phenotype"],train_null_predicted),
                    "Train_best_model":explained_variance_score(phenotype_train["Phenotype"],train_best_predicted),
                    
                    "Test_pure_prs":explained_variance_score(phenotype_test["Phenotype"],prs_test['SCORE'].values),
                    "Test_null_model":explained_variance_score(phenotype_test["Phenotype"],test_null_predicted),
                    "Test_best_model":explained_variance_score(phenotype_test["Phenotype"],test_best_predicted),
                    
                }, ignore_index=True)

          
                prs_result.to_csv(traindirec+os.sep+Name+os.sep+"Results.csv",index=False)
     
    return

Execute BOLT-LMM#

def transform_bolt_lmm_data(traindirec, newtrainfilename,mm,p, p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile):
    ### First perform clumping on the file and save the clumpled file.
    #perform_clumping_and_pruning_on_individual_data(traindirec, newtrainfilename,p, p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile)
    
    #newtrainfilename = newtrainfilename+".clumped.pruned"
    #testfilename = testfilename+".clumped.pruned"
    
    
    #clumpedfile = traindirec+os.sep+newtrainfilename+".clump"
    #prunedfile = traindirec+os.sep+newtrainfilename+".clumped.pruned"

        
    # Also extract the PCA at this point for both test and training data.
    #calculate_pca_for_traindata_testdata_for_clumped_pruned_snps(traindirec, newtrainfilename,p)

    # Extract p-values from the GWAS file.
    # Command for Linux:
    #os.system("awk "+"\'"+"{print $3,$8}"+"\'"+" ./"+filedirec+os.sep+filedirec+".txt >  ./"+traindirec+os.sep+"SNP.pvalue")

    # Command for Windows (requires gawk from https://sourceforge.net/projects/gnuwin32/,
    # placed in the same directory):
    #os.system("gawk "+"\""+"{print $3,$8}"+"\""+" ./"+filedirec+os.sep+filedirec+".txt >  ./"+traindirec+os.sep+"SNP.pvalue")
    #print("gawk "+"\""+"{print $3,$8}"+"\""+" ./"+filedirec+os.sep+filedirec+".txt >  ./"+traindirec+os.sep+"SNP.pvalue")

    #exit(0)
    # Delete files generated in the previous iteration.
    files_to_remove = [
        traindirec+os.sep+filedirec+"."+mm.replace("-","")+"_stat",
    ]

    # Loop through the files and directories and remove them if they exist
    for file_path in files_to_remove:
        if os.path.exists(file_path):
            if os.path.isfile(file_path):
                os.remove(file_path)
                print(f"Removed file: {file_path}")
            elif os.path.isdir(file_path):
                shutil.rmtree(file_path)
                print(f"Removed directory: {file_path}")
        else:
            print(f"File or directory does not exist: {file_path}")
            
    
    
    
    # BOLT expects a phenotype file with FID and IID columns followed by the
    # phenotype column named in --phenoCol, so write the 6th .fam column out.
    tempphenotype_train = pd.read_table(traindirec+os.sep+newtrainfilename+".clumped.pruned"+".fam", sep="\s+",header=None)
    phenotype = tempphenotype_train[[0,1,5]]
    phenotype.to_csv(traindirec+os.sep+trainfilename+".PHENO",sep="\t",header=['FID', 'IID', 'PHENO'],index=False)
 
    pcs_train = pd.read_table(traindirec+os.sep+trainfilename+".eigenvec", sep="\s+",header=None, names=["FID", "IID"] + [f"PC{str(i)}" for i in range(1, int(p)+1)])
    covariate_train = pd.read_table(traindirec+os.sep+trainfilename+".cov",sep="\s+")
    covariate_train.fillna(0, inplace=True)
    print(covariate_train.head())
    print(len(covariate_train))
    covariate_train = covariate_train[covariate_train["FID"].isin(pcs_train["FID"].values) & covariate_train["IID"].isin(pcs_train["IID"].values)]
    print(len(covariate_train))
 
    covariate_train['FID'] = covariate_train['FID'].astype(str)
    pcs_train['FID'] = pcs_train['FID'].astype(str)
    covariate_train['IID'] = covariate_train['IID'].astype(str)
    pcs_train['IID'] = pcs_train['IID'].astype(str)
    covandpcs_train = pd.merge(covariate_train, pcs_train, on=["FID","IID"])
    covandpcs_train.fillna(0, inplace=True)    

    print(covandpcs_train)
    
    # Ensure that the first and second columns are FID and IID. BOLT needs the
    # covariate columns to share a specific prefix such as COV_, so that they
    # can be selected with --qCovarCol=COV_{1:N}.
    original_columns = covandpcs_train.columns

    # Rename columns other than 'FID' and 'IID'
    new_columns = ['FID', 'IID'] + [f'COV_{i+1}' for i in range(len(original_columns) - 2)]

    # Create a mapping of old column names to new names
    rename_mapping = dict(zip(original_columns, new_columns))

    # Rename the columns
    covandpcs_train.rename(columns=rename_mapping, inplace=True)

    # Reorder columns to ensure 'FID' and 'IID' are first
    columns = ['FID', 'IID'] + [f'COV_{i+1}' for i in range(len(original_columns) - 2)]
    covandpcs_train = covandpcs_train[columns]
    covandpcs_train.to_csv(traindirec+os.sep+trainfilename+".COV_PCA",sep="\t",index=False)    
    
    
    command = [
        
    './bolt',
    '--bfile='+traindirec+os.sep+newtrainfilename+".clumped.pruned",
    '--phenoFile='+traindirec+os.sep+trainfilename+".PHENO" ,
    '--phenoCol=PHENO',
    mm,
    '--LDscoresFile=tables/LDSCORE.1000G_EUR.tab.gz',
    '--covarFile='+traindirec+os.sep+trainfilename+".COV_PCA",
    #'--covarCol='+"COV_1",
    # TO include the first covariate which is sex use the following code.
    # Here i assumed that the first covariate is the sex. For our data the first covariate is sex.
    
    #
    #ERROR: Heritability estimate is close to 0; LMM may not correct confounding
    #   Instead, use PC-corrected linear/logistic regression on unrelateds
    #ERROR: Heritability estimate is close to 1; LMM may not correct confounding
    #   Instead, use PC-corrected linear/logistic regression on unrelateds
    
    #'--qCovarCol=COV_{1:'+str(len(columns)-len(columns)+1)+'}',
    
    # To include all the covariate use the following code, but not that it may crash the code as the heritability
    # from the geneotype data may reach to 0 and the BOLT-LMM may not work.
    # If heriability is close 0 or close to 1 the BOLT-LMM may not work.
    '--qCovarCol=COV_{1:'+str(len(columns)-4)+'}',
        
    
    #'--statsFile='+filedirec+os.sep+filedirec2+"."+mm.replace("-","")
    '--statsFile='+traindirec+os.sep+filedirec+"."+mm.replace("-","")+"_stat",
    #'--predBetasFile='+filedirec+os.sep+filedirec+"."+mm.replace("-","")+"_pred"
    ]
    print(" ".join(command))
    

    os.system("LD_LIBRARY_PATH=/data/ascher01/uqmmune1/miniconda3/envs/genetics/lib/ "+" ".join(command))
    
    #return
    gwas = pd.read_csv(traindirec+os.sep+filedirec+"."+mm.replace("-","")+"_stat",sep="\s+")
    print(gwas.head())
    
    if check_phenotype_is_binary_or_continous(filedirec)=="Binary":
        gwas["BETA"] = np.exp(gwas["BETA"])
    else:
        pass

    gwas.iloc[:,[0,4,8]].to_csv(traindirec+os.sep+filedirec+"."+mm.replace("-","")+"_stat",sep="\t",index=False)
     
    
    command = [
        "./plink",
        "--bfile", traindirec+os.sep+newtrainfilename+".clumped.pruned",
        "--score", traindirec+os.sep+filedirec+"."+mm.replace("-","")+"_stat", "1", "2", "3", "header",
        "--q-score-range", traindirec+os.sep+"range_list",traindirec+os.sep+"SNP.pvalue",
        "--extract", traindirec+os.sep+trainfilename+".valid.snp",
        "--out", traindirec+os.sep+Name+os.sep+trainfilename
    ]
    subprocess.run(command)


    command = [
        "./plink",
        "--bfile", traindirec+os.sep+testfilename+".clumped.pruned",
        ### In the processed stats file: SNP column = 1, effect allele column = 2, BETA column = 3.
        "--score", traindirec+os.sep+filedirec+"."+mm.replace("-","")+"_stat", "1", "2", "3", "header",
        "--q-score-range", traindirec+os.sep+"range_list",traindirec+os.sep+"SNP.pvalue",
        "--extract", traindirec+os.sep+trainfilename+".valid.snp",
        "--out", folddirec+os.sep+Name+os.sep+testfilename
    ]
    subprocess.run(command)
    
    
    if check_phenotype_is_binary_or_continous(filedirec)=="Binary":
        print("Binary Phenotype!")
        fit_binary_phenotype_on_PRS(traindirec, newtrainfilename,p,mm.replace("-",""), p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile)
    else:
        print("Continous Phenotype!")
        fit_continous_phenotype_on_PRS(traindirec, newtrainfilename,p,mm.replace("-",""), p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile)
            
 

 
 
 
result_directory = "BOLT-LMM"
models = ["--lmm","--lmmInfOnly","--lmmForceNonInf"]
# Nested loops to iterate over different parameter values
create_directory(folddirec+os.sep+result_directory)
for p1_val in p_window_size:
 for p2_val in p_slide_size: 
  for p3_val in p_LD_threshold:
   for c1_val in clump_p1:
    for c2_val in clump_r2:
     for c3_val in clump_kb:
      for p in numberofpca:
       for mm in models: 
        transform_bolt_lmm_data(folddirec, newtrainfilename,mm, p, str(p1_val), str(p2_val), str(p3_val), str(c1_val), str(c2_val), str(c3_val), result_directory, pvaluefile)

 
Removed file: SampleData1/Fold_0/SampleData1.lmm_stat
       FID      IID  Sex
0  HG00097  HG00097    2
1  HG00099  HG00099    2
2  HG00101  HG00101    1
3  HG00102  HG00102    2
4  HG00103  HG00103    1
380
380
         FID      IID  Sex       PC1       PC2       PC3       PC4       PC5  \
0    HG00097  HG00097    2 -0.001453  0.084820  0.006792  0.013653  0.027149   
1    HG00099  HG00099    2 -0.002017  0.089514 -0.022355  0.001888 -0.000037   
2    HG00101  HG00101    1 -0.000380  0.096056 -0.018231 -0.016026  0.012093   
3    HG00102  HG00102    2  0.000292  0.071832  0.018087 -0.045180  0.028123   
4    HG00103  HG00103    1 -0.008372  0.065005 -0.009089 -0.026468 -0.009184   
..       ...      ...  ...       ...       ...       ...       ...       ...   
375  NA20818  NA20818    2 -0.047156 -0.040644 -0.052693  0.021050 -0.013389   
376  NA20826  NA20826    2 -0.042629 -0.059404 -0.066130  0.006495 -0.009525   
377  NA20827  NA20827    1 -0.044060 -0.053125 -0.065463  0.015030 -0.004314   
378  NA20828  NA20828    2 -0.047621 -0.050577 -0.043164  0.003004 -0.016823   
379  NA20832  NA20832    2 -0.041535 -0.049826 -0.047877  0.005951 -0.003770   

          PC6  
0    0.032581  
1    0.009107  
2    0.019296  
3   -0.003620  
4   -0.030565  
..        ...  
375 -0.047403  
376  0.010779  
377  0.003873  
378  0.015832  
379 -0.023086  

[380 rows x 9 columns]
./bolt --bfile=SampleData1/Fold_0/train_data.QC.clumped.pruned --phenoFile=SampleData1/Fold_0/train_data.PHENO --phenoCol=PHENO --lmm --LDscoresFile=tables/LDSCORE.1000G_EUR.tab.gz --covarFile=SampleData1/Fold_0/train_data.COV_PCA --qCovarCol=COV_{1:2} --statsFile=SampleData1/Fold_0/SampleData1.lmm_stat
                      +-----------------------------+
                      |                       ___   |
                      |   BOLT-LMM, v2.4.1   /_ /   |
                      |   November 16, 2022   /_/   |
                      |   Po-Ru Loh            //   |
                      |                        /    |
                      +-----------------------------+

Copyright (C) 2014-2022 Harvard University.
Distributed under the GNU GPLv3 open source license.

Compiled with USE_SSE: fast aligned memory access
Compiled with USE_MKL: Intel Math Kernel Library linear algebra
Boost version: 1_58

Command line options:

./bolt \
    --bfile=SampleData1/Fold_0/train_data.QC.clumped.pruned \
    --phenoFile=SampleData1/Fold_0/train_data.PHENO \
    --phenoCol=PHENO \
    --lmm \
    --LDscoresFile=tables/LDSCORE.1000G_EUR.tab.gz \
    --covarFile=SampleData1/Fold_0/train_data.COV_PCA \
    --qCovarCol=COV_{1:2} \
    --statsFile=SampleData1/Fold_0/SampleData1.lmm_stat 

Setting number of threads to 1
fam: SampleData1/Fold_0/train_data.QC.clumped.pruned.fam
bim(s): SampleData1/Fold_0/train_data.QC.clumped.pruned.bim
bed(s): SampleData1/Fold_0/train_data.QC.clumped.pruned.bed

=== Reading genotype data ===

Total indivs in PLINK data: Nbed = 380
Total indivs stored in memory: N = 380
Reading bim file #1: SampleData1/Fold_0/train_data.QC.clumped.pruned.bim
    Read 172878 snps
Total snps in PLINK data: Mbed = 172878

Breakdown of SNP pre-filtering results:
  172878 SNPs to include in model (i.e., GRM)
  0 additional non-GRM SNPs loaded
  0 excluded SNPs
Allocating 172878 x 380/4 bytes to store genotypes
Reading genotypes and performing QC filtering on snps and indivs...
Reading bed file #1: SampleData1/Fold_0/train_data.QC.clumped.pruned.bed
    Expecting 16423410 (+3) bytes for 380 indivs, 172878 snps
WARNING: Genetic map appears to be in cM units; rescaling by 0.01
Total indivs after QC: 380
Total post-QC SNPs: M = 172878
  Variance component 1: 172878 post-QC SNPs (name: 'modelSnps')
Time for SnpData setup = 1.51395 sec

=== Reading phenotype and covariate data ===

Read data for 380 indivs (ignored 0 without genotypes) from:
  SampleData1/Fold_0/train_data.COV_PCA
Read data for 380 indivs (ignored 0 without genotypes) from:
  SampleData1/Fold_0/train_data.PHENO
Number of indivs with no missing phenotype(s) to use: 380
    Using quantitative covariate: COV_1
    Using quantitative covariate: COV_2
    Using quantitative covariate: CONST_ALL_ONES
Number of individuals used in analysis: Nused = 380
Singular values of covariate matrix:
    S[0] = 36.3836
    S[1] = 5.21943
    S[2] = 0.995369
Total covariate vectors: C = 3
Total independent covariate vectors: Cindep = 3

=== Initializing Bolt object: projecting and normalizing SNPs ===
NOTE: Using all-1s vector (constant term) in addition to specified covariates
Number of chroms with >= 1 good SNP: 22
Average norm of projected SNPs:           375.206815
Dimension of all-1s proj space (Nused-1): 379
Time for covariate data setup + Bolt initialization = 0.778457 sec

Phenotype 1:   N = 380   mean = 170.14   std = 0.945865

=== Computing linear regression (LINREG) stats ===

Time for computing LINREG stats = 0.201925 sec

=== Estimating variance parameters ===

Using CGtol of 0.005 for this step
Using default number of random trials: 15 (for Nused = 380)

Estimating MC scaling f_REML at log(delta) = 1.09384, h2 = 0.25...
  Batch-solving 16 systems of equations using conjugate gradient iteration
  iter 1:  time=0.43  rNorms/orig: (0.02,0.02)  res2s: 882.719..149.989
  iter 2:  time=0.42  rNorms/orig: (0.0005,0.001)  res2s: 883.769..150.228
  Converged at iter 2: rNorms/orig all < CGtol=0.005
  Time breakdown: dgemm = 38.6%, memory/overhead = 61.4%
  MCscaling: logDelta = 1.09, h2 = 0.250, f = 0.00255218

Estimating MC scaling f_REML at log(delta) = -0.00476781, h2 = 0.5...
  Batch-solving 16 systems of equations using conjugate gradient iteration
  iter 1:  time=0.42  rNorms/orig: (0.04,0.05)  res2s: 206.958..66.4433
  iter 2:  time=0.42  rNorms/orig: (0.002,0.005)  res2s: 207.859..66.8373
  iter 3:  time=0.42  rNorms/orig: (9e-05,0.0003)  res2s: 207.871..66.8446
  Converged at iter 3: rNorms/orig all < CGtol=0.005
  Time breakdown: dgemm = 36.0%, memory/overhead = 64.0%
  MCscaling: logDelta = -0.00, h2 = 0.500, f = 0.000475493

Estimating MC scaling f_REML at log(delta) = -0.256314, h2 = 0.562557...
  Batch-solving 16 systems of equations using conjugate gradient iteration
  iter 1:  time=0.41  rNorms/orig: (0.04,0.05)  res2s: 142.182..50.8157
  iter 2:  time=0.37  rNorms/orig: (0.002,0.006)  res2s: 142.949..51.1908
  iter 3:  time=0.37  rNorms/orig: (0.0001,0.0005)  res2s: 142.962..51.1995
  Converged at iter 3: rNorms/orig all < CGtol=0.005
  Time breakdown: dgemm = 36.1%, memory/overhead = 63.9%
  MCscaling: logDelta = -0.26, h2 = 0.563, f = 5.80701e-05

Estimating MC scaling f_REML at log(delta) = -0.291308, h2 = 0.571149...
  Batch-solving 16 systems of equations using conjugate gradient iteration
  iter 1:  time=0.37  rNorms/orig: (0.04,0.05)  res2s: 134.763..48.8336
  iter 2:  time=0.38  rNorms/orig: (0.002,0.007)  res2s: 135.51..49.2044
  iter 3:  time=0.38  rNorms/orig: (0.0001,0.0005)  res2s: 135.523..49.2132
  Converged at iter 3: rNorms/orig all < CGtol=0.005
  Time breakdown: dgemm = 36.2%, memory/overhead = 63.8%
  MCscaling: logDelta = -0.29, h2 = 0.571, f = 3.23365e-06

Secant iteration for h2 estimation converged in 2 steps
Estimated (pseudo-)heritability: h2g = 0.571
To more precisely estimate variance parameters and estimate s.e., use --reml
Variance params: sigma^2_K = 0.406791, logDelta = -0.291308, f = 3.23365e-06

Time for fitting variance components = 5.53623 sec

=== Computing mixed model assoc stats (inf. model) ===

Selected 30 SNPs for computation of prospective stat
Tried 30; threw out 0 with GRAMMAR chisq > 5
Assigning SNPs to 22 chunks for leave-out analysis
Each chunk is excluded when testing SNPs belonging to the chunk
  Batch-solving 52 systems of equations using conjugate gradient iteration
  iter 1:  time=0.70  rNorms/orig: (0.04,0.07)  res2s: 48.7902..70.329
  iter 2:  time=0.69  rNorms/orig: (0.002,0.007)  res2s: 49.1781..70.6337
  iter 3:  time=0.68  rNorms/orig: (9e-05,0.0005)  res2s: 49.1875..70.6346
  Converged at iter 3: rNorms/orig all < CGtol=0.0005
  Time breakdown: dgemm = 64.0%, memory/overhead = 36.0%

AvgPro: 0.844   AvgRetro: 0.839   Calibration: 1.006 (0.003)   (30 SNPs)
Ratio of medians: 1.013   Median of ratios: 1.002

Time for computing infinitesimal model assoc stats = 2.26405 sec

=== Estimating chip LD Scores using 400 indivs ===

Reducing sample size to 376 for memory alignment
WARNING: Only 380 indivs available; using all
Time for estimating chip LD Scores = 0.636621 sec

=== Reading LD Scores for calibration of Bayesian assoc stats ===

Looking up LD Scores...
  Looking for column header 'SNP': column number = 1
  Looking for column header 'LDSCORE': column number = 5
Found LD Scores for 171753/172878 SNPs

Estimating inflation of LINREG chisq stats using MLMe as reference...
Filtering to SNPs with chisq stats, LD Scores, and MAF > 0.01
# of SNPs passing filters before outlier removal: 171753/172878
Masking windows around outlier snps (chisq > 20.0)
# of SNPs remaining after outlier window removal: 171753/171753
Intercept of LD Score regression for ref stats:   1.003 (0.005)
Estimated attenuation: 0.409 (0.534)
Intercept of LD Score regression for cur stats: 1.004 (0.005)
Calibration factor (ref/cur) to multiply by:      0.999 (0.000)
LINREG intercept inflation = 1.00087

=== Estimating mixture parameters by cross-validation ===

Setting maximum number of iterations to 250 for this step
Max CV folds to compute = 5 (to have > 10000 samples)

====> Starting CV fold 1 <====

    Using quantitative covariate: COV_1
    Using quantitative covariate: COV_2
    Using quantitative covariate: CONST_ALL_ONES
Number of individuals used in analysis: Nused = 304
Singular values of covariate matrix:
    S[0] = 32.5614
    S[1] = 4.66512
    S[2] = 0.894787
Total covariate vectors: C = 3
Total independent covariate vectors: Cindep = 3

=== Initializing Bolt object: projecting and normalizing SNPs ===
NOTE: Using all-1s vector (constant term) in addition to specified covariates
Number of chroms with >= 1 good SNP: 22
Average norm of projected SNPs:           299.575111
Dimension of all-1s proj space (Nused-1): 303
  Beginning variational Bayes
  iter 1:  time=1.09 for 18 active reps
  iter 2:  time=0.80 for 18 active reps  approxLL diffs: (37.61,49.27)
  iter 3:  time=0.80 for 18 active reps  approxLL diffs: (1.23,1.36)
  iter 4:  time=0.81 for 18 active reps  approxLL diffs: (0.14,0.21)
  iter 5:  time=0.80 for 18 active reps  approxLL diffs: (0.01,0.01)
  iter 6:  time=0.19 for  1 active reps  approxLL diffs: (0.00,0.00)
  Converged at iter 6: approxLL diffs each have been < LLtol=0.01
  Time breakdown: dgemm = 25.7%, memory/overhead = 74.3%
Computing predictions on left-out cross-validation fold
Time for computing predictions = 0.452455 sec

Average PVEs obtained by param pairs tested (high to low):
  f2=0.5, p=0.5: 0.006720
  f2=0.3, p=0.5: 0.006720
  f2=0.5, p=0.2: 0.006720
            ...
 f2=0.3, p=0.01: 0.006710

Detailed CV fold results:
  Absolute prediction MSE baseline (covariates only): 0.586308
  Absolute prediction MSE using standard LMM:         0.582368
  Absolute prediction MSE, fold-best  f2=0.5, p=0.5:  0.582368
    Absolute pred MSE using   f2=0.5, p=0.5: 0.582368
    Absolute pred MSE using   f2=0.5, p=0.2: 0.582368
    Absolute pred MSE using   f2=0.5, p=0.1: 0.582368
    Absolute pred MSE using  f2=0.5, p=0.05: 0.582368
    Absolute pred MSE using  f2=0.5, p=0.02: 0.582369
    Absolute pred MSE using  f2=0.5, p=0.01: 0.582372
    Absolute pred MSE using   f2=0.3, p=0.5: 0.582368
    Absolute pred MSE using   f2=0.3, p=0.2: 0.582368
    Absolute pred MSE using   f2=0.3, p=0.1: 0.582368
    Absolute pred MSE using  f2=0.3, p=0.05: 0.582369
    Absolute pred MSE using  f2=0.3, p=0.02: 0.582370
    Absolute pred MSE using  f2=0.3, p=0.01: 0.582374
    Absolute pred MSE using   f2=0.1, p=0.5: 0.582368
    Absolute pred MSE using   f2=0.1, p=0.2: 0.582368
    Absolute pred MSE using   f2=0.1, p=0.1: 0.582368
    Absolute pred MSE using  f2=0.1, p=0.05: 0.582369
    Absolute pred MSE using  f2=0.1, p=0.02: 0.582371
    Absolute pred MSE using  f2=0.1, p=0.01: 0.582370

====> End CV fold 1: 18 remaining param pair(s) <====

Estimated proportion of variance explained using inf model: 0.007
Relative improvement in prediction MSE using non-inf model: 0.000

Exiting CV: non-inf model does not substantially improve prediction
Optimal mixture parameters according to CV: f2 = 0.5, p = 0.5
Bayesian non-infinitesimal model does not fit substantially better
=> Not computing non-inf assoc stats (to override, use --lmmForceNonInf)

Time for estimating mixture parameters = 32.8537 sec

Calibration stats: mean and lambdaGC (over SNPs used in GRM)
  (note that both should be >1 because of polygenicity)
Mean BOLT_LMM_INF: 1.0074 (172878 good SNPs)   lambdaGC: 1.01336

=== Streaming genotypes to compute and write assoc stats at all SNPs ===

Time for streaming genotypes and writing output = 1.68225 sec

Total elapsed time for analysis = 45.4673 sec
           SNP  CHR      BP    GENPOS ALLELE1 ALLELE0    A1FREQ  F_MISS  \
0   rs79373928    1  801536  0.587220       G       T  0.014474     0.0   
1    rs4970382    1  840753  0.620827       C       T  0.406579     0.0   
2   rs13303222    1  849998  0.620827       A       G  0.196053     0.0   
3   rs72631889    1  851390  0.620827       T       G  0.034210     0.0   
4  rs192998324    1  862772  0.620827       G       A  0.027632     0.0   

       BETA        SE  P_BOLT_LMM_INF  
0  0.015560  0.258427           0.950  
1 -0.060199  0.059829           0.310  
2 -0.006768  0.078287           0.930  
3  0.315642  0.172246           0.067  
4 -0.227920  0.190562           0.230  
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/BOLT-LMM/train_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/train_data.QC.clumped.pruned
  --extract SampleData1/Fold_0/train_data.valid.snp
  --out SampleData1/Fold_0/BOLT-LMM/train_data
  --q-score-range SampleData1/Fold_0/range_list SampleData1/Fold_0/SNP.pvalue
  --score SampleData1/Fold_0/SampleData1.lmm_stat 1 2 3 header

63761 MB RAM detected; reserving 31880 MB for main workspace.
172878 variants loaded from .bim file.
380 people (183 males, 197 females) loaded from .fam.
380 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 380 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 380 people pass filters and QC.
Phenotype data is quantitative.
--score: 172878 valid predictors loaded.
Warning: 326740 lines skipped in --q-score-range data file.
--score: 20 ranges processed.
Results written to SampleData1/Fold_0/BOLT-LMM/train_data.*.profile.
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/BOLT-LMM/test_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/test_data.clumped.pruned
  --extract SampleData1/Fold_0/train_data.valid.snp
  --out SampleData1/Fold_0/BOLT-LMM/test_data
  --q-score-range SampleData1/Fold_0/range_list SampleData1/Fold_0/SNP.pvalue
  --score SampleData1/Fold_0/SampleData1.lmm_stat 1 2 3 header

63761 MB RAM detected; reserving 31880 MB for main workspace.
172878 variants loaded from .bim file.
95 people (44 males, 51 females) loaded from .fam.
95 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 95 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 95 people pass filters and QC.
Phenotype data is quantitative.
--score: 172878 valid predictors loaded.
Warning: 326740 lines skipped in --q-score-range data file.
--score: 20 ranges processed.
Results written to SampleData1/Fold_0/BOLT-LMM/test_data.*.profile.
Continuous Phenotype!
/tmp/ipykernel_1257867/115566430.py:347: FutureWarning: The behavior of DataFrame concatenation with empty or all-NA entries is deprecated. In a future version, this will no longer exclude empty or all-NA columns when determining the result dtypes. To retain the old behavior, exclude the relevant entries before the concat operation.
  prs_result = prs_result._append({
Removed file: SampleData1/Fold_0/SampleData1.lmmInfOnly_stat
       FID      IID  Sex
0  HG00097  HG00097    2
1  HG00099  HG00099    2
2  HG00101  HG00101    1
3  HG00102  HG00102    2
4  HG00103  HG00103    1
380
380
         FID      IID  Sex       PC1       PC2       PC3       PC4       PC5  \
0    HG00097  HG00097    2 -0.001453  0.084820  0.006792  0.013653  0.027149   
1    HG00099  HG00099    2 -0.002017  0.089514 -0.022355  0.001888 -0.000037   
2    HG00101  HG00101    1 -0.000380  0.096056 -0.018231 -0.016026  0.012093   
3    HG00102  HG00102    2  0.000292  0.071832  0.018087 -0.045180  0.028123   
4    HG00103  HG00103    1 -0.008372  0.065005 -0.009089 -0.026468 -0.009184   
..       ...      ...  ...       ...       ...       ...       ...       ...   
375  NA20818  NA20818    2 -0.047156 -0.040644 -0.052693  0.021050 -0.013389   
376  NA20826  NA20826    2 -0.042629 -0.059404 -0.066130  0.006495 -0.009525   
377  NA20827  NA20827    1 -0.044060 -0.053125 -0.065463  0.015030 -0.004314   
378  NA20828  NA20828    2 -0.047621 -0.050577 -0.043164  0.003004 -0.016823   
379  NA20832  NA20832    2 -0.041535 -0.049826 -0.047877  0.005951 -0.003770   

          PC6  
0    0.032581  
1    0.009107  
2    0.019296  
3   -0.003620  
4   -0.030565  
..        ...  
375 -0.047403  
376  0.010779  
377  0.003873  
378  0.015832  
379 -0.023086  

[380 rows x 9 columns]
./bolt --bfile=SampleData1/Fold_0/train_data.QC.clumped.pruned --phenoFile=SampleData1/Fold_0/train_data.PHENO --phenoCol=PHENO --lmmInfOnly --LDscoresFile=tables/LDSCORE.1000G_EUR.tab.gz --covarFile=SampleData1/Fold_0/train_data.COV_PCA --qCovarCol=COV_{1:2} --statsFile=SampleData1/Fold_0/SampleData1.lmmInfOnly_stat
                      +-----------------------------+
                      |                       ___   |
                      |   BOLT-LMM, v2.4.1   /_ /   |
                      |   November 16, 2022   /_/   |
                      |   Po-Ru Loh            //   |
                      |                        /    |
                      +-----------------------------+

Copyright (C) 2014-2022 Harvard University.
Distributed under the GNU GPLv3 open source license.

Compiled with USE_SSE: fast aligned memory access
Compiled with USE_MKL: Intel Math Kernel Library linear algebra
Boost version: 1_58

Command line options:

./bolt \
    --bfile=SampleData1/Fold_0/train_data.QC.clumped.pruned \
    --phenoFile=SampleData1/Fold_0/train_data.PHENO \
    --phenoCol=PHENO \
    --lmmInfOnly \
    --LDscoresFile=tables/LDSCORE.1000G_EUR.tab.gz \
    --covarFile=SampleData1/Fold_0/train_data.COV_PCA \
    --qCovarCol=COV_{1:2} \
    --statsFile=SampleData1/Fold_0/SampleData1.lmmInfOnly_stat 

Setting number of threads to 1
fam: SampleData1/Fold_0/train_data.QC.clumped.pruned.fam
bim(s): SampleData1/Fold_0/train_data.QC.clumped.pruned.bim
bed(s): SampleData1/Fold_0/train_data.QC.clumped.pruned.bed

=== Reading genotype data ===

Total indivs in PLINK data: Nbed = 380
Total indivs stored in memory: N = 380
Reading bim file #1: SampleData1/Fold_0/train_data.QC.clumped.pruned.bim
    Read 172878 snps
Total snps in PLINK data: Mbed = 172878

Breakdown of SNP pre-filtering results:
  172878 SNPs to include in model (i.e., GRM)
  0 additional non-GRM SNPs loaded
  0 excluded SNPs
Allocating 172878 x 380/4 bytes to store genotypes
Reading genotypes and performing QC filtering on snps and indivs...
Reading bed file #1: SampleData1/Fold_0/train_data.QC.clumped.pruned.bed
    Expecting 16423410 (+3) bytes for 380 indivs, 172878 snps
WARNING: Genetic map appears to be in cM units; rescaling by 0.01
Total indivs after QC: 380
Total post-QC SNPs: M = 172878
  Variance component 1: 172878 post-QC SNPs (name: 'modelSnps')
Time for SnpData setup = 1.19219 sec

=== Reading phenotype and covariate data ===

Read data for 380 indivs (ignored 0 without genotypes) from:
  SampleData1/Fold_0/train_data.COV_PCA
Read data for 380 indivs (ignored 0 without genotypes) from:
  SampleData1/Fold_0/train_data.PHENO
Number of indivs with no missing phenotype(s) to use: 380
    Using quantitative covariate: COV_1
    Using quantitative covariate: COV_2
    Using quantitative covariate: CONST_ALL_ONES
Number of individuals used in analysis: Nused = 380
Singular values of covariate matrix:
    S[0] = 36.3836
    S[1] = 5.21943
    S[2] = 0.995369
Total covariate vectors: C = 3
Total independent covariate vectors: Cindep = 3

=== Initializing Bolt object: projecting and normalizing SNPs ===
NOTE: Using all-1s vector (constant term) in addition to specified covariates
Number of chroms with >= 1 good SNP: 22
Average norm of projected SNPs:           375.206815
Dimension of all-1s proj space (Nused-1): 379
Time for covariate data setup + Bolt initialization = 0.375053 sec

Phenotype 1:   N = 380   mean = 170.14   std = 0.945865

=== Computing linear regression (LINREG) stats ===

Time for computing LINREG stats = 0.145117 sec

=== Estimating variance parameters ===

Using CGtol of 0.005 for this step
Using default number of random trials: 15 (for Nused = 380)

Estimating MC scaling f_REML at log(delta) = 1.09384, h2 = 0.25...
  Batch-solving 16 systems of equations using conjugate gradient iteration
  iter 1:  time=0.42  rNorms/orig: (0.02,0.02)  res2s: 882.719..149.989
  iter 2:  time=0.40  rNorms/orig: (0.0005,0.001)  res2s: 883.769..150.228
  Converged at iter 2: rNorms/orig all < CGtol=0.005
  Time breakdown: dgemm = 36.3%, memory/overhead = 63.7%
  MCscaling: logDelta = 1.09, h2 = 0.250, f = 0.00255218

Estimating MC scaling f_REML at log(delta) = -0.00476781, h2 = 0.5...
  Batch-solving 16 systems of equations using conjugate gradient iteration
  iter 1:  time=0.37  rNorms/orig: (0.04,0.05)  res2s: 206.958..66.4433
  iter 2:  time=0.37  rNorms/orig: (0.002,0.005)  res2s: 207.859..66.8373
  iter 3:  time=0.38  rNorms/orig: (9e-05,0.0003)  res2s: 207.871..66.8446
  Converged at iter 3: rNorms/orig all < CGtol=0.005
  Time breakdown: dgemm = 36.0%, memory/overhead = 64.0%
  MCscaling: logDelta = -0.00, h2 = 0.500, f = 0.000475493

Estimating MC scaling f_REML at log(delta) = -0.256314, h2 = 0.562557...
  Batch-solving 16 systems of equations using conjugate gradient iteration
  iter 1:  time=0.37  rNorms/orig: (0.04,0.05)  res2s: 142.182..50.8157
  iter 2:  time=0.37  rNorms/orig: (0.002,0.006)  res2s: 142.949..51.1908
  iter 3:  time=0.37  rNorms/orig: (0.0001,0.0005)  res2s: 142.962..51.1995
  Converged at iter 3: rNorms/orig all < CGtol=0.005
  Time breakdown: dgemm = 36.1%, memory/overhead = 63.9%
  MCscaling: logDelta = -0.26, h2 = 0.563, f = 5.80701e-05

Estimating MC scaling f_REML at log(delta) = -0.291308, h2 = 0.571149...
  Batch-solving 16 systems of equations using conjugate gradient iteration
  iter 1:  time=0.37  rNorms/orig: (0.04,0.05)  res2s: 134.763..48.8336
  iter 2:  time=0.37  rNorms/orig: (0.002,0.007)  res2s: 135.51..49.2044
  iter 3:  time=0.37  rNorms/orig: (0.0001,0.0005)  res2s: 135.523..49.2132
  Converged at iter 3: rNorms/orig all < CGtol=0.005
  Time breakdown: dgemm = 36.1%, memory/overhead = 63.9%
  MCscaling: logDelta = -0.29, h2 = 0.571, f = 3.23365e-06

Secant iteration for h2 estimation converged in 2 steps
Estimated (pseudo-)heritability: h2g = 0.571
To more precisely estimate variance parameters and estimate s.e., use --reml
Variance params: sigma^2_K = 0.406791, logDelta = -0.291308, f = 3.23365e-06

Time for fitting variance components = 5.17613 sec

=== Computing mixed model assoc stats (inf. model) ===

Selected 30 SNPs for computation of prospective stat
Tried 30; threw out 0 with GRAMMAR chisq > 5
Assigning SNPs to 22 chunks for leave-out analysis
Each chunk is excluded when testing SNPs belonging to the chunk
  Batch-solving 52 systems of equations using conjugate gradient iteration
  iter 1:  time=0.68  rNorms/orig: (0.04,0.07)  res2s: 48.7902..70.329
  iter 2:  time=0.68  rNorms/orig: (0.002,0.007)  res2s: 49.1781..70.6337
  iter 3:  time=0.68  rNorms/orig: (9e-05,0.0005)  res2s: 49.1875..70.6346
  Converged at iter 3: rNorms/orig all < CGtol=0.0005
  Time breakdown: dgemm = 63.7%, memory/overhead = 36.3%

AvgPro: 0.844   AvgRetro: 0.839   Calibration: 1.006 (0.003)   (30 SNPs)
Ratio of medians: 1.013   Median of ratios: 1.002

Time for computing infinitesimal model assoc stats = 2.25897 sec

=== Estimating chip LD Scores using 400 indivs ===

Reducing sample size to 376 for memory alignment
WARNING: Only 380 indivs available; using all
Time for estimating chip LD Scores = 0.682689 sec

=== Reading LD Scores for calibration of Bayesian assoc stats ===

Looking up LD Scores...
  Looking for column header 'SNP': column number = 1
  Looking for column header 'LDSCORE': column number = 5
Found LD Scores for 171753/172878 SNPs

Estimating inflation of LINREG chisq stats using MLMe as reference...
Filtering to SNPs with chisq stats, LD Scores, and MAF > 0.01
# of SNPs passing filters before outlier removal: 171753/172878
Masking windows around outlier snps (chisq > 20.0)
# of SNPs remaining after outlier window removal: 171753/171753
Intercept of LD Score regression for ref stats:   1.003 (0.005)
Estimated attenuation: 0.409 (0.534)
Intercept of LD Score regression for cur stats: 1.004 (0.005)
Calibration factor (ref/cur) to multiply by:      0.999 (0.000)
LINREG intercept inflation = 1.00087

Calibration stats: mean and lambdaGC (over SNPs used in GRM)
  (note that both should be >1 because of polygenicity)
Mean BOLT_LMM_INF: 1.0074 (172878 good SNPs)   lambdaGC: 1.01336

=== Streaming genotypes to compute and write assoc stats at all SNPs ===

Time for streaming genotypes and writing output = 27.6488 sec

Total elapsed time for analysis = 37.4789 sec
           SNP  CHR      BP    GENPOS ALLELE1 ALLELE0    A1FREQ  F_MISS  \
0   rs79373928    1  801536  0.587220       G       T  0.014474     0.0   
1    rs4970382    1  840753  0.620827       C       T  0.406579     0.0   
2   rs13303222    1  849998  0.620827       A       G  0.196053     0.0   
3   rs72631889    1  851390  0.620827       T       G  0.034210     0.0   
4  rs192998324    1  862772  0.620827       G       A  0.027632     0.0   

       BETA        SE  P_BOLT_LMM_INF  
0  0.015560  0.258427           0.950  
1 -0.060199  0.059829           0.310  
2 -0.006768  0.078287           0.930  
3  0.315642  0.172246           0.067  
4 -0.227920  0.190562           0.230  
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/BOLT-LMM/train_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/train_data.QC.clumped.pruned
  --extract SampleData1/Fold_0/train_data.valid.snp
  --out SampleData1/Fold_0/BOLT-LMM/train_data
  --q-score-range SampleData1/Fold_0/range_list SampleData1/Fold_0/SNP.pvalue
  --score SampleData1/Fold_0/SampleData1.lmmInfOnly_stat 1 2 3 header

63761 MB RAM detected; reserving 31880 MB for main workspace.
172878 variants loaded from .bim file.
380 people (183 males, 197 females) loaded from .fam.
380 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 380 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 380 people pass filters and QC.
Phenotype data is quantitative.
--score: 172878 valid predictors loaded.
Warning: 326740 lines skipped in --q-score-range data file.
--score: 20 ranges processed.
Results written to SampleData1/Fold_0/BOLT-LMM/train_data.*.profile.
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/BOLT-LMM/test_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/test_data.clumped.pruned
  --extract SampleData1/Fold_0/train_data.valid.snp
  --out SampleData1/Fold_0/BOLT-LMM/test_data
  --q-score-range SampleData1/Fold_0/range_list SampleData1/Fold_0/SNP.pvalue
  --score SampleData1/Fold_0/SampleData1.lmmInfOnly_stat 1 2 3 header

63761 MB RAM detected; reserving 31880 MB for main workspace.
172878 variants loaded from .bim file.
95 people (44 males, 51 females) loaded from .fam.
95 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 95 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Warning: 326740 lines skipped in --q-score-range data file.
Total genotyping rate is 0.999891.
172878 variants and 95 people pass filters and QC.
Phenotype data is quantitative.
--score: 172878 valid predictors loaded.
--score: 20 ranges processed.
Results written to SampleData1/Fold_0/BOLT-LMM/test_data.*.profile.
Continuous Phenotype!
File or directory does not exist: SampleData1/Fold_0/SampleData1.lmmForceNonInf_stat
       FID      IID  Sex
0  HG00097  HG00097    2
1  HG00099  HG00099    2
2  HG00101  HG00101    1
3  HG00102  HG00102    2
4  HG00103  HG00103    1
380
380
         FID      IID  Sex       PC1       PC2       PC3       PC4       PC5  \
0    HG00097  HG00097    2 -0.001453  0.084820  0.006792  0.013653  0.027149   
1    HG00099  HG00099    2 -0.002017  0.089514 -0.022355  0.001888 -0.000037   
2    HG00101  HG00101    1 -0.000380  0.096056 -0.018231 -0.016026  0.012093   
3    HG00102  HG00102    2  0.000292  0.071832  0.018087 -0.045180  0.028123   
4    HG00103  HG00103    1 -0.008372  0.065005 -0.009089 -0.026468 -0.009184   
..       ...      ...  ...       ...       ...       ...       ...       ...   
375  NA20818  NA20818    2 -0.047156 -0.040644 -0.052693  0.021050 -0.013389   
376  NA20826  NA20826    2 -0.042629 -0.059404 -0.066130  0.006495 -0.009525   
377  NA20827  NA20827    1 -0.044060 -0.053125 -0.065463  0.015030 -0.004314   
378  NA20828  NA20828    2 -0.047621 -0.050577 -0.043164  0.003004 -0.016823   
379  NA20832  NA20832    2 -0.041535 -0.049826 -0.047877  0.005951 -0.003770   

          PC6  
0    0.032581  
1    0.009107  
2    0.019296  
3   -0.003620  
4   -0.030565  
..        ...  
375 -0.047403  
376  0.010779  
377  0.003873  
378  0.015832  
379 -0.023086  

[380 rows x 9 columns]
./bolt --bfile=SampleData1/Fold_0/train_data.QC.clumped.pruned --phenoFile=SampleData1/Fold_0/train_data.PHENO --phenoCol=PHENO --lmmForceNonInf --LDscoresFile=tables/LDSCORE.1000G_EUR.tab.gz --covarFile=SampleData1/Fold_0/train_data.COV_PCA --qCovarCol=COV_{1:2} --statsFile=SampleData1/Fold_0/SampleData1.lmmForceNonInf_stat
                      +-----------------------------+
                      |                       ___   |
                      |   BOLT-LMM, v2.4.1   /_ /   |
                      |   November 16, 2022   /_/   |
                      |   Po-Ru Loh            //   |
                      |                        /    |
                      +-----------------------------+

Copyright (C) 2014-2022 Harvard University.
Distributed under the GNU GPLv3 open source license.

Compiled with USE_SSE: fast aligned memory access
Compiled with USE_MKL: Intel Math Kernel Library linear algebra
Boost version: 1_58

Command line options:

./bolt \
    --bfile=SampleData1/Fold_0/train_data.QC.clumped.pruned \
    --phenoFile=SampleData1/Fold_0/train_data.PHENO \
    --phenoCol=PHENO \
    --lmmForceNonInf \
    --LDscoresFile=tables/LDSCORE.1000G_EUR.tab.gz \
    --covarFile=SampleData1/Fold_0/train_data.COV_PCA \
    --qCovarCol=COV_{1:2} \
    --statsFile=SampleData1/Fold_0/SampleData1.lmmForceNonInf_stat 

Setting number of threads to 1
fam: SampleData1/Fold_0/train_data.QC.clumped.pruned.fam
bim(s): SampleData1/Fold_0/train_data.QC.clumped.pruned.bim
bed(s): SampleData1/Fold_0/train_data.QC.clumped.pruned.bed

=== Reading genotype data ===

Total indivs in PLINK data: Nbed = 380
Total indivs stored in memory: N = 380
Reading bim file #1: SampleData1/Fold_0/train_data.QC.clumped.pruned.bim
    Read 172878 snps
Total snps in PLINK data: Mbed = 172878

Breakdown of SNP pre-filtering results:
  172878 SNPs to include in model (i.e., GRM)
  0 additional non-GRM SNPs loaded
  0 excluded SNPs
Allocating 172878 x 380/4 bytes to store genotypes
Reading genotypes and performing QC filtering on snps and indivs...
Reading bed file #1: SampleData1/Fold_0/train_data.QC.clumped.pruned.bed
    Expecting 16423410 (+3) bytes for 380 indivs, 172878 snps
WARNING: Genetic map appears to be in cM units; rescaling by 0.01
Total indivs after QC: 380
Total post-QC SNPs: M = 172878
  Variance component 1: 172878 post-QC SNPs (name: 'modelSnps')
Time for SnpData setup = 1.15584 sec

=== Reading phenotype and covariate data ===

Read data for 380 indivs (ignored 0 without genotypes) from:
  SampleData1/Fold_0/train_data.COV_PCA
Read data for 380 indivs (ignored 0 without genotypes) from:
  SampleData1/Fold_0/train_data.PHENO
Number of indivs with no missing phenotype(s) to use: 380
    Using quantitative covariate: COV_1
    Using quantitative covariate: COV_2
    Using quantitative covariate: CONST_ALL_ONES
Number of individuals used in analysis: Nused = 380
Singular values of covariate matrix:
    S[0] = 36.3836
    S[1] = 5.21943
    S[2] = 0.995369
Total covariate vectors: C = 3
Total independent covariate vectors: Cindep = 3

=== Initializing Bolt object: projecting and normalizing SNPs ===
NOTE: Using all-1s vector (constant term) in addition to specified covariates
Number of chroms with >= 1 good SNP: 22
Average norm of projected SNPs:           375.206815
Dimension of all-1s proj space (Nused-1): 379
Time for covariate data setup + Bolt initialization = 0.391653 sec

Phenotype 1:   N = 380   mean = 170.14   std = 0.945865

=== Computing linear regression (LINREG) stats ===

Time for computing LINREG stats = 0.152288 sec

=== Estimating variance parameters ===

Using CGtol of 0.005 for this step
Using default number of random trials: 15 (for Nused = 380)

Estimating MC scaling f_REML at log(delta) = 1.09384, h2 = 0.25...
  Batch-solving 16 systems of equations using conjugate gradient iteration
  iter 1:  time=0.42  rNorms/orig: (0.02,0.02)  res2s: 882.719..149.989
  iter 2:  time=0.42  rNorms/orig: (0.0005,0.001)  res2s: 883.769..150.228
  Converged at iter 2: rNorms/orig all < CGtol=0.005
  Time breakdown: dgemm = 36.4%, memory/overhead = 63.6%
  MCscaling: logDelta = 1.09, h2 = 0.250, f = 0.00255218

Estimating MC scaling f_REML at log(delta) = -0.00476781, h2 = 0.5...
  Batch-solving 16 systems of equations using conjugate gradient iteration
  iter 1:  time=0.42  rNorms/orig: (0.04,0.05)  res2s: 206.958..66.4433
  iter 2:  time=0.41  rNorms/orig: (0.002,0.005)  res2s: 207.859..66.8373
  iter 3:  time=0.41  rNorms/orig: (9e-05,0.0003)  res2s: 207.871..66.8446
  Converged at iter 3: rNorms/orig all < CGtol=0.005
  Time breakdown: dgemm = 35.4%, memory/overhead = 64.6%
  MCscaling: logDelta = -0.00, h2 = 0.500, f = 0.000475493

Estimating MC scaling f_REML at log(delta) = -0.256314, h2 = 0.562557...
  Batch-solving 16 systems of equations using conjugate gradient iteration
  iter 1:  time=0.41  rNorms/orig: (0.04,0.05)  res2s: 142.182..50.8157
  iter 2:  time=0.38  rNorms/orig: (0.002,0.006)  res2s: 142.949..51.1908
  iter 3:  time=0.37  rNorms/orig: (0.0001,0.0005)  res2s: 142.962..51.1995
  Converged at iter 3: rNorms/orig all < CGtol=0.005
  Time breakdown: dgemm = 35.9%, memory/overhead = 64.1%
  MCscaling: logDelta = -0.26, h2 = 0.563, f = 5.80701e-05

Estimating MC scaling f_REML at log(delta) = -0.291308, h2 = 0.571149...
  Batch-solving 16 systems of equations using conjugate gradient iteration
  iter 1:  time=0.38  rNorms/orig: (0.04,0.05)  res2s: 134.763..48.8336
  iter 2:  time=0.37  rNorms/orig: (0.002,0.007)  res2s: 135.51..49.2044
  iter 3:  time=0.36  rNorms/orig: (0.0001,0.0005)  res2s: 135.523..49.2132
  Converged at iter 3: rNorms/orig all < CGtol=0.005
  Time breakdown: dgemm = 35.9%, memory/overhead = 64.1%
  MCscaling: logDelta = -0.29, h2 = 0.571, f = 3.23365e-06

Secant iteration for h2 estimation converged in 2 steps
Estimated (pseudo-)heritability: h2g = 0.571
To more precisely estimate variance parameters and estimate s.e., use --reml
Variance params: sigma^2_K = 0.406791, logDelta = -0.291308, f = 3.23365e-06

Time for fitting variance components = 5.40124 sec

=== Computing mixed model assoc stats (inf. model) ===

Selected 30 SNPs for computation of prospective stat
Tried 30; threw out 0 with GRAMMAR chisq > 5
Assigning SNPs to 22 chunks for leave-out analysis
Each chunk is excluded when testing SNPs belonging to the chunk
  Batch-solving 52 systems of equations using conjugate gradient iteration
  iter 1:  time=0.68  rNorms/orig: (0.04,0.07)  res2s: 48.7902..70.329
  iter 2:  time=0.68  rNorms/orig: (0.002,0.007)  res2s: 49.1781..70.6337
  iter 3:  time=0.68  rNorms/orig: (9e-05,0.0005)  res2s: 49.1875..70.6346
  Converged at iter 3: rNorms/orig all < CGtol=0.0005
  Time breakdown: dgemm = 63.8%, memory/overhead = 36.2%

AvgPro: 0.844   AvgRetro: 0.839   Calibration: 1.006 (0.003)   (30 SNPs)
Ratio of medians: 1.013   Median of ratios: 1.002

Time for computing infinitesimal model assoc stats = 2.23798 sec

=== Estimating chip LD Scores using 400 indivs ===

Reducing sample size to 376 for memory alignment
WARNING: Only 380 indivs available; using all
Time for estimating chip LD Scores = 0.614924 sec

=== Reading LD Scores for calibration of Bayesian assoc stats ===

Looking up LD Scores...
  Looking for column header 'SNP': column number = 1
  Looking for column header 'LDSCORE': column number = 5
Found LD Scores for 171753/172878 SNPs

Estimating inflation of LINREG chisq stats using MLMe as reference...
Filtering to SNPs with chisq stats, LD Scores, and MAF > 0.01
# of SNPs passing filters before outlier removal: 171753/172878
Masking windows around outlier snps (chisq > 20.0)
# of SNPs remaining after outlier window removal: 171753/171753
Intercept of LD Score regression for ref stats:   1.003 (0.005)
Estimated attenuation: 0.409 (0.534)
Intercept of LD Score regression for cur stats: 1.004 (0.005)
Calibration factor (ref/cur) to multiply by:      0.999 (0.000)
LINREG intercept inflation = 1.00087

=== Estimating mixture parameters by cross-validation ===

Setting maximum number of iterations to 250 for this step
Max CV folds to compute = 5 (to have > 10000 samples)

====> Starting CV fold 1 <====

    Using quantitative covariate: COV_1
    Using quantitative covariate: COV_2
    Using quantitative covariate: CONST_ALL_ONES
Number of individuals used in analysis: Nused = 304
Singular values of covariate matrix:
    S[0] = 32.5614
    S[1] = 4.66512
    S[2] = 0.894787
Total covariate vectors: C = 3
Total independent covariate vectors: Cindep = 3

=== Initializing Bolt object: projecting and normalizing SNPs ===
NOTE: Using all-1s vector (constant term) in addition to specified covariates
Number of chroms with >= 1 good SNP: 22
Average norm of projected SNPs:           299.575111
Dimension of all-1s proj space (Nused-1): 303
  Beginning variational Bayes
  iter 1:  time=1.10 for 18 active reps
  iter 2:  time=0.80 for 18 active reps  approxLL diffs: (37.61,49.27)
  iter 3:  time=0.80 for 18 active reps  approxLL diffs: (1.23,1.36)
  iter 4:  time=0.83 for 18 active reps  approxLL diffs: (0.14,0.21)
  iter 5:  time=0.80 for 18 active reps  approxLL diffs: (0.01,0.01)
  iter 6:  time=0.19 for  1 active reps  approxLL diffs: (0.00,0.00)
  Converged at iter 6: approxLL diffs each have been < LLtol=0.01
  Time breakdown: dgemm = 26.0%, memory/overhead = 74.0%
Computing predictions on left-out cross-validation fold
Time for computing predictions = 0.452956 sec

Average PVEs obtained by param pairs tested (high to low):
  f2=0.5, p=0.5: 0.006720
  f2=0.3, p=0.5: 0.006720
  f2=0.5, p=0.2: 0.006720
            ...
 f2=0.3, p=0.01: 0.006710

Detailed CV fold results:
  Absolute prediction MSE baseline (covariates only): 0.586308
  Absolute prediction MSE using standard LMM:         0.582368
  Absolute prediction MSE, fold-best  f2=0.5, p=0.5:  0.582368
    Absolute pred MSE using   f2=0.5, p=0.5: 0.582368
    Absolute pred MSE using   f2=0.5, p=0.2: 0.582368
    Absolute pred MSE using   f2=0.5, p=0.1: 0.582368
    Absolute pred MSE using  f2=0.5, p=0.05: 0.582368
    Absolute pred MSE using  f2=0.5, p=0.02: 0.582369
    Absolute pred MSE using  f2=0.5, p=0.01: 0.582372
    Absolute pred MSE using   f2=0.3, p=0.5: 0.582368
    Absolute pred MSE using   f2=0.3, p=0.2: 0.582368
    Absolute pred MSE using   f2=0.3, p=0.1: 0.582368
    Absolute pred MSE using  f2=0.3, p=0.05: 0.582369
    Absolute pred MSE using  f2=0.3, p=0.02: 0.582370
    Absolute pred MSE using  f2=0.3, p=0.01: 0.582374
    Absolute pred MSE using   f2=0.1, p=0.5: 0.582368
    Absolute pred MSE using   f2=0.1, p=0.2: 0.582368
    Absolute pred MSE using   f2=0.1, p=0.1: 0.582368
    Absolute pred MSE using  f2=0.1, p=0.05: 0.582369
    Absolute pred MSE using  f2=0.1, p=0.02: 0.582371
    Absolute pred MSE using  f2=0.1, p=0.01: 0.582370

====> End CV fold 1: 18 remaining param pair(s) <====

Estimated proportion of variance explained using inf model: 0.007
Relative improvement in prediction MSE using non-inf model: 0.000

====> Starting CV fold 2 <====

    Using quantitative covariate: COV_1
    Using quantitative covariate: COV_2
    Using quantitative covariate: CONST_ALL_ONES
Number of individuals used in analysis: Nused = 304
Singular values of covariate matrix:
    S[0] = 32.3248
    S[1] = 4.70299
    S[2] = 0.886997
Total covariate vectors: C = 3
Total independent covariate vectors: Cindep = 3

=== Initializing Bolt object: projecting and normalizing SNPs ===
NOTE: Using all-1s vector (constant term) in addition to specified covariates
Number of chroms with >= 1 good SNP: 22
Average norm of projected SNPs:           299.595029
Dimension of all-1s proj space (Nused-1): 303
  Beginning variational Bayes
  iter 1:  time=1.10 for 18 active reps
  iter 2:  time=0.80 for 18 active reps  approxLL diffs: (34.03,41.62)
  iter 3:  time=0.80 for 18 active reps  approxLL diffs: (1.12,1.16)
  iter 4:  time=0.80 for 18 active reps  approxLL diffs: (0.13,0.17)
  iter 5:  time=0.80 for 18 active reps  approxLL diffs: (0.01,0.01)
  Converged at iter 5: approxLL diffs each have been < LLtol=0.01
  Time breakdown: dgemm = 26.3%, memory/overhead = 73.7%
Computing predictions on left-out cross-validation fold
Time for computing predictions = 0.45387 sec

Average PVEs obtained by param pairs tested (high to low):
  f2=0.5, p=0.5: 0.003876
  f2=0.3, p=0.5: 0.003875
  f2=0.5, p=0.2: 0.003874
            ...
 f2=0.1, p=0.01: 0.003516

Detailed CV fold results:
  Absolute prediction MSE baseline (covariates only): 0.884599
  Absolute prediction MSE using standard LMM:         0.883686
  Absolute prediction MSE, fold-best  f2=0.5, p=0.5:  0.883686
    Absolute pred MSE using   f2=0.5, p=0.5: 0.883686
    Absolute pred MSE using   f2=0.5, p=0.2: 0.883690
    Absolute pred MSE using   f2=0.5, p=0.1: 0.883699
    Absolute pred MSE using  f2=0.5, p=0.05: 0.883717
    Absolute pred MSE using  f2=0.5, p=0.02: 0.883771
    Absolute pred MSE using  f2=0.5, p=0.01: 0.883859
    Absolute pred MSE using   f2=0.3, p=0.5: 0.883687
    Absolute pred MSE using   f2=0.3, p=0.2: 0.883697
    Absolute pred MSE using   f2=0.3, p=0.1: 0.883715
    Absolute pred MSE using  f2=0.3, p=0.05: 0.883752
    Absolute pred MSE using  f2=0.3, p=0.02: 0.883861
    Absolute pred MSE using  f2=0.3, p=0.01: 0.884043
    Absolute pred MSE using   f2=0.1, p=0.5: 0.883691
    Absolute pred MSE using   f2=0.1, p=0.2: 0.883709
    Absolute pred MSE using   f2=0.1, p=0.1: 0.883739
    Absolute pred MSE using  f2=0.1, p=0.05: 0.883800
    Absolute pred MSE using  f2=0.1, p=0.02: 0.883989
    Absolute pred MSE using  f2=0.1, p=0.01: 0.884320

====> End CV fold 2: 18 remaining param pair(s) <====

====> Starting CV fold 3 <====

    Using quantitative covariate: COV_1
    Using quantitative covariate: COV_2
    Using quantitative covariate: CONST_ALL_ONES
Number of individuals used in analysis: Nused = 304
Singular values of covariate matrix:
    S[0] = 32.9844
    S[1] = 4.5875
    S[2] = 0.882915
Total covariate vectors: C = 3
Total independent covariate vectors: Cindep = 3

=== Initializing Bolt object: projecting and normalizing SNPs ===
NOTE: Using all-1s vector (constant term) in addition to specified covariates
Number of chroms with >= 1 good SNP: 22
Average norm of projected SNPs:           299.590344
Dimension of all-1s proj space (Nused-1): 303
  Beginning variational Bayes
  iter 1:  time=1.10 for 18 active reps
  iter 2:  time=0.80 for 18 active reps  approxLL diffs: (34.43,42.51)
  iter 3:  time=0.80 for 18 active reps  approxLL diffs: (1.13,1.20)
  iter 4:  time=0.80 for 18 active reps  approxLL diffs: (0.13,0.17)
  iter 5:  time=0.80 for 18 active reps  approxLL diffs: (0.01,0.01)
  Converged at iter 5: approxLL diffs each have been < LLtol=0.01
  Time breakdown: dgemm = 26.4%, memory/overhead = 73.6%
Computing predictions on left-out cross-validation fold
Time for computing predictions = 0.454183 sec

Average PVEs obtained by param pairs tested (high to low):
  f2=0.5, p=0.5: 0.007021
  f2=0.3, p=0.5: 0.007020
  f2=0.5, p=0.2: 0.007016
            ...
 f2=0.1, p=0.01: 0.006257

Detailed CV fold results:
  Absolute prediction MSE baseline (covariates only): 0.83708
  Absolute prediction MSE using standard LMM:         0.825938
  Absolute prediction MSE, fold-best  f2=0.5, p=0.5:  0.825938
    Absolute pred MSE using   f2=0.5, p=0.5: 0.825938
    Absolute pred MSE using   f2=0.5, p=0.2: 0.825947
    Absolute pred MSE using   f2=0.5, p=0.1: 0.825969
    Absolute pred MSE using  f2=0.5, p=0.05: 0.826011
    Absolute pred MSE using  f2=0.5, p=0.02: 0.826137
    Absolute pred MSE using  f2=0.5, p=0.01: 0.826335
    Absolute pred MSE using   f2=0.3, p=0.5: 0.825940
    Absolute pred MSE using   f2=0.3, p=0.2: 0.825965
    Absolute pred MSE using   f2=0.3, p=0.1: 0.826007
    Absolute pred MSE using  f2=0.3, p=0.05: 0.826091
    Absolute pred MSE using  f2=0.3, p=0.02: 0.826336
    Absolute pred MSE using  f2=0.3, p=0.01: 0.826722
    Absolute pred MSE using   f2=0.1, p=0.5: 0.825949
    Absolute pred MSE using   f2=0.1, p=0.2: 0.825991
    Absolute pred MSE using   f2=0.1, p=0.1: 0.826062
    Absolute pred MSE using  f2=0.1, p=0.05: 0.826201
    Absolute pred MSE using  f2=0.1, p=0.02: 0.826608
    Absolute pred MSE using  f2=0.1, p=0.01: 0.827253

====> End CV fold 3: 18 remaining param pair(s) <====

====> Starting CV fold 4 <====

    Using quantitative covariate: COV_1
    Using quantitative covariate: COV_2
    Using quantitative covariate: CONST_ALL_ONES
Number of individuals used in analysis: Nused = 304
Singular values of covariate matrix:
    S[0] = 32.5614
    S[1] = 4.66474
    S[2] = 0.892081
Total covariate vectors: C = 3
Total independent covariate vectors: Cindep = 3

=== Initializing Bolt object: projecting and normalizing SNPs ===
NOTE: Using all-1s vector (constant term) in addition to specified covariates
Number of chroms with >= 1 good SNP: 22
Average norm of projected SNPs:           299.591407
Dimension of all-1s proj space (Nused-1): 303
  Beginning variational Bayes
  iter 1:  time=1.09 for 18 active reps
  iter 2:  time=0.80 for 18 active reps  approxLL diffs: (35.87,44.89)
  iter 3:  time=0.80 for 18 active reps  approxLL diffs: (1.18,1.29)
  iter 4:  time=0.80 for 18 active reps  approxLL diffs: (0.14,0.19)
  iter 5:  time=0.80 for 18 active reps  approxLL diffs: (0.01,0.01)
  iter 6:  time=0.19 for  1 active reps  approxLL diffs: (0.00,0.00)
  Converged at iter 6: approxLL diffs each have been < LLtol=0.01
  Time breakdown: dgemm = 25.8%, memory/overhead = 74.2%
Computing predictions on left-out cross-validation fold
Time for computing predictions = 0.454353 sec

Average PVEs obtained by param pairs tested (high to low):
  f2=0.5, p=0.5: 0.006447
  f2=0.3, p=0.5: 0.006445
  f2=0.5, p=0.2: 0.006440
            ...
 f2=0.1, p=0.01: 0.005512

Detailed CV fold results:
  Absolute prediction MSE baseline (covariates only): 0.721953
  Absolute prediction MSE using standard LMM:         0.718542
  Absolute prediction MSE, fold-best  f2=0.5, p=0.5:  0.718542
    Absolute pred MSE using   f2=0.5, p=0.5: 0.718542
    Absolute pred MSE using   f2=0.5, p=0.2: 0.718550
    Absolute pred MSE using   f2=0.5, p=0.1: 0.718565
    Absolute pred MSE using  f2=0.5, p=0.05: 0.718597
    Absolute pred MSE using  f2=0.5, p=0.02: 0.718689
    Absolute pred MSE using  f2=0.5, p=0.01: 0.718837
    Absolute pred MSE using   f2=0.3, p=0.5: 0.718544
    Absolute pred MSE using   f2=0.3, p=0.2: 0.718562
    Absolute pred MSE using   f2=0.3, p=0.1: 0.718594
    Absolute pred MSE using  f2=0.3, p=0.05: 0.718656
    Absolute pred MSE using  f2=0.3, p=0.02: 0.718840
    Absolute pred MSE using  f2=0.3, p=0.01: 0.719137
    Absolute pred MSE using   f2=0.1, p=0.5: 0.718551
    Absolute pred MSE using   f2=0.1, p=0.2: 0.718582
    Absolute pred MSE using   f2=0.1, p=0.1: 0.718634
    Absolute pred MSE using  f2=0.1, p=0.05: 0.718738
    Absolute pred MSE using  f2=0.1, p=0.02: 0.719050
    Absolute pred MSE using  f2=0.1, p=0.01: 0.719587

====> End CV fold 4: 18 remaining param pair(s) <====

====> Starting CV fold 5 <====

    Using quantitative covariate: COV_1
    Using quantitative covariate: COV_2
    Using quantitative covariate: CONST_ALL_ONES
Number of individuals used in analysis: Nused = 304
Singular values of covariate matrix:
    S[0] = 32.2773
    S[1] = 4.70983
    S[2] = 0.892752
Total covariate vectors: C = 3
Total independent covariate vectors: Cindep = 3

=== Initializing Bolt object: projecting and normalizing SNPs ===
NOTE: Using all-1s vector (constant term) in addition to specified covariates
Number of chroms with >= 1 good SNP: 22
Average norm of projected SNPs:           299.583395
Dimension of all-1s proj space (Nused-1): 303
  Beginning variational Bayes
  iter 1:  time=1.12 for 18 active reps
  iter 2:  time=0.80 for 18 active reps  approxLL diffs: (37.87,48.73)
  iter 3:  time=0.91 for 18 active reps  approxLL diffs: (1.23,1.38)
  iter 4:  time=0.90 for 18 active reps  approxLL diffs: (0.14,0.21)
  iter 5:  time=0.82 for 18 active reps  approxLL diffs: (0.01,0.01)
  iter 6:  time=0.19 for  1 active reps  approxLL diffs: (0.00,0.00)
  Converged at iter 6: approxLL diffs each have been < LLtol=0.01
  Time breakdown: dgemm = 25.6%, memory/overhead = 74.4%
Computing predictions on left-out cross-validation fold
Time for computing predictions = 0.453788 sec

Average PVEs obtained by param pairs tested (high to low):
  f2=0.5, p=0.5: 0.007272
  f2=0.3, p=0.5: 0.007270
  f2=0.5, p=0.2: 0.007265
            ...
 f2=0.1, p=0.01: 0.006266

Detailed CV fold results:
  Absolute prediction MSE baseline (covariates only): 0.550938
  Absolute prediction MSE using standard LMM:         0.545112
  Absolute prediction MSE, fold-best  f2=0.5, p=0.5:  0.545112
    Absolute pred MSE using   f2=0.5, p=0.5: 0.545112
    Absolute pred MSE using   f2=0.5, p=0.2: 0.545117
    Absolute pred MSE using   f2=0.5, p=0.1: 0.545127
    Absolute pred MSE using  f2=0.5, p=0.05: 0.545148
    Absolute pred MSE using  f2=0.5, p=0.02: 0.545211
    Absolute pred MSE using  f2=0.5, p=0.01: 0.545314
    Absolute pred MSE using   f2=0.3, p=0.5: 0.545114
    Absolute pred MSE using   f2=0.3, p=0.2: 0.545125
    Absolute pred MSE using   f2=0.3, p=0.1: 0.545146
    Absolute pred MSE using  f2=0.3, p=0.05: 0.545188
    Absolute pred MSE using  f2=0.3, p=0.02: 0.545312
    Absolute pred MSE using  f2=0.3, p=0.01: 0.545519
    Absolute pred MSE using   f2=0.1, p=0.5: 0.545118
    Absolute pred MSE using   f2=0.1, p=0.2: 0.545138
    Absolute pred MSE using   f2=0.1, p=0.1: 0.545173
    Absolute pred MSE using  f2=0.1, p=0.05: 0.545242
    Absolute pred MSE using  f2=0.1, p=0.02: 0.545452
    Absolute pred MSE using  f2=0.1, p=0.01: 0.545825

====> End CV fold 5: 18 remaining param pair(s) <====

Optimal mixture parameters according to CV: f2 = 0.5, p = 0.5

Time for estimating mixture parameters = 53.0219 sec

=== Computing Bayesian mixed model assoc stats with mixture prior ===

Assigning SNPs to 22 chunks for leave-out analysis
Each chunk is excluded when testing SNPs belonging to the chunk
  Beginning variational Bayes
  iter 1:  time=1.20 for 22 active reps
  iter 2:  time=0.91 for 22 active reps  approxLL diffs: (44.97,45.17)
  iter 3:  time=0.98 for 22 active reps  approxLL diffs: (1.48,1.49)
  iter 4:  time=1.00 for 22 active reps  approxLL diffs: (0.17,0.17)
  iter 5:  time=0.90 for 22 active reps  approxLL diffs: (0.01,0.01)
  Converged at iter 5: approxLL diffs each have been < LLtol=0.01
  Time breakdown: dgemm = 26.3%, memory/overhead = 73.7%
Filtering to SNPs with chisq stats, LD Scores, and MAF > 0.01
# of SNPs passing filters before outlier removal: 171753/172878
Masking windows around outlier snps (chisq > 20.0)
# of SNPs remaining after outlier window removal: 171753/171753
Intercept of LD Score regression for ref stats:   1.003 (0.005)
Estimated attenuation: 0.409 (0.534)
Intercept of LD Score regression for cur stats: 0.997 (0.005)
Calibration factor (ref/cur) to multiply by:      1.006 (0.000)

Time for computing Bayesian mixed model assoc stats = 5.7944 sec

Calibration stats: mean and lambdaGC (over SNPs used in GRM)
  (note that both should be >1 because of polygenicity)
Mean BOLT_LMM_INF: 1.0074 (172878 good SNPs)   lambdaGC: 1.01336
Mean BOLT_LMM: 1.00721 (172878 good SNPs)   lambdaGC: 1.01326

=== Streaming genotypes to compute and write assoc stats at all SNPs ===

Time for streaming genotypes and writing output = 1.77796 sec

Total elapsed time for analysis = 70.5482 sec
           SNP  CHR      BP    GENPOS ALLELE1 ALLELE0    A1FREQ  F_MISS  \
0   rs79373928    1  801536  0.587220       G       T  0.014474     0.0   
1    rs4970382    1  840753  0.620827       C       T  0.406579     0.0   
2   rs13303222    1  849998  0.620827       A       G  0.196053     0.0   
3   rs72631889    1  851390  0.620827       T       G  0.034210     0.0   
4  rs192998324    1  862772  0.620827       G       A  0.027632     0.0   

       BETA        SE  P_BOLT_LMM_INF  P_BOLT_LMM  
0  0.015560  0.258427           0.950       0.950  
1 -0.060199  0.059829           0.310       0.310  
2 -0.006768  0.078287           0.930       0.930  
3  0.315642  0.172246           0.067       0.067  
4 -0.227920  0.190562           0.230       0.230  
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/BOLT-LMM/train_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/train_data.QC.clumped.pruned
  --extract SampleData1/Fold_0/train_data.valid.snp
  --out SampleData1/Fold_0/BOLT-LMM/train_data
  --q-score-range SampleData1/Fold_0/range_list SampleData1/Fold_0/SNP.pvalue
  --score SampleData1/Fold_0/SampleData1.lmmForceNonInf_stat 1 2 3 header

63761 MB RAM detected; reserving 31880 MB for main workspace.
172878 variants loaded from .bim file.
380 people (183 males, 197 females) loaded from .fam.
380 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 380 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 380 people pass filters and QC.
Phenotype data is quantitative.
--score: 172878 valid predictors loaded.
Warning: 326740 lines skipped in --q-score-range data file.
--score: 20 ranges processed.
Results written to SampleData1/Fold_0/BOLT-LMM/train_data.*.profile.
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/BOLT-LMM/test_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/test_data.clumped.pruned
  --extract SampleData1/Fold_0/train_data.valid.snp
  --out SampleData1/Fold_0/BOLT-LMM/test_data
  --q-score-range SampleData1/Fold_0/range_list SampleData1/Fold_0/SNP.pvalue
  --score SampleData1/Fold_0/SampleData1.lmmForceNonInf_stat 1 2 3 header

63761 MB RAM detected; reserving 31880 MB for main workspace.
172878 variants loaded from .bim file.
95 people (44 males, 51 females) loaded from .fam.
95 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 95 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 95 people pass filters and QC.
Phenotype data is quantitative.
--score: 172878 valid predictors loaded.
Warning: 326740 lines skipped in --q-score-range data file.
--score: 20 ranges processed.
Results written to SampleData1/Fold_0/BOLT-LMM/test_data.*.profile.
Continuous Phenotype!

Repeat the process for each fold.#

Change the foldnumber variable to select the fold to process:

#foldnumber = sys.argv[1]
foldnumber = "0"  # Setting 'foldnumber' to "0"

Or uncomment the following line so that the fold number is read from the command line, and then execute the script once per fold:

# foldnumber = sys.argv[1]
python BOLT-LMM.py 0
python BOLT-LMM.py 1
python BOLT-LMM.py 2
python BOLT-LMM.py 3
python BOLT-LMM.py 4
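
To avoid launching each fold by hand, a small driver loop can run all five folds sequentially. This is a minimal sketch, assuming BOLT-LMM.py reads the fold number from sys.argv as shown above:

import subprocess

# Run the pipeline for folds 0-4, one after another; stop on the first failure.
for fold in range(5):
    subprocess.run(["python", "BOLT-LMM.py", str(fold)], check=True)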

The following files should exist after the execution:

  1. SampleData1/Fold_0/BOLT-LMM/Results.csv

  2. SampleData1/Fold_1/BOLT-LMM/Results.csv

  3. SampleData1/Fold_2/BOLT-LMM/Results.csv

  4. SampleData1/Fold_3/BOLT-LMM/Results.csv

  5. SampleData1/Fold_4/BOLT-LMM/Results.csv

Check the results file for each fold.#

import os
import pandas as pd

# Check whether the results file exists for each of the five folds.
for loop in range(0, 5):
    # Build the path to this fold's results file.
    file_path = os.path.join(filedirec, "Fold_" + str(loop), result_directory, "Results.csv")
    if os.path.exists(file_path):
        temp = pd.read_csv(file_path)
        print("Fold_", loop, "Yes, the file exists.")
        #print(temp.tail())
        print("Number of P-values processed: ", len(temp))
    else:
        # Print a message indicating that the file does not exist
        print("Fold_", loop, "No, the file does not exist.")
Fold_ 0 Yes, the file exists.
Number of P-values processed:  60
Fold_ 1 No, the file does not exist.
Fold_ 2 Yes, the file exists.
Number of P-values processed:  60
Fold_ 3 Yes, the file exists.
Number of P-values processed:  60
Fold_ 4 Yes, the file exists.
Number of P-values processed:  60

Sum the results for each fold.#

print("We have to ensure when we sum the entries across all Folds, the same rows are merged!")

def sum_and_average_columns(data_frames):
    """Sum and average numerical columns across multiple DataFrames, and keep non-numerical columns unchanged."""
    # Initialize DataFrame to store the summed results for numerical columns
    summed_df = pd.DataFrame()
    non_numerical_df = pd.DataFrame()
    
    for df in data_frames:
        # Identify numerical and non-numerical columns
        numerical_cols = df.select_dtypes(include=[np.number]).columns
        non_numerical_cols = df.select_dtypes(exclude=[np.number]).columns
        
        # Sum numerical columns
        if summed_df.empty:
            summed_df = pd.DataFrame(0, index=range(len(df)), columns=numerical_cols)
        
        summed_df[numerical_cols] = summed_df[numerical_cols].add(df[numerical_cols], fill_value=0)
        
        # Keep non-numerical columns (take the first non-numerical entry for each column)
        if non_numerical_df.empty:
            non_numerical_df = df[non_numerical_cols]
        else:
            non_numerical_df[non_numerical_cols] = non_numerical_df[non_numerical_cols].combine_first(df[non_numerical_cols])
    
    # Divide the summed values by the number of dataframes to get the average
    averaged_df = summed_df / len(data_frames)
    
    # Combine numerical and non-numerical DataFrames
    result_df = pd.concat([averaged_df, non_numerical_df], axis=1)
    
    return result_df
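
As a quick sanity check, the following toy example (hypothetical values) confirms that numeric columns are averaged across DataFrames while the shared string column is carried through unchanged:

import pandas as pd

d1 = pd.DataFrame({"Test_best_model": [0.10, 0.20], "BOLTmodel": ["lmm", "lmm"]})
d2 = pd.DataFrame({"Test_best_model": [0.30, 0.40], "BOLTmodel": ["lmm", "lmm"]})

# Numeric column -> element-wise average [0.2, 0.3]; string column -> unchanged.
print(sum_and_average_columns([d1, d2]))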

import os
import pandas as pd

def find_common_rows(allfoldsframe):
    # Define the performance columns that need to be excluded
    performance_columns = [
        'Train_null_model', 'Train_pure_prs', 'Train_best_model',
        'Test_pure_prs', 'Test_null_model', 'Test_best_model'
    ]
    important_columns = [
        'clump_p1',
        'clump_r2',
        'clump_kb',
        'p_window_size',
        'p_slide_size',
        'p_LD_threshold',
        'pvalue',
        'referencepanel',
        'PRSice-2_Model',
        'BOLTmodel',
        'numberofpca',
        'tempalpha',
        'l1weight',
    ]
    # Function to remove performance columns from a DataFrame
    def drop_performance_columns(df):
        return df.drop(columns=performance_columns, errors='ignore')
    
    def get_important_columns(df):
        existing_columns = [col for col in important_columns if col in df.columns]
        if existing_columns:
            return df[existing_columns].copy()
        else:
            return pd.DataFrame()

    # Drop performance columns from all DataFrames in the list
    allfoldsframe_dropped = [drop_performance_columns(df) for df in allfoldsframe]
    
    # Get the important columns.
    allfoldsframe_dropped = [get_important_columns(df) for df in allfoldsframe_dropped]    
    
    # Iteratively find common rows and track unique and common rows
    common_rows = allfoldsframe_dropped[0]
    for i in range(1, len(allfoldsframe_dropped)):
        # Get the next DataFrame
        next_df = allfoldsframe_dropped[i]

        # Count unique rows in the current DataFrame and the next DataFrame
        unique_in_common = common_rows.shape[0]
        unique_in_next = next_df.shape[0]

        # Find common rows between the current common_rows and the next DataFrame
        common_rows = pd.merge(common_rows, next_df, how='inner')
    
        # Count the common rows after merging
        common_count = common_rows.shape[0]

        # Print the unique and common row counts
        print(f"Iteration {i}:")
        print(f"Unique rows in current common DataFrame: {unique_in_common}")
        print(f"Unique rows in next DataFrame: {unique_in_next}")
        print(f"Common rows after merge: {common_count}\n")
    # Now that we have the common rows, extract these from the original DataFrames
 
    extracted_common_rows_frames = []
    for original_df in allfoldsframe:
        # Merge the common rows with the original DataFrame, keeping only the rows that match the common rows
        extracted_common_rows = pd.merge(common_rows, original_df, how='inner', on=common_rows.columns.tolist())
        
        # Add the DataFrame with the extracted common rows to the list
        extracted_common_rows_frames.append(extracted_common_rows)

    # Print the number of rows in the common DataFrames
    for i, df in enumerate(extracted_common_rows_frames):
        print(f"DataFrame {i + 1} with extracted common rows has {df.shape[0]} rows.")

    # Return the list of DataFrames with extracted common rows
    return extracted_common_rows_frames
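
The inner merge above keeps only the hyperparameter combinations present in every fold. A toy illustration with hypothetical values:

import pandas as pd

df_a = pd.DataFrame({"pvalue": [1e-10, 1e-05, 1.0], "BOLTmodel": ["lmm"] * 3})
df_b = pd.DataFrame({"pvalue": [1e-05, 1.0], "BOLTmodel": ["lmm"] * 2})

# Merging on all shared columns (pvalue, BOLTmodel) drops the row
# that appears only in df_a; the two common rows remain.
print(pd.merge(df_a, df_b, how="inner"))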



# Example usage (assuming allfoldsframe is populated as shown earlier):
allfoldsframe = []

# Loop through each file name in the list
for loop in range(0, 5):
    # Check if the file exists in the specified directory for the given fold
    file_path = os.path.join(filedirec, "Fold_" + str(loop), result_directory, "Results.csv")
    if os.path.exists(file_path):
        allfoldsframe.append(pd.read_csv(file_path))
        # Print a message indicating that the file exists
        print("Fold_", loop, "Yes, the file exists.")
    else:
        # Print a message indicating that the file does not exist
        print("Fold_", loop, "No, the file does not exist.")

# Find the common rows across all folds and return the list of extracted common rows
extracted_common_rows_list = find_common_rows(allfoldsframe)
 
# Sum the values column-wise.
# String values are not summed; they are identical across folds.
# Only the numeric values are summed and then averaged.

divided_result = sum_and_average_columns(extracted_common_rows_list)
  
print(divided_result)

 
We have to ensure that, when we sum the entries across all folds, the same rows are merged!
Fold_ 0 Yes, the file exists.
Fold_ 1 No, the file does not exist.
Fold_ 2 No, the file does not exist.
Fold_ 3 No, the file does not exist.
Fold_ 4 No, the file does not exist.
DataFrame 1 with extracted common rows has 60 rows.
    clump_p1  clump_r2  clump_kb  p_window_size  p_slide_size  p_LD_threshold  \
0        1.0       0.1     200.0          200.0          50.0            0.25   
1        1.0       0.1     200.0          200.0          50.0            0.25   
2        1.0       0.1     200.0          200.0          50.0            0.25   
3        1.0       0.1     200.0          200.0          50.0            0.25   
4        1.0       0.1     200.0          200.0          50.0            0.25   
5        1.0       0.1     200.0          200.0          50.0            0.25   
6        1.0       0.1     200.0          200.0          50.0            0.25   
7        1.0       0.1     200.0          200.0          50.0            0.25   
8        1.0       0.1     200.0          200.0          50.0            0.25   
9        1.0       0.1     200.0          200.0          50.0            0.25   
10       1.0       0.1     200.0          200.0          50.0            0.25   
11       1.0       0.1     200.0          200.0          50.0            0.25   
12       1.0       0.1     200.0          200.0          50.0            0.25   
13       1.0       0.1     200.0          200.0          50.0            0.25   
14       1.0       0.1     200.0          200.0          50.0            0.25   
15       1.0       0.1     200.0          200.0          50.0            0.25   
16       1.0       0.1     200.0          200.0          50.0            0.25   
17       1.0       0.1     200.0          200.0          50.0            0.25   
18       1.0       0.1     200.0          200.0          50.0            0.25   
19       1.0       0.1     200.0          200.0          50.0            0.25   
20       1.0       0.1     200.0          200.0          50.0            0.25   
21       1.0       0.1     200.0          200.0          50.0            0.25   
22       1.0       0.1     200.0          200.0          50.0            0.25   
23       1.0       0.1     200.0          200.0          50.0            0.25   
24       1.0       0.1     200.0          200.0          50.0            0.25   
25       1.0       0.1     200.0          200.0          50.0            0.25   
26       1.0       0.1     200.0          200.0          50.0            0.25   
27       1.0       0.1     200.0          200.0          50.0            0.25   
28       1.0       0.1     200.0          200.0          50.0            0.25   
29       1.0       0.1     200.0          200.0          50.0            0.25   
30       1.0       0.1     200.0          200.0          50.0            0.25   
31       1.0       0.1     200.0          200.0          50.0            0.25   
32       1.0       0.1     200.0          200.0          50.0            0.25   
33       1.0       0.1     200.0          200.0          50.0            0.25   
34       1.0       0.1     200.0          200.0          50.0            0.25   
35       1.0       0.1     200.0          200.0          50.0            0.25   
36       1.0       0.1     200.0          200.0          50.0            0.25   
37       1.0       0.1     200.0          200.0          50.0            0.25   
38       1.0       0.1     200.0          200.0          50.0            0.25   
39       1.0       0.1     200.0          200.0          50.0            0.25   
40       1.0       0.1     200.0          200.0          50.0            0.25   
41       1.0       0.1     200.0          200.0          50.0            0.25   
42       1.0       0.1     200.0          200.0          50.0            0.25   
43       1.0       0.1     200.0          200.0          50.0            0.25   
44       1.0       0.1     200.0          200.0          50.0            0.25   
45       1.0       0.1     200.0          200.0          50.0            0.25   
46       1.0       0.1     200.0          200.0          50.0            0.25   
47       1.0       0.1     200.0          200.0          50.0            0.25   
48       1.0       0.1     200.0          200.0          50.0            0.25   
49       1.0       0.1     200.0          200.0          50.0            0.25   
50       1.0       0.1     200.0          200.0          50.0            0.25   
51       1.0       0.1     200.0          200.0          50.0            0.25   
52       1.0       0.1     200.0          200.0          50.0            0.25   
53       1.0       0.1     200.0          200.0          50.0            0.25   
54       1.0       0.1     200.0          200.0          50.0            0.25   
55       1.0       0.1     200.0          200.0          50.0            0.25   
56       1.0       0.1     200.0          200.0          50.0            0.25   
57       1.0       0.1     200.0          200.0          50.0            0.25   
58       1.0       0.1     200.0          200.0          50.0            0.25   
59       1.0       0.1     200.0          200.0          50.0            0.25   

          pvalue  numberofpca  tempalpha  l1weight  numberofvariants  \
0   1.000000e-10          6.0        0.1       0.1               0.0   
1   3.359818e-10          6.0        0.1       0.1               0.0   
2   1.128838e-09          6.0        0.1       0.1               0.0   
3   3.792690e-09          6.0        0.1       0.1               0.0   
4   1.274275e-08          6.0        0.1       0.1               0.0   
5   4.281332e-08          6.0        0.1       0.1               0.0   
6   1.438450e-07          6.0        0.1       0.1               0.0   
7   4.832930e-07          6.0        0.1       0.1               0.0   
8   1.623777e-06          6.0        0.1       0.1               0.0   
9   5.455595e-06          6.0        0.1       0.1               0.0   
10  1.832981e-05          6.0        0.1       0.1               0.0   
11  6.158482e-05          6.0        0.1       0.1               0.0   
12  2.069138e-04          6.0        0.1       0.1               0.0   
13  6.951928e-04          6.0        0.1       0.1               0.0   
14  2.335721e-03          6.0        0.1       0.1               0.0   
15  7.847600e-03          6.0        0.1       0.1               0.0   
16  2.636651e-02          6.0        0.1       0.1               0.0   
17  8.858668e-02          6.0        0.1       0.1               0.0   
18  2.976351e-01          6.0        0.1       0.1               0.0   
19  1.000000e+00          6.0        0.1       0.1               0.0   
20  1.000000e-10          6.0        0.1       0.1               0.0   
21  3.359818e-10          6.0        0.1       0.1               0.0   
22  1.128838e-09          6.0        0.1       0.1               0.0   
23  3.792690e-09          6.0        0.1       0.1               0.0   
24  1.274275e-08          6.0        0.1       0.1               0.0   
25  4.281332e-08          6.0        0.1       0.1               0.0   
26  1.438450e-07          6.0        0.1       0.1               0.0   
27  4.832930e-07          6.0        0.1       0.1               0.0   
28  1.623777e-06          6.0        0.1       0.1               0.0   
29  5.455595e-06          6.0        0.1       0.1               0.0   
30  1.832981e-05          6.0        0.1       0.1               0.0   
31  6.158482e-05          6.0        0.1       0.1               0.0   
32  2.069138e-04          6.0        0.1       0.1               0.0   
33  6.951928e-04          6.0        0.1       0.1               0.0   
34  2.335721e-03          6.0        0.1       0.1               0.0   
35  7.847600e-03          6.0        0.1       0.1               0.0   
36  2.636651e-02          6.0        0.1       0.1               0.0   
37  8.858668e-02          6.0        0.1       0.1               0.0   
38  2.976351e-01          6.0        0.1       0.1               0.0   
39  1.000000e+00          6.0        0.1       0.1               0.0   
40  1.000000e-10          6.0        0.1       0.1               0.0   
41  3.359818e-10          6.0        0.1       0.1               0.0   
42  1.128838e-09          6.0        0.1       0.1               0.0   
43  3.792690e-09          6.0        0.1       0.1               0.0   
44  1.274275e-08          6.0        0.1       0.1               0.0   
45  4.281332e-08          6.0        0.1       0.1               0.0   
46  1.438450e-07          6.0        0.1       0.1               0.0   
47  4.832930e-07          6.0        0.1       0.1               0.0   
48  1.623777e-06          6.0        0.1       0.1               0.0   
49  5.455595e-06          6.0        0.1       0.1               0.0   
50  1.832981e-05          6.0        0.1       0.1               0.0   
51  6.158482e-05          6.0        0.1       0.1               0.0   
52  2.069138e-04          6.0        0.1       0.1               0.0   
53  6.951928e-04          6.0        0.1       0.1               0.0   
54  2.335721e-03          6.0        0.1       0.1               0.0   
55  7.847600e-03          6.0        0.1       0.1               0.0   
56  2.636651e-02          6.0        0.1       0.1               0.0   
57  8.858668e-02          6.0        0.1       0.1               0.0   
58  2.976351e-01          6.0        0.1       0.1               0.0   
59  1.000000e+00          6.0        0.1       0.1               0.0   

    Train_pure_prs  Train_null_model  Train_best_model  Test_pure_prs  \
0         0.002293          0.227477          0.602341   2.369136e-04   
1         0.002266          0.227477          0.638998   2.858599e-04   
2         0.002376          0.227477          0.697855   3.838787e-04   
3         0.002409          0.227477          0.734995   2.228255e-04   
4         0.002395          0.227477          0.776750   6.394029e-05   
5         0.002321          0.227477          0.812544   6.596809e-05   
6         0.002306          0.227477          0.835596   1.222748e-04   
7         0.002205          0.227477          0.864166   1.204769e-04   
8         0.002158          0.227477          0.883590   9.661765e-05   
9         0.002167          0.227477          0.909570   5.687403e-05   
10        0.002181          0.227477          0.933095   4.772806e-05   
11        0.002209          0.227477          0.947256   7.291404e-05   
12        0.002186          0.227477          0.961601   8.539601e-07   
13        0.002202          0.227477          0.975360  -3.215678e-05   
14        0.002203          0.227477          0.982642  -1.351810e-05   
15        0.002183          0.227477          0.989179  -6.292480e-07   
16        0.002147          0.227477          0.993029   1.580836e-05   
17        0.002146          0.227477          0.995876   2.640523e-05   
18        0.002128          0.227477          0.997875   1.470785e-05   
19        0.002117          0.227477          0.999334   1.192114e-05   
20        0.002293          0.227477          0.602341   2.369136e-04   
21        0.002266          0.227477          0.638998   2.858599e-04   
22        0.002376          0.227477          0.697855   3.838787e-04   
23        0.002409          0.227477          0.734995   2.228255e-04   
24        0.002395          0.227477          0.776750   6.394029e-05   
25        0.002321          0.227477          0.812544   6.596809e-05   
26        0.002306          0.227477          0.835596   1.222748e-04   
27        0.002205          0.227477          0.864166   1.204769e-04   
28        0.002158          0.227477          0.883590   9.661765e-05   
29        0.002167          0.227477          0.909570   5.687403e-05   
30        0.002181          0.227477          0.933095   4.772806e-05   
31        0.002209          0.227477          0.947256   7.291404e-05   
32        0.002186          0.227477          0.961601   8.539601e-07   
33        0.002202          0.227477          0.975360  -3.215678e-05   
34        0.002203          0.227477          0.982642  -1.351810e-05   
35        0.002183          0.227477          0.989179  -6.292480e-07   
36        0.002147          0.227477          0.993029   1.580836e-05   
37        0.002146          0.227477          0.995876   2.640523e-05   
38        0.002128          0.227477          0.997875   1.470785e-05   
39        0.002117          0.227477          0.999334   1.192114e-05   
40        0.002293          0.227477          0.602341   2.369136e-04   
41        0.002266          0.227477          0.638998   2.858599e-04   
42        0.002376          0.227477          0.697855   3.838787e-04   
43        0.002409          0.227477          0.734995   2.228255e-04   
44        0.002395          0.227477          0.776750   6.394029e-05   
45        0.002321          0.227477          0.812544   6.596809e-05   
46        0.002306          0.227477          0.835596   1.222748e-04   
47        0.002205          0.227477          0.864166   1.204769e-04   
48        0.002158          0.227477          0.883590   9.661765e-05   
49        0.002167          0.227477          0.909570   5.687403e-05   
50        0.002181          0.227477          0.933095   4.772806e-05   
51        0.002209          0.227477          0.947256   7.291404e-05   
52        0.002186          0.227477          0.961601   8.539601e-07   
53        0.002202          0.227477          0.975360  -3.215678e-05   
54        0.002203          0.227477          0.982642  -1.351810e-05   
55        0.002183          0.227477          0.989179  -6.292480e-07   
56        0.002147          0.227477          0.993029   1.580836e-05   
57        0.002146          0.227477          0.995876   2.640523e-05   
58        0.002128          0.227477          0.997875   1.470785e-05   
59        0.002117          0.227477          0.999334   1.192114e-05   

    Test_null_model  Test_best_model       BOLTmodel  
0           0.14297         0.181276             lmm  
1           0.14297         0.199960             lmm  
2           0.14297         0.209241             lmm  
3           0.14297         0.158463             lmm  
4           0.14297         0.103519             lmm  
5           0.14297         0.126593             lmm  
6           0.14297         0.142460             lmm  
7           0.14297         0.166682             lmm  
8           0.14297         0.217751             lmm  
9           0.14297         0.185704             lmm  
10          0.14297         0.182166             lmm  
11          0.14297         0.184267             lmm  
12          0.14297         0.162743             lmm  
13          0.14297         0.149901             lmm  
14          0.14297         0.171236             lmm  
15          0.14297         0.191330             lmm  
16          0.14297         0.199744             lmm  
17          0.14297         0.216609             lmm  
18          0.14297         0.207610             lmm  
19          0.14297         0.210739             lmm  
20          0.14297         0.181276      lmmInfOnly  
21          0.14297         0.199960      lmmInfOnly  
22          0.14297         0.209241      lmmInfOnly  
23          0.14297         0.158463      lmmInfOnly  
24          0.14297         0.103519      lmmInfOnly  
25          0.14297         0.126593      lmmInfOnly  
26          0.14297         0.142460      lmmInfOnly  
27          0.14297         0.166682      lmmInfOnly  
28          0.14297         0.217751      lmmInfOnly  
29          0.14297         0.185704      lmmInfOnly  
30          0.14297         0.182166      lmmInfOnly  
31          0.14297         0.184267      lmmInfOnly  
32          0.14297         0.162743      lmmInfOnly  
33          0.14297         0.149901      lmmInfOnly  
34          0.14297         0.171236      lmmInfOnly  
35          0.14297         0.191330      lmmInfOnly  
36          0.14297         0.199744      lmmInfOnly  
37          0.14297         0.216609      lmmInfOnly  
38          0.14297         0.207610      lmmInfOnly  
39          0.14297         0.210739      lmmInfOnly  
40          0.14297         0.181276  lmmForceNonInf  
41          0.14297         0.199960  lmmForceNonInf  
42          0.14297         0.209241  lmmForceNonInf  
43          0.14297         0.158463  lmmForceNonInf  
44          0.14297         0.103519  lmmForceNonInf  
45          0.14297         0.126593  lmmForceNonInf  
46          0.14297         0.142460  lmmForceNonInf  
47          0.14297         0.166682  lmmForceNonInf  
48          0.14297         0.217751  lmmForceNonInf  
49          0.14297         0.185704  lmmForceNonInf  
50          0.14297         0.182166  lmmForceNonInf  
51          0.14297         0.184267  lmmForceNonInf  
52          0.14297         0.162743  lmmForceNonInf  
53          0.14297         0.149901  lmmForceNonInf  
54          0.14297         0.171236  lmmForceNonInf  
55          0.14297         0.191330  lmmForceNonInf  
56          0.14297         0.199744  lmmForceNonInf  
57          0.14297         0.216609  lmmForceNonInf  
58          0.14297         0.207610  lmmForceNonInf  
59          0.14297         0.210739  lmmForceNonInf  

Results#

1. Reporting Based on Best Training Performance:#

  • One can report results based on the best training performance: for the hyperparameter combination with the highest training performance, report the corresponding test performance.

  • Example code:

    df = divided_result.sort_values(by='Train_best_model', ascending=False)
    print(df.iloc[0].to_markdown())
    

Binary Phenotypes Result Analysis#

You can assess the performance quality for binary phenotypes using the following template:

[Figure: PerformanceBinary — the eight result scenarios for binary phenotypes]

This figure shows the 8 different scenarios that can exist in the results, and the following table explains each scenario.

We classified performance based on the following table:

| Performance Level    | Range      |
|:---------------------|:-----------|
| Low Performance      | 0 to 0.5   |
| Moderate Performance | 0.6 to 0.7 |
| High Performance     | 0.8 to 1   |

You can match the performance based on the following scenarios:

| Scenario | What’s Happening | Implication |
|:---------|:-----------------|:------------|
| High Test, High Train | The model performs well on both training and test datasets, effectively learning the underlying patterns. | The model is well-tuned, generalizes well, and makes accurate predictions on both datasets. |
| High Test, Moderate Train | The model generalizes well but may not be fully optimized on training data, missing some underlying patterns. | The model is fairly robust but may benefit from further tuning or more training to improve its learning. |
| High Test, Low Train | An unusual scenario, potentially indicating data leakage or overestimation of test performance. | The model’s performance is likely unreliable; investigate potential data issues or random noise. |
| Moderate Test, High Train | The model fits the training data well but doesn’t generalize as effectively, capturing only some test patterns. | The model is slightly overfitting; adjustments may be needed to improve generalization on unseen data. |
| Moderate Test, Moderate Train | The model shows balanced but moderate performance on both datasets, capturing some patterns but missing others. | The model is moderately fitting; further improvements could be made in both training and generalization. |
| Moderate Test, Low Train | The model underperforms on training data and doesn’t generalize well, leading to moderate test performance. | The model may need more complexity, additional features, or better training to improve on both datasets. |
| Low Test, High Train | The model overfits the training data, performing poorly on the test set. | The model doesn’t generalize well; simplifying the model or using regularization may help reduce overfitting. |
| Low Test, Low Train | The model performs poorly on both training and test datasets, failing to learn the data patterns effectively. | The model is underfitting; it may need more complexity, additional features, or more data to improve performance. |

Recommendations for Publishing Results#

When publishing results, scenarios with moderate train and moderate test performance are acceptable for complex phenotypes or diseases. However, results showing high train and moderate test, high train and high test, or moderate train and high test performance are preferred.

For most phenotypes, results typically fall in the moderate train and moderate test performance category.
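
As a quick way to place each result row into one of these scenarios, the sketch below maps the train and test metrics to the performance levels in the table above. It assumes `divided_result` is the results DataFrame produced earlier; the `classify` helper and its cutoff values (0.6 and 0.8, approximating the ranges above) are illustrative, not part of the pipeline:

    def classify(value, low=0.6, high=0.8):
        # Map a metric to the performance levels in the table above
        # (cutoffs approximate the 0-0.5 / 0.6-0.7 / 0.8-1 ranges)
        if value < low:
            return "Low"
        elif value < high:
            return "Moderate"
        return "High"

    df = divided_result.copy()
    df["Scenario"] = (df["Test_best_model"].apply(classify) + " Test, "
                      + df["Train_best_model"].apply(classify) + " Train")
    print(df["Scenario"].value_counts())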

Continuous Phenotypes Result Analysis#

You can assess the performance quality for continuous phenotypes using the following template:

[Figure: PerformanceContinuous — the eight result scenarios for continuous phenotypes]

This figure shows the 8 different scenarios that can exist in the results, and the following table explains each scenario.

We classified performance based on the following table:

| Performance Level    | Range      |
|:---------------------|:-----------|
| Low Performance      | 0 to 0.2   |
| Moderate Performance | 0.3 to 0.7 |
| High Performance     | 0.8 to 1   |

You can match the performance based on the following scenarios:

| Scenario | What’s Happening | Implication |
|:---------|:-----------------|:------------|
| High Test, High Train | The model performs well on both training and test datasets, effectively learning the underlying patterns. | The model is well-tuned, generalizes well, and makes accurate predictions on both datasets. |
| High Test, Moderate Train | The model generalizes well but may not be fully optimized on training data, missing some underlying patterns. | The model is fairly robust but may benefit from further tuning or more training to improve its learning. |
| High Test, Low Train | An unusual scenario, potentially indicating data leakage or overestimation of test performance. | The model’s performance is likely unreliable; investigate potential data issues or random noise. |
| Moderate Test, High Train | The model fits the training data well but doesn’t generalize as effectively, capturing only some test patterns. | The model is slightly overfitting; adjustments may be needed to improve generalization on unseen data. |
| Moderate Test, Moderate Train | The model shows balanced but moderate performance on both datasets, capturing some patterns but missing others. | The model is moderately fitting; further improvements could be made in both training and generalization. |
| Moderate Test, Low Train | The model underperforms on training data and doesn’t generalize well, leading to moderate test performance. | The model may need more complexity, additional features, or better training to improve on both datasets. |
| Low Test, High Train | The model overfits the training data, performing poorly on the test set. | The model doesn’t generalize well; simplifying the model or using regularization may help reduce overfitting. |
| Low Test, Low Train | The model performs poorly on both training and test datasets, failing to learn the data patterns effectively. | The model is underfitting; it may need more complexity, additional features, or more data to improve performance. |

Recommendations for Publishing Results#

When publishing results, scenarios with moderate train and moderate test performance are acceptable for complex phenotypes or diseases. However, results showing high train and moderate test, high train and high test, or moderate train and high test performance are preferred.

For most continuous phenotypes, results typically fall in the moderate train and moderate test performance category.
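
The same hypothetical `classify` helper from the binary section can be reused here by passing the continuous cutoffs (0.3 and 0.8, approximating the ranges above):

    df["Scenario"] = (df["Test_best_model"].apply(classify, low=0.3, high=0.8) + " Test, "
                      + df["Train_best_model"].apply(classify, low=0.3, high=0.8) + " Train")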

2. Reporting Generalized Performance:#

  • One can also report generalized performance by calculating the difference between the training and test performance and the sum of the two. Report the result or hyperparameter combination for which the sum is high and the difference is minimal.

  • Example code:

    df = divided_result.copy()
    df['Difference'] = abs(df['Train_best_model'] - df['Test_best_model'])
    df['Sum'] = df['Train_best_model'] + df['Test_best_model']
    
    sorted_df = df.sort_values(by=['Sum', 'Difference'], ascending=[False, True])
    print(sorted_df.iloc[0].to_markdown())
    

3. Reporting Hyperparameters Affecting Test and Train Performance:#

  • Find the hyperparameters that have more than one unique value and calculate their correlation with the following columns to understand how they affect the performance of the train and test sets (a minimal sketch follows this list):

    • Train_null_model

    • Train_pure_prs

    • Train_best_model

    • Test_pure_prs

    • Test_null_model

    • Test_best_model
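
A minimal sketch of this check, assuming `divided_result` is the results DataFrame; the full version, including one-hot encoding of string hyperparameters such as `BOLTmodel`, appears in the code cell below:

    correlation_columns = [
        'Train_null_model', 'Train_pure_prs', 'Train_best_model',
        'Test_pure_prs', 'Test_null_model', 'Test_best_model'
    ]
    # Keep only numeric hyperparameters with more than one unique value
    numeric = divided_result.select_dtypes(include='number')
    varying = [c for c in numeric.columns
               if c not in correlation_columns and numeric[c].nunique() > 1]
    # Correlation of each varying hyperparameter with the performance columns
    print(numeric[varying + correlation_columns].corr().loc[varying, correlation_columns])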

4. Other Analysis#

  1. Once you have the results, you can examine how the hyperparameters affect model performance.

  2. Analyses such as checks for overfitting and underfitting can be performed as well.

  3. How you report the results can vary.

  4. Results can be visualized, and other patterns in the data can be explored.

import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib notebook

df = divided_result.sort_values(by='Train_best_model', ascending=False)
print("1. Reporting Based on Best Training Performance:\n")
print(df.iloc[0].to_markdown())


 
df = divided_result.copy()

# Plot Train and Test best models against p-values
plt.figure(figsize=(10, 6))
plt.plot(df['pvalue'], df['Train_best_model'], label='Train_best_model', marker='o', color='royalblue')
plt.plot(df['pvalue'], df['Test_best_model'], label='Test_best_model', marker='o', color='darkorange')

# Highlight the p-value with the best training performance
best_index = df['Train_best_model'].idxmax()
best_pvalue = df.loc[best_index, 'pvalue']
best_train = df.loc[best_index, 'Train_best_model']
best_test = df.loc[best_index, 'Test_best_model']

# Use dark colors for the circles
plt.scatter(best_pvalue, best_train, color='darkred', s=100, label=f'Best Performance (Train)', edgecolor='black', zorder=5)
plt.scatter(best_pvalue, best_test, color='darkblue', s=100, label=f'Best Performance (Test)', edgecolor='black', zorder=5)

# Annotate the best performance with p-value, train, and test values
plt.text(best_pvalue, best_train, f'p={best_pvalue:.4g}\nTrain={best_train:.4g}', ha='right', va='bottom', fontsize=9, color='darkred')
plt.text(best_pvalue, best_test, f'p={best_pvalue:.4g}\nTest={best_test:.4g}', ha='right', va='top', fontsize=9, color='darkblue')

# Calculate Difference and Sum
df['Difference'] = abs(df['Train_best_model'] - df['Test_best_model'])
df['Sum'] = df['Train_best_model'] + df['Test_best_model']

# Sort the DataFrame
sorted_df = df.sort_values(by=['Sum', 'Difference'], ascending=[False, True])
#sorted_df = df.sort_values(by=[ 'Difference','Sum'], ascending=[  True,False])

# Highlight the general performance
general_index = sorted_df.index[0]
general_pvalue = sorted_df.loc[general_index, 'pvalue']
general_train = sorted_df.loc[general_index, 'Train_best_model']
general_test = sorted_df.loc[general_index, 'Test_best_model']

plt.scatter(general_pvalue, general_train, color='darkgreen', s=150, label='General Performance (Train)', edgecolor='black', zorder=6)
plt.scatter(general_pvalue, general_test, color='darkorange', s=150, label='General Performance (Test)', edgecolor='black', zorder=6)

# Annotate the general performance with p-value, train, and test values
plt.text(general_pvalue, general_train, f'p={general_pvalue:.4g}\nTrain={general_train:.4g}', ha='left', va='bottom', fontsize=9, color='darkgreen')
plt.text(general_pvalue, general_test, f'p={general_pvalue:.4g}\nTest={general_test:.4g}', ha='left', va='top', fontsize=9, color='darkorange')

# Add labels and legend
plt.xlabel('p-value')
plt.ylabel('Model Performance')
plt.title('Train vs Test Best Models')
plt.legend()
plt.show()
 




print("2. Reporting Generalized Performance:\n")
df = divided_result.copy()
df['Difference'] = abs(df['Train_best_model'] - df['Test_best_model'])
df['Sum'] = df['Train_best_model'] + df['Test_best_model']
sorted_df = df.sort_values(by=['Sum', 'Difference'], ascending=[False, True])
print(sorted_df.iloc[0].to_markdown())


print("3. Reporting the correlation of hyperparameters and the performance of 'Train_null_model', 'Train_pure_prs', 'Train_best_model', 'Test_pure_prs', 'Test_null_model', and 'Test_best_model':\n")

print("3. For string hyperparameters, we used one-hot encoding to find the correlation between string hyperparameters and 'Train_null_model', 'Train_pure_prs', 'Train_best_model', 'Test_pure_prs', 'Test_null_model', and 'Test_best_model'.")

print("3. We performed this analysis for those hyperparameters that have more than one unique value.")

correlation_columns = [
 'Train_null_model', 'Train_pure_prs', 'Train_best_model',
 'Test_pure_prs', 'Test_null_model', 'Test_best_model'
]

hyperparams = [col for col in divided_result.columns if len(divided_result[col].unique()) > 1]
hyperparams = list(set(hyperparams+correlation_columns))
 
# Separate numeric and string columns
numeric_hyperparams = [col for col in hyperparams if pd.api.types.is_numeric_dtype(divided_result[col])]
string_hyperparams = [col for col in hyperparams if pd.api.types.is_string_dtype(divided_result[col])]


# Encode string columns using one-hot encoding
divided_result_encoded = pd.get_dummies(divided_result, columns=string_hyperparams)

# Combine numeric hyperparams with the new one-hot encoded columns
encoded_columns = [col for col in divided_result_encoded.columns if col.startswith(tuple(string_hyperparams))]
hyperparams = numeric_hyperparams + encoded_columns
 

# Calculate correlations
correlations = divided_result_encoded[hyperparams].corr()
 
# Display correlation of hyperparameters with train/test performance columns
hyperparam_correlations = correlations.loc[hyperparams, correlation_columns]
 
hyperparam_correlations = hyperparam_correlations.fillna(0)

# Plotting the correlation heatmap
plt.figure(figsize=(12, 8))
ax = sns.heatmap(hyperparam_correlations, annot=True, cmap='viridis', fmt='.2f', cbar=True)
ax.set_xticklabels(ax.get_xticklabels(), rotation=90, ha='right')

# Rotate y-axis labels to horizontal
#ax.set_yticklabels(ax.get_yticklabels(), rotation=0, va='center')

plt.title('Correlation of Hyperparameters with Train/Test Performance')
plt.show() 

sns.set_theme(style="whitegrid")  # Choose your preferred style
pairplot = sns.pairplot(divided_result_encoded[hyperparams], hue='Test_best_model', palette='viridis')

# Adjust the figure size
pairplot.fig.set_size_inches(15, 15)  # You can adjust the size as needed

for ax in pairplot.axes.flatten():
    ax.set_xlabel(ax.get_xlabel(), rotation=90, ha='right')  # X-axis labels vertical
    #ax.set_ylabel(ax.get_ylabel(), rotation=0, va='bottom')  # Y-axis labels horizontal

# Show the plot
plt.show()
1. Reporting Based on Best Training Performance:

|                  | 59                    |
|:-----------------|:----------------------|
| clump_p1         | 1.0                   |
| clump_r2         | 0.1                   |
| clump_kb         | 200.0                 |
| p_window_size    | 200.0                 |
| p_slide_size     | 50.0                  |
| p_LD_threshold   | 0.25                  |
| pvalue           | 1.0                   |
| numberofpca      | 6.0                   |
| tempalpha        | 0.1                   |
| l1weight         | 0.1                   |
| numberofvariants | 0.0                   |
| Train_pure_prs   | 0.0021171291324699    |
| Train_null_model | 0.2274767654484456    |
| Train_best_model | 0.9993342811094132    |
| Test_pure_prs    | 1.192113731751654e-05 |
| Test_null_model  | 0.1429700622403754    |
| Test_best_model  | 0.2107386792427508    |
| BOLTmodel        | lmmForceNonInf        |
2. Reporting Generalized Performance:

|                  | 17                    |
|:-----------------|:----------------------|
| clump_p1         | 1.0                   |
| clump_r2         | 0.1                   |
| clump_kb         | 200.0                 |
| p_window_size    | 200.0                 |
| p_slide_size     | 50.0                  |
| p_LD_threshold   | 0.25                  |
| pvalue           | 0.0885866790410083    |
| numberofpca      | 6.0                   |
| tempalpha        | 0.1                   |
| l1weight         | 0.1                   |
| numberofvariants | 0.0                   |
| Train_pure_prs   | 0.0021456782406613    |
| Train_null_model | 0.2274767654484456    |
| Train_best_model | 0.9958762340331948    |
| Test_pure_prs    | 2.640522907393361e-05 |
| Test_null_model  | 0.1429700622403754    |
| Test_best_model  | 0.2166086418607637    |
| BOLTmodel        | lmm                   |
| Difference       | 0.7792675921724311    |
| Sum              | 1.2124848758939586    |
3. Reporting the correlation of hyperparameters and the performance of 'Train_null_model', 'Train_pure_prs', 'Train_best_model', 'Test_pure_prs', 'Test_null_model', and 'Test_best_model':

3. For string hyperparameters, we used one-hot encoding to find the correlation between string hyperparameters and 'Train_null_model', 'Train_pure_prs', 'Train_best_model', 'Test_pure_prs', 'Test_null_model', and 'Test_best_model'.
3. We performed this analysis for those hyperparameters that have more than one unique value.