GEMMA-LMM#

In this notebook, we will use GEMMA to calculate the PRS. For more information, visit the official GEMMA repository:
genetics-statistics/GEMMA

The GEMMA LMM model first requires calculating the relatedness matrix; it then takes the GWAS file, genotype data, relatedness matrix, and covariates, and generates a new set of BETAs, which are used to calculate the PRS.

GWAS file processing for GEMMA-LMM for Binary Phenotypes#

When the effect size relates to disease risk and is thus given as an odds ratio (OR) rather than BETA (for continuous traits), the PRS is computed as a product of ORs. To simplify this calculation, take the natural logarithm of the OR so that the PRS can be computed using summation instead.
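For example, with hypothetical ORs, summing dosage-weighted log-ORs equals the log of the multiplicative PRS:

```python
import numpy as np

# Hypothetical per-SNP odds ratios and allele dosages (0, 1, or 2)
ors = np.array([1.20, 0.85, 1.05])
dosages = np.array([2, 1, 0])

# Multiplicative PRS: product of OR^dosage
prs_product = np.prod(ors ** dosages)

# Additive PRS on the log scale: sum of dosage * log(OR)
prs_sum = np.sum(dosages * np.log(ors))

# The two agree after taking the log of the product
print(np.log(prs_product), prs_sum)
```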

import os
import pandas as pd
import numpy as np
import sys

#filedirec = sys.argv[1]
filedirec = "SampleData1"
#filedirec = "asthma"
#filedirec = "asthma_19"
#filedirec = "migraine_0"

def check_phenotype_is_binary_or_continous(filedirec):
    # Read the processed, quality-controlled .fam file for a phenotype.
    df = pd.read_csv(filedirec+os.sep+filedirec+'_QC.fam',sep="\s+",header=None)
    column_values = df[5].unique()

    # Exactly two distinct phenotype values indicate a case/control trait.
    if len(column_values) == 2:
        return "Binary"
    else:
        return "Continuous"



# Read the GWAS file.
GWAS = filedirec + os.sep + filedirec+".gz"
df = pd.read_csv(GWAS,compression= "gzip",sep="\s+")

 
if "BETA" in df.columns.to_list():
    # For a continuous phenotype, the GWAS file already reports BETA.
    df = df[['CHR', 'BP', 'SNP', 'A1', 'A2', 'N', 'SE', 'P', 'BETA', 'INFO', 'MAF']]

else:
    # For a binary phenotype, convert the odds ratio to BETA via the natural log.
    df["BETA"] = np.log(df["OR"])

    df = df[['CHR', 'BP', 'SNP', 'A1', 'A2', 'N', 'SE', 'P', 'BETA', 'INFO', 'MAF']]

df['Z'] = df['BETA'] / df['SE'] 
transformed_df = df[['SNP', 'N', 'Z', 'A1', 'A2']].copy()
transformed_df.columns = ['SNP', 'N', 'Z', 'INC_ALLELE', 'DEC_ALLELE']
  


transformed_df.to_csv(filedirec + os.sep +"gemma.txt",sep="\t",index=False)
print(transformed_df.head().to_markdown())
print("Length of DataFrame!",len(transformed_df))
|    | SNP        |      N |         Z | INC_ALLELE   | DEC_ALLELE   |
|---:|:-----------|-------:|----------:|:-------------|:-------------|
|  0 | rs3131962  | 388028 | -0.701213 | A            | G            |
|  1 | rs12562034 | 388028 |  0.20854  | A            | G            |
|  2 | rs4040617  | 388028 | -0.790957 | G            | A            |
|  3 | rs79373928 | 388028 |  0.241718 | G            | T            |
|  4 | rs11240779 | 388028 |  0.53845  | G            | A            |
Length of DataFrame! 499617
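The transform above reduces each summary-statistics row to a Z-score (BETA / SE) keyed by the two alleles. A one-row sketch with made-up numbers:

```python
import numpy as np
import pandas as pd

# Toy single-row GWAS entry (made-up values) mirroring the transform above
df = pd.DataFrame({"SNP": ["rs0000001"], "N": [1000],
                   "BETA": [-0.02], "SE": [0.01],
                   "A1": ["A"], "A2": ["G"]})

# The Z-score is the effect size divided by its standard error
df["Z"] = df["BETA"] / df["SE"]

# Rename A1/A2 to the INC_ALLELE/DEC_ALLELE headers used above
out = df[["SNP", "N", "Z", "A1", "A2"]].copy()
out.columns = ["SNP", "N", "Z", "INC_ALLELE", "DEC_ALLELE"]
print(out)
```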

Define Hyperparameters#

Define hyperparameters to be optimized and set initial values.

Extract Valid SNPs from Clumped File#

For Linux, the awk command is sufficient. For Windows, gawk is required; download it from the GnuWin32 project and place it in the same directory.

Execution Path#

At this stage, we have the genotype training data newtrainfilename = "train_data.QC" and genotype test data newtestfilename = "test_data.QC".

We modified the following variables:

  1. filedirec = "SampleData1" or filedirec = sys.argv[1]

  2. foldnumber = "0" or foldnumber = sys.argv[2] for HPC.

Only these two variables need to be modified to execute the code for specific data and specific folds. Although the code can be executed separately for each fold on an HPC and separately for each dataset, it is recommended to execute it for multiple diseases and one fold at a time.

P-values#

PRS calculation relies on P-values. SNPs with low P-values, indicating a high degree of association with a specific trait, are considered for calculation.

You can modify the code below to consider a specific set of P-values and save the file in the same format.

We considered the following parameters:

  • Minimum P-value: 1e-10

  • Maximum P-value: 1.0

  • Minimum exponent: 10 (exponent of the smallest P-value)

  • Number of intervals: 100

The code generates an array of logarithmically spaced P-values:

import numpy as np
import os

minimumpvalue = 10  # Minimum exponent for P-values
numberofintervals = 100  # Number of intervals to be considered

allpvalues = np.logspace(-minimumpvalue, 0, numberofintervals, endpoint=True)  # Generating an array of logarithmically spaced P-values

print("Minimum P-value:", allpvalues[0])
print("Maximum P-value:", allpvalues[-1])

# Note: 'folddirec' (the fold directory) is defined in the Execution Path section.
with open(os.path.join(folddirec, 'range_list'), 'w') as file:
    for value in allpvalues:
        file.write(f'pv_{value} 0 {value}\n')  # Writing range information to the 'range_list' file

pvaluefile = os.path.join(folddirec, 'range_list')

In this code:

  • minimumpvalue defines the minimum exponent for P-values.

  • numberofintervals specifies how many intervals to consider.

  • allpvalues generates an array of P-values spaced logarithmically.

  • The script writes these P-values to a file named range_list in the specified directory.
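A minimal, self-contained version of the threshold generation (no file paths assumed):

```python
import numpy as np

minimumpvalue = 10      # exponent of the smallest P-value (1e-10)
numberofintervals = 100  # number of thresholds

# 100 thresholds from 1e-10 up to 1.0, evenly spaced on the log scale
allpvalues = np.logspace(-minimumpvalue, 0, numberofintervals, endpoint=True)

# Each range_list line names a threshold and its [0, threshold] interval
first_line = f"pv_{allpvalues[0]} 0 {allpvalues[0]}"
print(len(allpvalues), allpvalues[0], allpvalues[-1])
```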

import os
import subprocess
import sys
import pandas as pd
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score, confusion_matrix
from statsmodels.stats.contingency_tables import mcnemar

def create_directory(directory):
    """Function to create a directory if it doesn't exist."""
    if not os.path.exists(directory):  # Checking if the directory doesn't exist
        os.makedirs(directory)  # Creating the directory if it doesn't exist
    return directory  # Returning the created or existing directory

 
#foldnumber = sys.argv[1]
foldnumber = "0"  # Setting 'foldnumber' to "0"

folddirec = filedirec + os.sep + "Fold_" + foldnumber  # Creating a directory path for the specific fold
trainfilename = "train_data"  # Setting the name of the training data file
newtrainfilename = "train_data.QC"  # Setting the name of the new training data file

testfilename = "test_data"  # Setting the name of the test data file
newtestfilename = "test_data.QC"  # Setting the name of the new test data file

# Number of PCA to be included as a covariate.
numberofpca = ["6"]  # Setting the number of PCA components to be included

# Clumping parameters.
clump_p1 = [1]  # List containing clump parameter 'p1'
clump_r2 = [0.1]  # List containing clump parameter 'r2'
clump_kb = [200]  # List containing clump parameter 'kb'

# Pruning parameters.
p_window_size = [200]  # List containing pruning parameter 'window_size'
p_slide_size = [50]  # List containing pruning parameter 'slide_size'
p_LD_threshold = [0.25]  # List containing pruning parameter 'LD_threshold'

# Note that the number of p-values to be considered varies, and the actual p-values depend on the dataset.
# We specify the range list here.

minimumpvalue = 10  # Minimum p-value in exponent
numberofintervals = 20  # Number of intervals to be considered
allpvalues = np.logspace(-minimumpvalue, 0, numberofintervals, endpoint=True)  # Generating an array of logarithmically spaced p-values



with open(folddirec + os.sep + 'range_list', 'w') as file:
    for value in allpvalues:
        file.write(f'pv_{value} 0 {value}\n')  # Writing range information to the 'range_list' file

pvaluefile = folddirec + os.sep + 'range_list'

# Initializing an empty DataFrame with specified column names
prs_result = pd.DataFrame(columns=["clump_p1", "clump_r2", "clump_kb", "p_window_size", "p_slide_size", "p_LD_threshold",
                                   "pvalue", "numberofpca","numberofvariants","Train_pure_prs", "Train_null_model", "Train_best_model",
                                   "Test_pure_prs", "Test_null_model", "Test_best_model"])

Define Helper Functions#

  1. Perform Clumping and Pruning

  2. Calculate PCA Using Plink

  3. Fit Binary Phenotype and Save Results

  4. Fit Continuous Phenotype and Save Results

import os
import subprocess
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import explained_variance_score


def perform_clumping_and_pruning_on_individual_data(traindirec, newtrainfilename,numberofpca, p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile):
    
    command = [
    "./plink",
    "--bfile", traindirec+os.sep+newtrainfilename,
    "--indep-pairwise", p1_val, p2_val, p3_val,
    "--out", traindirec+os.sep+trainfilename
    ]
    subprocess.run(command)
    # First perform pruning, then clumping on the pruned SNPs.

    command = [
    "./plink",
    "--bfile", traindirec+os.sep+newtrainfilename,
    "--clump-p1", c1_val,
    "--extract", traindirec+os.sep+trainfilename+".prune.in",
    "--clump-r2", c2_val,
    "--clump-kb", c3_val,
    "--clump", filedirec+os.sep+filedirec+".txt",
    "--clump-snp-field", "SNP",
    "--clump-field", "P",
    "--out", traindirec+os.sep+trainfilename
    ]    
    subprocess.run(command)

    # Extract the valid SNPs from the clumped file.
    # For Windows, download gawk; for Linux, the awk command is sufficient.
    ### Windows requires gawk:
    ### https://sourceforge.net/projects/gnuwin32/
    ### Get it and place it in the same directory.
    #os.system("gawk "+"\""+"NR!=1{print $3}"+"\"  "+ traindirec+os.sep+trainfilename+".clumped >  "+traindirec+os.sep+trainfilename+".valid.snp")
    #print("gawk "+"\""+"NR!=1{print $3}"+"\"  "+ traindirec+os.sep+trainfilename+".clumped >  "+traindirec+os.sep+trainfilename+".valid.snp")

    #Linux:
    command = f"awk 'NR!=1{{print $3}}' {traindirec}{os.sep}{trainfilename}.clumped > {traindirec}{os.sep}{trainfilename}.valid.snp"
    os.system(command)
    
    
    command = [
    "./plink",
    "--make-bed",
    "--bfile", traindirec+os.sep+newtrainfilename,
    "--indep-pairwise", p1_val, p2_val, p3_val,
    "--extract", traindirec+os.sep+trainfilename+".valid.snp",
    "--out", traindirec+os.sep+newtrainfilename+".clumped.pruned"
    ]
    subprocess.run(command)
    
    command = [
    "./plink",
    "--make-bed",
    "--bfile", traindirec+os.sep+testfilename,
    "--indep-pairwise", p1_val, p2_val, p3_val,
    "--extract", traindirec+os.sep+trainfilename+".valid.snp",
    "--out", traindirec+os.sep+testfilename+".clumped.pruned"
    ]
    subprocess.run(command)    
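The awk one-liner above extracts column 3 (the SNP IDs) from the .clumped file, skipping the header. A cross-platform alternative using pandas, sketched against a made-up .clumped snippet (the output filename here is illustrative):

```python
import io
import pandas as pd

# A tiny stand-in for a Plink .clumped file (whitespace-separated, header first)
clumped_text = """ CHR    F        SNP         BP        P    TOTAL
   1    1  rs3131962     756604  1.0e-04       10
   1    1  rs4040617     779322  2.5e-03        4
"""

# Equivalent of: awk 'NR!=1{print $3}' train_data.clumped > train_data.valid.snp
clumped = pd.read_csv(io.StringIO(clumped_text), sep=r"\s+")
valid_snps = clumped["SNP"].dropna()
valid_snps.to_csv("valid.snp.example", index=False, header=False)
print(list(valid_snps))
```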
    
    
 
def calculate_pca_for_traindata_testdata_for_clumped_pruned_snps(traindirec, newtrainfilename,p):
    
    # Calculate PCA for both train and test data using the same set of SNPs.

    # PCA is calculated after clumping and pruning.
    command = [
        "./plink",
        "--bfile", folddirec+os.sep+testfilename+".clumped.pruned",
        # Select the final variants after clumping and pruning.
        "--extract", traindirec+os.sep+trainfilename+".valid.snp",
        "--pca", p,
        "--out", folddirec+os.sep+testfilename
    ]
    subprocess.run(command)


    command = [
    "./plink",
        "--bfile", traindirec+os.sep+newtrainfilename+".clumped.pruned",
        # Select the final variants after clumping and pruning.        
        "--extract", traindirec+os.sep+trainfilename+".valid.snp",
        "--pca", p,
        "--out", traindirec+os.sep+trainfilename
    ]
    subprocess.run(command)
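Plink's --pca writes an .eigenvec file with no header: FID, IID, then one column per PC. A small sketch of the parsing pattern used throughout this notebook, on a made-up two-sample file:

```python
import io
import pandas as pd

p = 2  # number of PCs requested from Plink (--pca 2)

# A tiny stand-in for a Plink .eigenvec file: FID IID PC1 PC2
eigenvec_text = """fam1 ind1 0.01 -0.02
fam2 ind2 -0.03 0.04
"""

# Same read pattern as the pcs_train / pcs_test loads below
pcs = pd.read_csv(io.StringIO(eigenvec_text), sep=r"\s+", header=None,
                  names=["FID", "IID"] + [f"PC{i}" for i in range(1, p + 1)])
print(pcs.columns.tolist())
```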

# This function fits the binary (logistic) model on the PRS.
def fit_binary_phenotype_on_PRS(traindirec, newtrainfilename,p,gemmamodel,relatedmatrixname,lmmmodel, p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile):
    threshold_values = allpvalues

    # Merge the covariates, pca and phenotypes.
    tempphenotype_train = pd.read_table(traindirec+os.sep+newtrainfilename+".clumped.pruned"+".fam", sep="\s+",header=None)
    phenotype_train = pd.DataFrame()
    phenotype_train["Phenotype"] = tempphenotype_train[5].values
    pcs_train = pd.read_table(traindirec+os.sep+trainfilename+".eigenvec", sep="\s+",header=None, names=["FID", "IID"] + [f"PC{str(i)}" for i in range(1, int(p)+1)])
    covariate_train = pd.read_table(traindirec+os.sep+trainfilename+".cov",sep="\s+")
    covariate_train.fillna(0, inplace=True)
    covariate_train = covariate_train[covariate_train["FID"].isin(pcs_train["FID"].values) & covariate_train["IID"].isin(pcs_train["IID"].values)]
    covariate_train['FID'] = covariate_train['FID'].astype(str)
    pcs_train['FID'] = pcs_train['FID'].astype(str)
    covariate_train['IID'] = covariate_train['IID'].astype(str)
    pcs_train['IID'] = pcs_train['IID'].astype(str)
    covandpcs_train = pd.merge(covariate_train, pcs_train, on=["FID","IID"])
    covandpcs_train.fillna(0, inplace=True)


    ## Scale the covariates!
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.metrics import explained_variance_score
    scaler = MinMaxScaler()
    normalized_values_train = scaler.fit_transform(covandpcs_train.iloc[:, 2:])
    #covandpcs_train.iloc[:, 2:] = normalized_values_test 
    
    
    tempphenotype_test = pd.read_table(traindirec+os.sep+testfilename+".clumped.pruned"+".fam", sep="\s+",header=None)
    phenotype_test= pd.DataFrame()
    phenotype_test["Phenotype"] = tempphenotype_test[5].values
    pcs_test = pd.read_table(traindirec+os.sep+testfilename+".eigenvec", sep="\s+",header=None, names=["FID", "IID"] + [f"PC{str(i)}" for i in range(1, int(p)+1)])
    covariate_test = pd.read_table(traindirec+os.sep+testfilename+".cov",sep="\s+")
    covariate_test.fillna(0, inplace=True)
    covariate_test = covariate_test[covariate_test["FID"].isin(pcs_test["FID"].values) & covariate_test["IID"].isin(pcs_test["IID"].values)]
    covariate_test['FID'] = covariate_test['FID'].astype(str)
    pcs_test['FID'] = pcs_test['FID'].astype(str)
    covariate_test['IID'] = covariate_test['IID'].astype(str)
    pcs_test['IID'] = pcs_test['IID'].astype(str)
    covandpcs_test = pd.merge(covariate_test, pcs_test, on=["FID","IID"])
    covandpcs_test.fillna(0, inplace=True)
    normalized_values_test  = scaler.transform(covandpcs_test.iloc[:, 2:])
    #covandpcs_test.iloc[:, 2:] = normalized_values_test     
    
    
    
    
    tempalphas = [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]
    l1weights = [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]

    # Restrict to a single alpha and L1 weight to keep the runtime manageable.
    tempalphas = [0.1]
    l1weights = [0.1]

    phenotype_train["Phenotype"] = phenotype_train["Phenotype"].replace({1: 0, 2: 1}) 
    phenotype_test["Phenotype"] = phenotype_test["Phenotype"].replace({1: 0, 2: 1})
      
    for tempalpha in tempalphas:
        for l1weight in l1weights:

            
            try:
                null_model =  sm.Logit(phenotype_train["Phenotype"], sm.add_constant(covandpcs_train.iloc[:, 2:])).fit_regularized(alpha=tempalpha, L1_wt=l1weight)
                #null_model =  sm.Logit(phenotype_train["Phenotype"], sm.add_constant(covandpcs_train.iloc[:, 2:])).fit()
            
            except:
                print("XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX")
                continue

            train_null_predicted = null_model.predict(sm.add_constant(covandpcs_train.iloc[:, 2:]))
            
            from sklearn.metrics import roc_auc_score, confusion_matrix
            from sklearn.metrics import r2_score
            
            test_null_predicted = null_model.predict(sm.add_constant(covandpcs_test.iloc[:, 2:]))
            
           
            
            global prs_result 
            for i in threshold_values:
                try:
                    prs_train = pd.read_table(traindirec+os.sep+Name+os.sep+"train_data.pv_"+f"{i}.profile", sep="\s+", usecols=["FID", "IID", "SCORE"])
                except:
                    continue

                prs_train['FID'] = prs_train['FID'].astype(str)
                prs_train['IID'] = prs_train['IID'].astype(str)
                try:
                    prs_test = pd.read_table(traindirec+os.sep+Name+os.sep+"test_data.pv_"+f"{i}.profile", sep="\s+", usecols=["FID", "IID", "SCORE"])
                except:
                    continue
                prs_test['FID'] = prs_test['FID'].astype(str)
                prs_test['IID'] = prs_test['IID'].astype(str)
                pheno_prs_train = pd.merge(covandpcs_train, prs_train, on=["FID", "IID"])
                pheno_prs_test = pd.merge(covandpcs_test, prs_test, on=["FID", "IID"])
        
                try:
                    model = sm.Logit(phenotype_train["Phenotype"], sm.add_constant(pheno_prs_train.iloc[:, 2:])).fit_regularized(alpha=tempalpha, L1_wt=l1weight)
                    #model = sm.Logit(phenotype_train["Phenotype"], sm.add_constant(pheno_prs_train.iloc[:, 2:])).fit()
                
                except:
                    continue


                
                train_best_predicted = model.predict(sm.add_constant(pheno_prs_train.iloc[:, 2:]))    
 

                test_best_predicted = model.predict(sm.add_constant(pheno_prs_test.iloc[:, 2:])) 
 
        
                from sklearn.metrics import roc_auc_score, confusion_matrix

                prs_result = prs_result._append({
                    "clump_p1": c1_val,
                    "clump_r2": c2_val,
                    "clump_kb": c3_val,
                    "p_window_size": p1_val,
                    "p_slide_size": p2_val,
                    "p_LD_threshold": p3_val,
                    "pvalue": i,
                    "numberofpca":p, 
                    
                    "gemmamodel":gemmamodel,
                    "relatedmatrixname":relatedmatrixname,
                    "lmmmodel":str(lmmmodel),
                     
                    
                    "tempalpha":str(tempalpha),
                    "l1weight":str(l1weight),
                     

                    "Train_pure_prs":roc_auc_score(phenotype_train["Phenotype"].values,prs_train['SCORE'].values),
                    "Train_null_model":roc_auc_score(phenotype_train["Phenotype"].values,train_null_predicted.values),
                    "Train_best_model":roc_auc_score(phenotype_train["Phenotype"].values,train_best_predicted.values),
                    
                    "Test_pure_prs":roc_auc_score(phenotype_test["Phenotype"].values,prs_test['SCORE'].values),
                    "Test_null_model":roc_auc_score(phenotype_test["Phenotype"].values,test_null_predicted.values),
                    "Test_best_model":roc_auc_score(phenotype_test["Phenotype"].values,test_best_predicted.values),
                    
                }, ignore_index=True)

          
                prs_result.to_csv(traindirec+os.sep+Name+os.sep+"Results.csv",index=False)
     
    return

# This function fits the continuous (linear) model on the PRS.
def fit_continous_phenotype_on_PRS(traindirec, newtrainfilename,p, gemmamodel,relatedmatrixname,lmmmodel,p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile):
    threshold_values = allpvalues

    # Merge the covariates, pca and phenotypes.
    tempphenotype_train = pd.read_table(traindirec+os.sep+newtrainfilename+".clumped.pruned"+".fam", sep="\s+",header=None)
    phenotype_train = pd.DataFrame()
    phenotype_train["Phenotype"] = tempphenotype_train[5].values
    pcs_train = pd.read_table(traindirec+os.sep+trainfilename+".eigenvec", sep="\s+",header=None, names=["FID", "IID"] + [f"PC{str(i)}" for i in range(1, int(p)+1)])
    covariate_train = pd.read_table(traindirec+os.sep+trainfilename+".cov",sep="\s+")
    covariate_train.fillna(0, inplace=True)
    covariate_train = covariate_train[covariate_train["FID"].isin(pcs_train["FID"].values) & covariate_train["IID"].isin(pcs_train["IID"].values)]
    covariate_train['FID'] = covariate_train['FID'].astype(str)
    pcs_train['FID'] = pcs_train['FID'].astype(str)
    covariate_train['IID'] = covariate_train['IID'].astype(str)
    pcs_train['IID'] = pcs_train['IID'].astype(str)
    covandpcs_train = pd.merge(covariate_train, pcs_train, on=["FID","IID"])
    covandpcs_train.fillna(0, inplace=True)


    ## Scale the covariates!
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.metrics import explained_variance_score
    scaler = MinMaxScaler()
    normalized_values_train = scaler.fit_transform(covandpcs_train.iloc[:, 2:])
    #covandpcs_train.iloc[:, 2:] = normalized_values_test 
    
    tempphenotype_test = pd.read_table(traindirec+os.sep+testfilename+".clumped.pruned"+".fam", sep="\s+",header=None)
    phenotype_test= pd.DataFrame()
    phenotype_test["Phenotype"] = tempphenotype_test[5].values
    pcs_test = pd.read_table(traindirec+os.sep+testfilename+".eigenvec", sep="\s+",header=None, names=["FID", "IID"] + [f"PC{str(i)}" for i in range(1, int(p)+1)])
    covariate_test = pd.read_table(traindirec+os.sep+testfilename+".cov",sep="\s+")
    covariate_test.fillna(0, inplace=True)
    covariate_test = covariate_test[covariate_test["FID"].isin(pcs_test["FID"].values) & covariate_test["IID"].isin(pcs_test["IID"].values)]
    covariate_test['FID'] = covariate_test['FID'].astype(str)
    pcs_test['FID'] = pcs_test['FID'].astype(str)
    covariate_test['IID'] = covariate_test['IID'].astype(str)
    pcs_test['IID'] = pcs_test['IID'].astype(str)
    covandpcs_test = pd.merge(covariate_test, pcs_test, on=["FID","IID"])
    covandpcs_test.fillna(0, inplace=True)
    normalized_values_test  = scaler.transform(covandpcs_test.iloc[:, 2:])
    #covandpcs_test.iloc[:, 2:] = normalized_values_test     
    
    
    
    
    tempalphas = [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]
    l1weights = [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]

    # Restrict to a single alpha and L1 weight to keep the runtime manageable.
    tempalphas = [0.1]
    l1weights = [0.1]

    #phenotype_train["Phenotype"] = phenotype_train["Phenotype"].replace({1: 0, 2: 1}) 
    #phenotype_test["Phenotype"] = phenotype_test["Phenotype"].replace({1: 0, 2: 1})
      
    for tempalpha in tempalphas:
        for l1weight in l1weights:

            
            try:
                #null_model =  sm.OLS(phenotype_train["Phenotype"], sm.add_constant(covandpcs_train.iloc[:, 2:])).fit_regularized(alpha=tempalpha, L1_wt=l1weight)
                null_model =  sm.OLS(phenotype_train["Phenotype"], sm.add_constant(covandpcs_train.iloc[:, 2:])).fit()
                #null_model =  sm.OLS(phenotype_train["Phenotype"], sm.add_constant(covandpcs_train.iloc[:, 2:])).fit()
            except:
                print("XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX")
                continue

            train_null_predicted = null_model.predict(sm.add_constant(covandpcs_train.iloc[:, 2:]))
            
            from sklearn.metrics import roc_auc_score, confusion_matrix
            from sklearn.metrics import r2_score
            
            test_null_predicted = null_model.predict(sm.add_constant(covandpcs_test.iloc[:, 2:]))
            
            
            
            global prs_result 
            for i in threshold_values:
                try:
                    prs_train = pd.read_table(traindirec+os.sep+Name+os.sep+"train_data.pv_"+f"{i}.profile", sep="\s+", usecols=["FID", "IID", "SCORE"])
                except:
                    continue

                prs_train['FID'] = prs_train['FID'].astype(str)
                prs_train['IID'] = prs_train['IID'].astype(str)
                try:
                    prs_test = pd.read_table(traindirec+os.sep+Name+os.sep+"test_data.pv_"+f"{i}.profile", sep="\s+", usecols=["FID", "IID", "SCORE"])
                except:
                    continue
                prs_test['FID'] = prs_test['FID'].astype(str)
                prs_test['IID'] = prs_test['IID'].astype(str)
                pheno_prs_train = pd.merge(covandpcs_train, prs_train, on=["FID", "IID"])
                pheno_prs_test = pd.merge(covandpcs_test, prs_test, on=["FID", "IID"])
        
                try:
                    #model = sm.OLS(phenotype_train["Phenotype"], sm.add_constant(pheno_prs_train.iloc[:, 2:])).fit_regularized(alpha=tempalpha, L1_wt=l1weight)
                    model = sm.OLS(phenotype_train["Phenotype"], sm.add_constant(pheno_prs_train.iloc[:, 2:])).fit()
                
                except:
                    continue


                
                train_best_predicted = model.predict(sm.add_constant(pheno_prs_train.iloc[:, 2:]))    
                test_best_predicted = model.predict(sm.add_constant(pheno_prs_test.iloc[:, 2:])) 
 
        
                from sklearn.metrics import roc_auc_score, confusion_matrix

                prs_result = prs_result._append({
                    "clump_p1": c1_val,
                    "clump_r2": c2_val,
                    "clump_kb": c3_val,
                    "p_window_size": p1_val,
                    "p_slide_size": p2_val,
                    "p_LD_threshold": p3_val,
                    "pvalue": i,
                    "numberofpca":p, 
                    
                    "gemmamodel":gemmamodel,
                    "relatedmatrixname":relatedmatrixname,
                    "lmmmodel":str(lmmmodel),
                    
                    "tempalpha":str(tempalpha),
                    "l1weight":str(l1weight),
                     

                    "Train_pure_prs":explained_variance_score(phenotype_train["Phenotype"],prs_train['SCORE'].values),
                    "Train_null_model":explained_variance_score(phenotype_train["Phenotype"],train_null_predicted),
                    "Train_best_model":explained_variance_score(phenotype_train["Phenotype"],train_best_predicted),
                    
                    "Test_pure_prs":explained_variance_score(phenotype_test["Phenotype"],prs_test['SCORE'].values),
                    "Test_null_model":explained_variance_score(phenotype_test["Phenotype"],test_null_predicted),
                    "Test_best_model":explained_variance_score(phenotype_test["Phenotype"],test_best_predicted),
                    
                }, ignore_index=True)

          
                prs_result.to_csv(traindirec+os.sep+Name+os.sep+"Results.csv",index=False)
     
    return
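For continuous phenotypes, models are scored by explained variance rather than AUC. A toy sketch of explained_variance_score on synthetic predictions:

```python
import numpy as np
from sklearn.metrics import explained_variance_score

rng = np.random.default_rng(1)
y_true = rng.normal(size=200)

# Predictions that capture part of the signal, plus noise
y_pred = 0.8 * y_true + rng.normal(scale=0.5, size=200)

# Fraction of phenotypic variance the predictions explain (1.0 is perfect)
score = explained_variance_score(y_true, y_pred)
print(round(score, 3))
```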

Execute GEMMA-LMM#

# Define a global variable to store results
prs_result = pd.DataFrame()
def transform_gemma_llm_data(traindirec, newtrainfilename,p,gemmamodel,relatedmatrix, lmmmodel,p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile):
    ### First perform clumping on the file and save the clumped file.
    perform_clumping_and_pruning_on_individual_data(traindirec, newtrainfilename,p, p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile)
    
    #newtrainfilename = newtrainfilename+".clumped.pruned"
    #testfilename = testfilename+".clumped.pruned"
    
    
    #clupmedfile = traindirec+os.sep+newtrainfilename+".clump"
    #prunedfile = traindirec+os.sep+newtrainfilename+".clumped.pruned"

        
    # Also extract the PCA at this point for both test and training data.
    calculate_pca_for_traindata_testdata_for_clumped_pruned_snps(traindirec, newtrainfilename,p)

    #Extract p-values from the GWAS file.
    # Command for Linux.
    os.system("awk "+"\'"+"{print $3,$8}"+"\'"+" ./"+filedirec+os.sep+filedirec+".txt >  ./"+traindirec+os.sep+"SNP.pvalue")

    # Command for Windows.
    ### For Windows, get gawk:
    ### https://sourceforge.net/projects/gnuwin32/
    ### Get it and place it in the same directory.
    #os.system("gawk "+"\""+"{print $3,$8}"+"\""+" ./"+filedirec+os.sep+filedirec+".txt >  ./"+traindirec+os.sep+"SNP.pvalue")
    #print("gawk "+"\""+"{print $3,$8}"+"\""+" ./"+filedirec+os.sep+filedirec+".txt >  ./"+traindirec+os.sep+"SNP.pvalue")

    #exit(0)
 
    # Merge the covariates, pca and phenotypes.
    tempphenotype_train = pd.read_table(traindirec+os.sep+newtrainfilename+".clumped.pruned"+".fam", sep="\s+",header=None)
    phenotype_train = pd.DataFrame()
    phenotype_train["Phenotype"] = tempphenotype_train[5].values
    pcs_train = pd.read_table(traindirec+os.sep+trainfilename+".eigenvec", sep="\s+",header=None, names=["FID", "IID"] + [f"PC{str(i)}" for i in range(1, int(p)+1)])
    covariate_train = pd.read_table(traindirec+os.sep+trainfilename+".cov",sep="\s+")
    covariate_train.fillna(0, inplace=True)
    covariate_train = covariate_train[covariate_train["FID"].isin(pcs_train["FID"].values) & covariate_train["IID"].isin(pcs_train["IID"].values)]
    covariate_train['FID'] = covariate_train['FID'].astype(str)
    pcs_train['FID'] = pcs_train['FID'].astype(str)
    covariate_train['IID'] = covariate_train['IID'].astype(str)
    pcs_train['IID'] = pcs_train['IID'].astype(str)
    covandpcs_train = pd.merge(covariate_train, pcs_train, on=["FID","IID"])
    covandpcs_train.fillna(0, inplace=True)
    covandpcs_train.to_csv(traindirec+os.sep+trainfilename+".COV_PCA",sep="\t",index=False)
    covandpcs_train.iloc[:, 2:].to_csv(traindirec+os.sep+trainfilename+".COV_PCAgemma", header=False, index=False,sep="\t")
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.metrics import explained_variance_score
    scaler = MinMaxScaler()
    normalized_values_train = scaler.fit_transform(covandpcs_train.iloc[:, 2:])    
    
    tempphenotype_test = pd.read_table(traindirec+os.sep+testfilename+".clumped.pruned"+".fam", sep="\s+",header=None)
    phenotype_test= pd.DataFrame()
    phenotype_test["Phenotype"] = tempphenotype_test[5].values
    pcs_test = pd.read_table(traindirec+os.sep+testfilename+".eigenvec", sep="\s+",header=None, names=["FID", "IID"] + [f"PC{str(i)}" for i in range(1, int(p)+1)])
    covariate_test = pd.read_table(traindirec+os.sep+testfilename+".cov",sep="\s+")
    covariate_test.fillna(0, inplace=True)
    covariate_test = covariate_test[covariate_test["FID"].isin(pcs_test["FID"].values) & covariate_test["IID"].isin(pcs_test["IID"].values)]
    covariate_test['FID'] = covariate_test['FID'].astype(str)
    pcs_test['FID'] = pcs_test['FID'].astype(str)
    covariate_test['IID'] = covariate_test['IID'].astype(str)
    pcs_test['IID'] = pcs_test['IID'].astype(str)
    covandpcs_test = pd.merge(covariate_test, pcs_test, on=["FID","IID"])
    covandpcs_test.fillna(0, inplace=True)
    normalized_values_test  = scaler.transform(covandpcs_test.iloc[:, 2:])
    covandpcs_test.to_csv(traindirec+os.sep+testfilename+".COV_PCA",sep="\t",index=False)
    covandpcs_test.iloc[:, 2:].to_csv(traindirec+os.sep+testfilename+".COV_PCAgemma", header=False, index=False,sep="\t")   
    
    
    relatedmatrixname = ""
    # Specify the output name for the relatedmatrix
    
    if relatedmatrix=="1":
        relatedmatrixname = "centered"
        outputrelatedmatrixname = "gemma.cXX.txt"
         
    else: 
        relatedmatrixname = "standardized"
        outputrelatedmatrixname = "gemma.sXX.txt"        
    
    import shutil
    # Remove any previous GEMMA output for this model so stale files are not reused.
    dir_path = os.path.join("output", traindirec, "gemma-" + gemmamodel)
    if os.path.exists(dir_path):
        shutil.rmtree(dir_path)
    
        
        
    try:
        os.makedirs(os.path.join("output", traindirec, "gemma-"+gemmamodel), exist_ok=True)
    except OSError as e:
        print(f"Error creating directory: {e}")
    
    
    if gemmamodel=='lmm':
        subprocess.run(["./gemma",
        "--bfile", traindirec+os.sep+newtrainfilename+".clumped.pruned",
        "-gk", relatedmatrix,
        "-o", traindirec+os.sep+"gemma-"+gemmamodel+os.sep+"gemma",
        "-beta",filedirec + os.sep +"gemma.txt",
        #'-c', traindirec+os.sep+trainfilename+".COV_PCAgemma",
                       
        ])
        
        command = [
            "./gemma",
             "--bfile", traindirec+os.sep+newtrainfilename+".clumped.pruned",
            "-beta",filedirec + os.sep +"gemma.txt",
            #'-c', traindirec+os.sep+trainfilename+".COV_PCAgemma",
            "-k",  "output"+os.sep+ traindirec+os.sep+"gemma-"+gemmamodel+os.sep+ outputrelatedmatrixname,   
            "-lmm",str(lmmmodel),
            "-o", traindirec+os.sep+"gemma-"+gemmamodel+os.sep+"gemma"


        ]
        
        print(" ".join(command))
        subprocess.run(command)
        try:
            temp = pd.read_csv("output"+os.sep+traindirec+os.sep+"gemma-"+gemmamodel+os.sep+"gemma.assoc.txt",sep="\s+")
        except Exception:
            print("GWAS not generated!")
            return
        
        if check_phenotype_is_binary_or_continous(filedirec)=="Binary":
            # For binary phenotypes the effect sizes are treated as ORs and
            # log-transformed; log of a non-positive value yields inf/NaN,
            # which is zeroed out.
            temp['beta'] = np.log(temp['beta'])
            temp['beta'] = temp['beta'].replace([np.inf, -np.inf], np.nan)
            temp['beta'] = temp['beta'].fillna(0)

        print(temp.head())
        # Keep only the three columns PLINK --score needs:
        # rs (SNP ID), allele1 (effect allele), and beta.
        temp.iloc[:,[1,4,7]].to_csv("output"+os.sep+traindirec+os.sep+"gemma-"+gemmamodel+os.sep+"gemma.assoc.txt",sep="\t",index=False)
  
        command = [
            "./plink",
            "--bfile", traindirec+os.sep+newtrainfilename+".clumped.pruned",
            # In the reduced gemma.assoc.txt: column 1 = SNP ID,
            # column 2 = effect allele, column 3 = beta.
            "--score", "output"+os.sep+traindirec+os.sep+"gemma-"+gemmamodel+os.sep+"gemma.assoc.txt", "1", "2", "3", "header",
            "--q-score-range", traindirec+os.sep+"range_list",traindirec+os.sep+"SNP.pvalue",
            "--extract", traindirec+os.sep+trainfilename+".valid.snp",
            "--out", traindirec+os.sep+Name+os.sep+trainfilename
        ]
        subprocess.run(command)



        command = [
            "./plink",
            "--bfile", folddirec+os.sep+testfilename+".clumped.pruned",
            # In the reduced gemma.assoc.txt: column 1 = SNP ID,
            # column 2 = effect allele, column 3 = beta.
            "--score", "output"+os.sep+traindirec+os.sep+"gemma-"+gemmamodel+os.sep+"gemma.assoc.txt", "1", "2", "3", "header",
            "--q-score-range", traindirec+os.sep+"range_list",traindirec+os.sep+"SNP.pvalue",
            "--extract", traindirec+os.sep+trainfilename+".valid.snp",
            "--out", folddirec+os.sep+Name+os.sep+testfilename
        ]
        subprocess.run(command)     
       
    
        if check_phenotype_is_binary_or_continous(filedirec)=="Binary":
            print("Binary Phenotype!")
            fit_binary_phenotype_on_PRS(traindirec, newtrainfilename,p,gemmamodel,relatedmatrixname,lmmmodel, p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile)
        else:
            print("Continous Phenotype!")
            fit_continous_phenotype_on_PRS(traindirec, newtrainfilename,p,gemmamodel,relatedmatrixname,lmmmodel, p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile)
         
     
    
 

 
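For reference, the parameter grids below sweep GEMMA's `-gk` and `-lmm` options. Per the GEMMA manual the values map as follows; this lookup is a sketch for illustration only, since the pipeline passes the raw strings straight to `./gemma`:

```python
# Reference mapping of GEMMA flag values (from the GEMMA manual);
# illustrative only -- the pipeline passes the raw strings to ./gemma.
GK_MATRIX = {"1": "centered relatedness matrix (gemma.cXX.txt)",
             "2": "standardized relatedness matrix (gemma.sXX.txt)"}
LMM_TEST = {"1": "Wald test", "2": "likelihood ratio test",
            "3": "score test", "4": "all three tests"}

for gk in ["1", "2"]:
    for lmm in ["1", "3"]:
        print(f"-gk {gk}: {GK_MATRIX[gk]}; -lmm {lmm}: {LMM_TEST[lmm]}")
```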
gemmamodels = ['lmm']
relatedmatrixs = ["1","2"]
lmmmodels = ["1","3"]

result_directory = "GEMMA-LMM"
# Nested loops to iterate over different parameter values
create_directory(folddirec+os.sep+result_directory)
for p1_val in p_window_size:
 for p2_val in p_slide_size: 
  for p3_val in p_LD_threshold:
   for c1_val in clump_p1:
    for c2_val in clump_r2:
     for c3_val in clump_kb:
      for p in numberofpca:
        for gemmamodel in gemmamodels:
         for relatedmatrix in relatedmatrixs:
          for lmmmodel in lmmmodels:
           transform_gemma_llm_data(folddirec, newtrainfilename, p,gemmamodel,relatedmatrix,lmmmodel, str(p1_val), str(p2_val), str(p3_val), str(c1_val), str(c2_val), str(c3_val), result_directory, pvaluefile)
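The `temp.iloc[:, [1, 4, 7]]` selection above keeps the SNP ID, effect allele, and beta from GEMMA's `.assoc.txt` (whose columns are chr, rs, ps, n_miss, allele1, allele0, af, beta, se, logl_H1, l_remle, p_wald): exactly the three fields passed to PLINK `--score`. A self-contained sketch with one toy row taken from the sample output:

```python
# Toy illustration of the column selection used above: iloc[:, [1, 4, 7]]
# keeps the rs (SNP ID), allele1 (effect allele), and beta columns of
# GEMMA's association output, which PLINK --score consumes.
import pandas as pd

cols = ["chr", "rs", "ps", "n_miss", "allele1", "allele0",
        "af", "beta", "se", "logl_H1", "l_remle", "p_wald"]
row = [1, "rs79373928", 801536, 0, "G", "T",
       0.014, -0.023587, 0.287785, -513.1913, 100000.0, 0.934721]
toy = pd.DataFrame([row], columns=cols)

score_input = toy.iloc[:, [1, 4, 7]]
print(score_input.columns.tolist())  # ['rs', 'allele1', 'beta']
```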
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/train_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/train_data.QC
  --indep-pairwise 200 50 0.25
  --out SampleData1/Fold_0/train_data

63761 MB RAM detected; reserving 31880 MB for main workspace.
491952 variants loaded from .bim file.
380 people (183 males, 197 females) loaded from .fam.
380 phenotype values loaded from .fam.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 380 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999894.
491952 variants and 380 people pass filters and QC.
Phenotype data is quantitative.
Pruned 18860 variants from chromosome 1, leaving 20363.
Pruned 19645 variants from chromosome 2, leaving 20067.
Pruned 16414 variants from chromosome 3, leaving 17080.
Pruned 15404 variants from chromosome 4, leaving 16035.
Pruned 14196 variants from chromosome 5, leaving 15379.
Pruned 19368 variants from chromosome 6, leaving 14770.
Pruned 13110 variants from chromosome 7, leaving 13997.
Pruned 12431 variants from chromosome 8, leaving 12966.
Pruned 9982 variants from chromosome 9, leaving 11477.
Pruned 11999 variants from chromosome 10, leaving 12850.
Pruned 12156 variants from chromosome 11, leaving 12221.
Pruned 10979 variants from chromosome 12, leaving 12050.
Pruned 7923 variants from chromosome 13, leaving 9247.
Pruned 7624 variants from chromosome 14, leaving 8448.
Pruned 7387 variants from chromosome 15, leaving 8145.
Pruned 8063 variants from chromosome 16, leaving 8955.
Pruned 7483 variants from chromosome 17, leaving 8361.
Pruned 6767 variants from chromosome 18, leaving 8240.
Pruned 6438 variants from chromosome 19, leaving 6432.
Pruned 5972 variants from chromosome 20, leaving 7202.
Pruned 3426 variants from chromosome 21, leaving 4102.
Pruned 3801 variants from chromosome 22, leaving 4137.
Pruning complete.  239428 of 491952 variants removed.
Marker lists written to SampleData1/Fold_0/train_data.prune.in and
SampleData1/Fold_0/train_data.prune.out .
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/train_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/train_data.QC
  --clump SampleData1/SampleData1.txt
  --clump-field P
  --clump-kb 200
  --clump-p1 1
  --clump-r2 0.1
  --clump-snp-field SNP
  --extract SampleData1/Fold_0/train_data.prune.in
  --out SampleData1/Fold_0/train_data

63761 MB RAM detected; reserving 31880 MB for main workspace.
491952 variants loaded from .bim file.
380 people (183 males, 197 females) loaded from .fam.
380 phenotype values loaded from .fam.
--extract: 252524 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 380 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999894.
252524 variants and 380 people pass filters and QC.
Phenotype data is quantitative.
--clump: 172878 clumps formed from 252524 top variants.
Results written to SampleData1/Fold_0/train_data.clumped .
Warning: 'rs3134762' is missing from the main dataset, and is a top variant.
Warning: 'rs3132505' is missing from the main dataset, and is a top variant.
Warning: 'rs3130424' is missing from the main dataset, and is a top variant.
247090 more top variant IDs missing; see log file.
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/train_data.QC.clumped.pruned.log.
Options in effect:
  --bfile SampleData1/Fold_0/train_data.QC
  --extract SampleData1/Fold_0/train_data.valid.snp
  --indep-pairwise 200 50 0.25
  --make-bed
  --out SampleData1/Fold_0/train_data.QC.clumped.pruned

63761 MB RAM detected; reserving 31880 MB for main workspace.
491952 variants loaded from .bim file.
380 people (183 males, 197 females) loaded from .fam.
380 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 380 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 380 people pass filters and QC.
Phenotype data is quantitative.
--make-bed to SampleData1/Fold_0/train_data.QC.clumped.pruned.bed +
SampleData1/Fold_0/train_data.QC.clumped.pruned.bim +
SampleData1/Fold_0/train_data.QC.clumped.pruned.fam ... done.
Pruned 2 variants from chromosome 1, leaving 14011.
Pruned 2 variants from chromosome 2, leaving 13811.
Pruned 2 variants from chromosome 3, leaving 11783.
Pruned 0 variants from chromosome 4, leaving 11041.
Pruned 1 variant from chromosome 5, leaving 10631.
Pruned 50 variants from chromosome 6, leaving 10018.
Pruned 0 variants from chromosome 7, leaving 9496.
Pruned 4 variants from chromosome 8, leaving 8863.
Pruned 0 variants from chromosome 9, leaving 7768.
Pruned 5 variants from chromosome 10, leaving 8819.
Pruned 10 variants from chromosome 11, leaving 8410.
Pruned 0 variants from chromosome 12, leaving 8198.
Pruned 0 variants from chromosome 13, leaving 6350.
Pruned 1 variant from chromosome 14, leaving 5741.
Pruned 0 variants from chromosome 15, leaving 5569.
Pruned 2 variants from chromosome 16, leaving 6067.
Pruned 1 variant from chromosome 17, leaving 5722.
Pruned 0 variants from chromosome 18, leaving 5578.
Pruned 0 variants from chromosome 19, leaving 4364.
Pruned 0 variants from chromosome 20, leaving 4916.
Pruned 0 variants from chromosome 21, leaving 2811.
Pruned 0 variants from chromosome 22, leaving 2831.
Pruning complete.  80 of 172878 variants removed.
Marker lists written to
SampleData1/Fold_0/train_data.QC.clumped.pruned.prune.in and
SampleData1/Fold_0/train_data.QC.clumped.pruned.prune.out .
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/test_data.clumped.pruned.log.
Options in effect:
  --bfile SampleData1/Fold_0/test_data
  --extract SampleData1/Fold_0/train_data.valid.snp
  --indep-pairwise 200 50 0.25
  --make-bed
  --out SampleData1/Fold_0/test_data.clumped.pruned

63761 MB RAM detected; reserving 31880 MB for main workspace.
551892 variants loaded from .bim file.
95 people (44 males, 51 females) loaded from .fam.
95 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 95 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 95 people pass filters and QC.
Phenotype data is quantitative.
--make-bed to SampleData1/Fold_0/test_data.clumped.pruned.bed +
SampleData1/Fold_0/test_data.clumped.pruned.bim +
SampleData1/Fold_0/test_data.clumped.pruned.fam ... done.
Pruned 1829 variants from chromosome 1, leaving 12184.
Pruned 1861 variants from chromosome 2, leaving 11952.
Pruned 1567 variants from chromosome 3, leaving 10218.
Pruned 1415 variants from chromosome 4, leaving 9626.
Pruned 1347 variants from chromosome 5, leaving 9285.
Pruned 1291 variants from chromosome 6, leaving 8777.
Pruned 1238 variants from chromosome 7, leaving 8258.
Pruned 1144 variants from chromosome 8, leaving 7723.
Pruned 902 variants from chromosome 9, leaving 6866.
Pruned 1090 variants from chromosome 10, leaving 7734.
Pruned 1036 variants from chromosome 11, leaving 7384.
Pruned 1061 variants from chromosome 12, leaving 7137.
Pruned 771 variants from chromosome 13, leaving 5579.
Pruned 683 variants from chromosome 14, leaving 5059.
Pruned 603 variants from chromosome 15, leaving 4966.
Pruned 710 variants from chromosome 16, leaving 5359.
Pruned 605 variants from chromosome 17, leaving 5118.
Pruned 648 variants from chromosome 18, leaving 4930.
Pruned 384 variants from chromosome 19, leaving 3980.
Pruned 559 variants from chromosome 20, leaving 4357.
Pruned 297 variants from chromosome 21, leaving 2514.
Pruned 276 variants from chromosome 22, leaving 2555.
Pruning complete.  21317 of 172878 variants removed.
Marker lists written to SampleData1/Fold_0/test_data.clumped.pruned.prune.in
and SampleData1/Fold_0/test_data.clumped.pruned.prune.out .
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/test_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/test_data.clumped.pruned
  --extract SampleData1/Fold_0/train_data.valid.snp
  --out SampleData1/Fold_0/test_data
  --pca 6

63761 MB RAM detected; reserving 31880 MB for main workspace.
172878 variants loaded from .bim file.
95 people (44 males, 51 females) loaded from .fam.
95 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using up to 8 threads (change this with --threads).
Before main variant filters, 95 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 95 people pass filters and QC.
Phenotype data is quantitative.
Relationship matrix calculation complete.
--pca: Results saved to SampleData1/Fold_0/test_data.eigenval and
SampleData1/Fold_0/test_data.eigenvec .
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/train_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/train_data.QC.clumped.pruned
  --extract SampleData1/Fold_0/train_data.valid.snp
  --out SampleData1/Fold_0/train_data
  --pca 6

63761 MB RAM detected; reserving 31880 MB for main workspace.
172878 variants loaded from .bim file.
380 people (183 males, 197 females) loaded from .fam.
380 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using up to 8 threads (change this with --threads).
Before main variant filters, 380 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 380 people pass filters and QC.
Phenotype data is quantitative.
Relationship matrix calculation complete.
--pca: Results saved to SampleData1/Fold_0/train_data.eigenval and
SampleData1/Fold_0/train_data.eigenvec .
GEMMA 0.98.5 (2021-08-25) by Xiang Zhou, Pjotr Prins and team (C) 2012-2021
Reading Files ... 
## number of total individuals = 380
## number of analyzed individuals = 380
## number of covariates = 1
## number of phenotypes = 1
## number of total SNPs/var        =   172878
## number of analyzed SNPs         =   172878
Calculating Relatedness Matrix ... 
================================================== 100%
**** INFO: Done.
./gemma --bfile SampleData1/Fold_0/train_data.QC.clumped.pruned -beta SampleData1/gemma.txt -k output/SampleData1/Fold_0/gemma-lmm/gemma.cXX.txt -lmm 1 -o SampleData1/Fold_0/gemma-lmm/gemma
GEMMA 0.98.5 (2021-08-25) by Xiang Zhou, Pjotr Prins and team (C) 2012-2021
Reading Files ... 
## number of total individuals = 380
## number of analyzed individuals = 380
## number of covariates = 1
## number of phenotypes = 1
## number of total SNPs/var        =   172878
## number of analyzed SNPs         =   172878
Start Eigen-Decomposition...
pve estimate =0.999937
se(pve) =0.00519386
================================================== 100%
**** INFO: Done.
   chr           rs      ps  n_miss allele1 allele0     af      beta  \
0    1   rs79373928  801536       0       G       T  0.014 -0.023587   
1    1    rs4970382  840753       0       C       T  0.407 -0.066634   
2    1   rs13303222  849998       0       A       G  0.196 -0.012971   
3    1   rs72631889  851390       0       T       G  0.034  0.310745   
4    1  rs192998324  862772       0       G       A  0.028 -0.253905   

         se   logl_H1   l_remle    p_wald  
0  0.287785 -513.1913  100000.0  0.934721  
1  0.066285 -512.6878  100000.0  0.315413  
2  0.086208 -513.1769  100000.0  0.880479  
3  0.192044 -511.8818  100000.0  0.106477  
4  0.212404 -512.4738  100000.0  0.232684  
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/GEMMA-LMM/train_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/train_data.QC.clumped.pruned
  --extract SampleData1/Fold_0/train_data.valid.snp
  --out SampleData1/Fold_0/GEMMA-LMM/train_data
  --q-score-range SampleData1/Fold_0/range_list SampleData1/Fold_0/SNP.pvalue
  --score output/SampleData1/Fold_0/gemma-lmm/gemma.assoc.txt 1 2 3 header

63761 MB RAM detected; reserving 31880 MB for main workspace.
172878 variants loaded from .bim file.
380 people (183 males, 197 females) loaded from .fam.
380 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 380 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 380 people pass filters and QC.
Phenotype data is quantitative.
--score: 172878 valid predictors loaded.
Warning: 326740 lines skipped in --q-score-range data file.
--score: 20 ranges processed.
Results written to SampleData1/Fold_0/GEMMA-LMM/train_data.*.profile.
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/GEMMA-LMM/test_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/test_data.clumped.pruned
  --extract SampleData1/Fold_0/train_data.valid.snp
  --out SampleData1/Fold_0/GEMMA-LMM/test_data
  --q-score-range SampleData1/Fold_0/range_list SampleData1/Fold_0/SNP.pvalue
  --score output/SampleData1/Fold_0/gemma-lmm/gemma.assoc.txt 1 2 3 header

63761 MB RAM detected; reserving 31880 MB for main workspace.
172878 variants loaded from .bim file.
95 people (44 males, 51 females) loaded from .fam.
95 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 95 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 95 people pass filters and QC.
Phenotype data is quantitative.
--score: 172878 valid predictors loaded.
Warning: 326740 lines skipped in --q-score-range data file.
--score: 20 ranges processed.
Results written to SampleData1/Fold_0/GEMMA-LMM/test_data.*.profile.
Continous Phenotype!
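The binary branch in `transform_gemma_llm_data` log-transforms GEMMA's betas and zeroes out non-finite values before writing the score file. A minimal standalone sketch of that cleanup:

```python
# Standalone sketch of the binary-phenotype beta cleanup used above:
# effect sizes are log-transformed, and any non-finite result (e.g. the
# log of a non-positive value) is replaced with 0 so PLINK --score can
# consume the column.
import numpy as np
import pandas as pd

ors = pd.Series([0.5, 1.0, 2.0, 0.0])  # toy odds ratios
with np.errstate(divide="ignore"):
    beta = np.log(ors)                  # log(0) -> -inf
beta = beta.replace([np.inf, -np.inf], np.nan).fillna(0)
print(beta.tolist())
```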
37740 markers complete.
37800 markers complete.
37860 markers complete.
37920 markers complete.
37980 markers complete.
38040 markers complete.
38100 markers complete.
38160 markers complete.
38220 markers complete.
38280 markers complete.
38340 markers complete.
38400 markers complete.
38460 markers complete.
38520 markers complete.
38580 markers complete.
38640 markers complete.
38700 markers complete.
38760 markers complete.
38820 markers complete.
38880 markers complete.
38940 markers complete.
39000 markers complete.
39060 markers complete.
39120 markers complete.
39180 markers complete.
39240 markers complete.
39300 markers complete.
39360 markers complete.
39420 markers complete.
39480 markers complete.
39540 markers complete.
39600 markers complete.
39660 markers complete.
39720 markers complete.
39780 markers complete.
39840 markers complete.
39900 markers complete.
39960 markers complete.
40020 markers complete.
40080 markers complete.
40140 markers complete.
40200 markers complete.
40260 markers complete.
40320 markers complete.
40380 markers complete.
40440 markers complete.
40500 markers complete.
40560 markers complete.
40620 markers complete.
40680 markers complete.
40740 markers complete.
40800 markers complete.
40860 markers complete.
40920 markers complete.
40980 markers complete.
41040 markers complete.
41100 markers complete.
41160 markers complete.
41220 markers complete.
41280 markers complete.
41340 markers complete.
41400 markers complete.
41460 markers complete.
41520 markers complete.
41580 markers complete.
41640 markers complete.
41700 markers complete.
41760 markers complete.
41820 markers complete.
41880 markers complete.
41940 markers complete.
42000 markers complete.
42060 markers complete.
42120 markers complete.
42180 markers complete.
42240 markers complete.
42300 markers complete.
42360 markers complete.
42420 markers complete.
42480 markers complete.
42540 markers complete.
42600 markers complete.
42660 markers complete.
42720 markers complete.
42780 markers complete.
42840 markers complete.
42900 markers complete.
42960 markers complete.
43020 markers complete.
43080 markers complete.
43140 markers complete.
43200 markers complete.
43260 markers complete.
43320 markers complete.
43380 markers complete.
43440 markers complete.
43500 markers complete.
43560 markers complete.
43620 markers complete.
43680 markers complete.
43740 markers complete.
43800 markers complete.
43860 markers complete.
43920 markers complete.
43980 markers complete.
44040 markers complete.
44100 markers complete.
44160 markers complete.
44220 markers complete.
44280 markers complete.
44340 markers complete.
44400 markers complete.
44460 markers complete.
44520 markers complete.
44580 markers complete.
44640 markers complete.
44700 markers complete.
44760 markers complete.
44820 markers complete.
44880 markers complete.
44940 markers complete.
45000 markers complete.
45060 markers complete.
45120 markers complete.
45180 markers complete.
45240 markers complete.
45300 markers complete.
45360 markers complete.
45420 markers complete.
45480 markers complete.
45540 markers complete.
45600 markers complete.
45660 markers complete.
45720 markers complete.
45780 markers complete.
45840 markers complete.
45900 markers complete.
45960 markers complete.
46020 markers complete.
46080 markers complete.
46140 markers complete.
46200 markers complete.
46260 markers complete.
46320 markers complete.
46380 markers complete.
46440 markers complete.
46500 markers complete.
46560 markers complete.
46620 markers complete.
46680 markers complete.
46740 markers complete.
46800 markers complete.
46860 markers complete.
46920 markers complete.
46980 markers complete.
47040 markers complete.
47100 markers complete.
47160 markers complete.
47220 markers complete.
47280 markers complete.
47340 markers complete.
47400 markers complete.
47460 markers complete.
47520 markers complete.
47580 markers complete.
47640 markers complete.
47700 markers complete.
47760 markers complete.
47820 markers complete.
47880 markers complete.
47940 markers complete.
48000 markers complete.
48060 markers complete.
48120 markers complete.
48180 markers complete.
48240 markers complete.
48300 markers complete.
48360 markers complete.
48420 markers complete.
48480 markers complete.
48540 markers complete.
48600 markers complete.
48660 markers complete.
48720 markers complete.
48780 markers complete.
48840 markers complete.
48900 markers complete.
48960 markers complete.
49020 markers complete.
49080 markers complete.
49140 markers complete.
49200 markers complete.
49260 markers complete.
49320 markers complete.
49380 markers complete.
49440 markers complete.
49500 markers complete.
49560 markers complete.
49620 markers complete.
49680 markers complete.
49740 markers complete.
49800 markers complete.
49860 markers complete.
49920 markers complete.
49980 markers complete.
50040 markers complete.
50100 markers complete.
50160 markers complete.
50220 markers complete.
50280 markers complete.
50340 markers complete.
50400 markers complete.
50460 markers complete.
50520 markers complete.
50580 markers complete.
50640 markers complete.
50700 markers complete.
50760 markers complete.
50820 markers complete.
50880 markers complete.
50940 markers complete.
51000 markers complete.
51060 markers complete.
51120 markers complete.
51180 markers complete.
51240 markers complete.
51300 markers complete.
51360 markers complete.
51420 markers complete.
51480 markers complete.
51540 markers complete.
51600 markers complete.
51660 markers complete.
51720 markers complete.
51780 markers complete.
51840 markers complete.
51900 markers complete.
51960 markers complete.
52020 markers complete.
52080 markers complete.
52140 markers complete.
52200 markers complete.
52260 markers complete.
52320 markers complete.
52380 markers complete.
52440 markers complete.
52500 markers complete.
52560 markers complete.
52620 markers complete.
52680 markers complete.
52740 markers complete.
52800 markers complete.
52860 markers complete.
52920 markers complete.
52980 markers complete.
53040 markers complete.
53100 markers complete.
53160 markers complete.
53220 markers complete.
53280 markers complete.
53340 markers complete.
53400 markers complete.
53460 markers complete.
53520 markers complete.
53580 markers complete.
53640 markers complete.
53700 markers complete.
53760 markers complete.
53820 markers complete.
53880 markers complete.
53940 markers complete.
54000 markers complete.
54060 markers complete.
54120 markers complete.
54180 markers complete.
54240 markers complete.
54300 markers complete.
54360 markers complete.
54420 markers complete.
54480 markers complete.
54540 markers complete.
54600 markers complete.
54660 markers complete.
54720 markers complete.
54780 markers complete.
54840 markers complete.
54900 markers complete.
54960 markers complete.
55020 markers complete.
55080 markers complete.
55140 markers complete.
55200 markers complete.
55260 markers complete.
55320 markers complete.
55380 markers complete.
55440 markers complete.
55500 markers complete.
55560 markers complete.
55620 markers complete.
55680 markers complete.
55740 markers complete.
55800 markers complete.
55860 markers complete.
55920 markers complete.
55980 markers complete.
56040 markers complete.
56100 markers complete.
56160 markers complete.
56220 markers complete.
56280 markers complete.
56340 markers complete.
56400 markers complete.
56460 markers complete.
56520 markers complete.
56580 markers complete.
56640 markers complete.
56700 markers complete.
56760 markers complete.
56820 markers complete.
56880 markers complete.
56940 markers complete.
57000 markers complete.
57060 markers complete.
57120 markers complete.
57180 markers complete.
57240 markers complete.
57300 markers complete.
57360 markers complete.
57420 markers complete.
57480 markers complete.
57540 markers complete.
57600 markers complete.
57660 markers complete.
57720 markers complete.
57780 markers complete.
57840 markers complete.
57900 markers complete.
57960 markers complete.
58020 markers complete.
58080 markers complete.
58140 markers complete.
58200 markers complete.
58260 markers complete.
58320 markers complete.
58380 markers complete.
58440 markers complete.
58500 markers complete.
58560 markers complete.
58620 markers complete.
58680 markers complete.
58740 markers complete.
58800 markers complete.
58860 markers complete.
58920 markers complete.
58980 markers complete.
59040 markers complete.
59100 markers complete.
59160 markers complete.
59220 markers complete.
59280 markers complete.
59340 markers complete.
59400 markers complete.
59460 markers complete.
59520 markers complete.
59580 markers complete.
59640 markers complete.
59700 markers complete.
59760 markers complete.
59820 markers complete.
59880 markers complete.
59940 markers complete.
60000 markers complete.
60060 markers complete.
60120 markers complete.
60180 markers complete.
60240 markers complete.
60300 markers complete.
60360 markers complete.
60420 markers complete.
60480 markers complete.
60540 markers complete.
60600 markers complete.
60660 markers complete.
60720 markers complete.
60780 markers complete.
60840 markers complete.
60900 markers complete.
60960 markers complete.
61020 markers complete.
61080 markers complete.
61140 markers complete.
61200 markers complete.
61260 markers complete.
61320 markers complete.
61380 markers complete.
61440 markers complete.
61500 markers complete.
61560 markers complete.
61620 markers complete.
61680 markers complete.
61740 markers complete.
61800 markers complete.
61860 markers complete.
61920 markers complete.
61980 markers complete.
62040 markers complete.
62100 markers complete.
62160 markers complete.
62220 markers complete.
62280 markers complete.
62340 markers complete.
62400 markers complete.
62460 markers complete.
62520 markers complete.
62580 markers complete.
62640 markers complete.
62700 markers complete.
62760 markers complete.
62820 markers complete.
62880 markers complete.
62940 markers complete.
63000 markers complete.
63060 markers complete.
63120 markers complete.
63180 markers complete.
63240 markers complete.
63300 markers complete.
63360 markers complete.
63420 markers complete.
63480 markers complete.
63540 markers complete.
63600 markers complete.
63660 markers complete.
63720 markers complete.
63780 markers complete.
63840 markers complete.
63900 markers complete.
63960 markers complete.
64020 markers complete.
64080 markers complete.
64140 markers complete.
64200 markers complete.
64260 markers complete.
64320 markers complete.
64380 markers complete.
64440 markers complete.
64500 markers complete.
64560 markers complete.
64620 markers complete.
64680 markers complete.
64740 markers complete.
64800 markers complete.
64860 markers complete.
64920 markers complete.
64980 markers complete.
65040 markers complete.
65100 markers complete.
65160 markers complete.
65220 markers complete.
65280 markers complete.
65340 markers complete.
65400 markers complete.
65460 markers complete.
65520 markers complete.
65580 markers complete.
65640 markers complete.
65700 markers complete.
65760 markers complete.
65820 markers complete.
65880 markers complete.
65940 markers complete.
66000 markers complete.
66060 markers complete.
66120 markers complete.
66180 markers complete.
66240 markers complete.
66300 markers complete.
66360 markers complete.
66420 markers complete.
66480 markers complete.
66540 markers complete.
66600 markers complete.
66660 markers complete.
66720 markers complete.
66780 markers complete.
66840 markers complete.
66900 markers complete.
66960 markers complete.
67020 markers complete.
67080 markers complete.
67140 markers complete.
67200 markers complete.
67260 markers complete.
67320 markers complete.
67380 markers complete.
67440 markers complete.
67500 markers complete.
67560 markers complete.
67620 markers complete.
67680 markers complete.
67740 markers complete.
67800 markers complete.
67860 markers complete.
67920 markers complete.
67980 markers complete.
68040 markers complete.
68100 markers complete.
68160 markers complete.
68220 markers complete.
68280 markers complete.
68340 markers complete.
68400 markers complete.
68460 markers complete.
68520 markers complete.
68580 markers complete.
68640 markers complete.
68700 markers complete.
68760 markers complete.
68820 markers complete.
68880 markers complete.
68940 markers complete.
69000 markers complete.
69060 markers complete.
69120 markers complete.
69180 markers complete.
69240 markers complete.
69300 markers complete.
69360 markers complete.
69420 markers complete.
69480 markers complete.
69540 markers complete.
69600 markers complete.
69660 markers complete.
69720 markers complete.
69780 markers complete.
69840 markers complete.
69900 markers complete.
69960 markers complete.
70020 markers complete.
70080 markers complete.
70140 markers complete.
70200 markers complete.
70260 markers complete.
70320 markers complete.
70380 markers complete.
70440 markers complete.
70500 markers complete.
70560 markers complete.
70620 markers complete.
70680 markers complete.
70740 markers complete.
70800 markers complete.
70860 markers complete.
70920 markers complete.
70980 markers complete.
71040 markers complete.
71100 markers complete.
71160 markers complete.
71220 markers complete.
71280 markers complete.
71340 markers complete.
71400 markers complete.
71460 markers complete.
71520 markers complete.
71580 markers complete.
71640 markers complete.
71700 markers complete.
71760 markers complete.
71820 markers complete.
71880 markers complete.
71940 markers complete.
72000 markers complete.
72060 markers complete.
72120 markers complete.
72180 markers complete.
72240 markers complete.
72300 markers complete.
72360 markers complete.
72420 markers complete.
72480 markers complete.
72540 markers complete.
72600 markers complete.
72660 markers complete.
72720 markers complete.
72780 markers complete.
72840 markers complete.
72900 markers complete.
72960 markers complete.
73020 markers complete.
73080 markers complete.
73140 markers complete.
73200 markers complete.
73260 markers complete.
73320 markers complete.
73380 markers complete.
73440 markers complete.
73500 markers complete.
73560 markers complete.
73620 markers complete.
73680 markers complete.
73740 markers complete.
73800 markers complete.
73860 markers complete.
73920 markers complete.
73980 markers complete.
74040 markers complete.
74100 markers complete.
74160 markers complete.
74220 markers complete.
74280 markers complete.
74340 markers complete.
74400 markers complete.
74460 markers complete.
74520 markers complete.
74580 markers complete.
74640 markers complete.
74700 markers complete.
74760 markers complete.
74820 markers complete.
74880 markers complete.
74940 markers complete.
75000 markers complete.
75060 markers complete.
75120 markers complete.
75180 markers complete.
75240 markers complete.
75300 markers complete.
75360 markers complete.
75420 markers complete.
75480 markers complete.
75540 markers complete.
75600 markers complete.
75660 markers complete.
75720 markers complete.
75780 markers complete.
75840 markers complete.
75900 markers complete.
75960 markers complete.
76020 markers complete.
76080 markers complete.
76140 markers complete.
76200 markers complete.
76260 markers complete.
76320 markers complete.
76380 markers complete.
76440 markers complete.
76500 markers complete.
76560 markers complete.
76620 markers complete.
76680 markers complete.
76740 markers complete.
76800 markers complete.
76860 markers complete.
76920 markers complete.
76980 markers complete.
77040 markers complete.
77100 markers complete.
77160 markers complete.
77220 markers complete.
77280 markers complete.
77340 markers complete.
77400 markers complete.
77460 markers complete.
77520 markers complete.
77580 markers complete.
77640 markers complete.
77700 markers complete.
77760 markers complete.
77820 markers complete.
77880 markers complete.
77940 markers complete.
78000 markers complete.
78060 markers complete.
78120 markers complete.
78180 markers complete.
78240 markers complete.
78300 markers complete.
78360 markers complete.
78420 markers complete.
78480 markers complete.
78540 markers complete.
78600 markers complete.
78660 markers complete.
78720 markers complete.
78780 markers complete.
78840 markers complete.
78900 markers complete.
78960 markers complete.
79020 markers complete.
79080 markers complete.
79140 markers complete.
79200 markers complete.
79260 markers complete.
79320 markers complete.
79380 markers complete.
79440 markers complete.
79500 markers complete.
79560 markers complete.
79620 markers complete.
79680 markers complete.
79740 markers complete.
79800 markers complete.
79860 markers complete.
79920 markers complete.
79980 markers complete.
80040 markers complete.
80100 markers complete.
80160 markers complete.
80220 markers complete.
80280 markers complete.
80340 markers complete.
80400 markers complete.
80460 markers complete.
80520 markers complete.
80580 markers complete.
80640 markers complete.
80700 markers complete.
80760 markers complete.
80820 markers complete.
80880 markers complete.
80940 markers complete.
81000 markers complete.
81060 markers complete.
81120 markers complete.
81180 markers complete.
81240 markers complete.
81300 markers complete.
81360 markers complete.
81420 markers complete.
81480 markers complete.
81540 markers complete.
81600 markers complete.
81660 markers complete.
81720 markers complete.
81780 markers complete.
81840 markers complete.
81900 markers complete.
81960 markers complete.
82020 markers complete.
82080 markers complete.
82140 markers complete.
82200 markers complete.
82260 markers complete.
82320 markers complete.
82380 markers complete.
82440 markers complete.
82500 markers complete.
82560 markers complete.
82620 markers complete.
82680 markers complete.
82740 markers complete.
82800 markers complete.
82860 markers complete.
82920 markers complete.
82980 markers complete.
83040 markers complete.
83100 markers complete.
83160 markers complete.
83220 markers complete.
83280 markers complete.
83340 markers complete.
83400 markers complete.
83460 markers complete.
83520 markers complete.
83580 markers complete.
83640 markers complete.
83700 markers complete.
83760 markers complete.
83820 markers complete.
83880 markers complete.
83940 markers complete.
84000 markers complete.
84060 markers complete.
84120 markers complete.
84180 markers complete.
84240 markers complete.
84300 markers complete.
84360 markers complete.
84420 markers complete.
84480 markers complete.
84540 markers complete.
84600 markers complete.
84660 markers complete.
84720 markers complete.
84780 markers complete.
84840 markers complete.
84900 markers complete.
84960 markers complete.
85020 markers complete.
85080 markers complete.
85140 markers complete.
85200 markers complete.
85260 markers complete.
85320 markers complete.
85380 markers complete.
85440 markers complete.
85500 markers complete.
85560 markers complete.
85620 markers complete.
85680 markers complete.
85740 markers complete.
85800 markers complete.
85860 markers complete.
85920 markers complete.
85980 markers complete.
86040 markers complete.
86100 markers complete.
86160 markers complete.
86220 markers complete.
86280 markers complete.
86340 markers complete.
86400 markers complete.
86460 markers complete.
86520 markers complete.
86580 markers complete.
86640 markers complete.
86700 markers complete.
86760 markers complete.
86820 markers complete.
86880 markers complete.
86940 markers complete.
87000 markers complete.
87060 markers complete.
87120 markers complete.
87180 markers complete.
87240 markers complete.
87300 markers complete.
87360 markers complete.
87420 markers complete.
87480 markers complete.
87540 markers complete.
87600 markers complete.
87660 markers complete.
87720 markers complete.
87780 markers complete.
87840 markers complete.
87900 markers complete.
87960 markers complete.
88020 markers complete.
88080 markers complete.
88140 markers complete.
88200 markers complete.
88260 markers complete.
88320 markers complete.
88380 markers complete.
88440 markers complete.
88500 markers complete.
88560 markers complete.
88620 markers complete.
88680 markers complete.
88740 markers complete.
88800 markers complete.
88860 markers complete.
88920 markers complete.
88980 markers complete.
89040 markers complete.
89100 markers complete.
89160 markers complete.
89220 markers complete.
89280 markers complete.
89340 markers complete.
89400 markers complete.
89460 markers complete.
89520 markers complete.
89580 markers complete.
89640 markers complete.
89700 markers complete.
89760 markers complete.
89820 markers complete.
89880 markers complete.
89940 markers complete.
90000 markers complete.
90060 markers complete.
90120 markers complete.
90180 markers complete.
90240 markers complete.
90300 markers complete.
90360 markers complete.
90420 markers complete.
90480 markers complete.
90540 markers complete.
90600 markers complete.
90660 markers complete.
90720 markers complete.
90780 markers complete.
90840 markers complete.
90900 markers complete.
90960 markers complete.
91020 markers complete.
91080 markers complete.
91140 markers complete.
91200 markers complete.
91260 markers complete.
91320 markers complete.
91380 markers complete.
91440 markers complete.
91500 markers complete.
91560 markers complete.
91620 markers complete.
91680 markers complete.
91740 markers complete.
91800 markers complete.
91860 markers complete.
91920 markers complete.
91980 markers complete.
92040 markers complete.
92100 markers complete.
92160 markers complete.
92220 markers complete.
92280 markers complete.
92340 markers complete.
92400 markers complete.
92460 markers complete.
92520 markers complete.
92580 markers complete.
92640 markers complete.
92700 markers complete.
92760 markers complete.
92820 markers complete.
92880 markers complete.
92940 markers complete.
93000 markers complete.
93060 markers complete.
93120 markers complete.
93180 markers complete.
93240 markers complete.
93300 markers complete.
93360 markers complete.
93420 markers complete.
93480 markers complete.
93540 markers complete.
93600 markers complete.
93660 markers complete.
93720 markers complete.
93780 markers complete.
93840 markers complete.
93900 markers complete.
93960 markers complete.
94020 markers complete.
94080 markers complete.
94140 markers complete.
94200 markers complete.
94260 markers complete.
94320 markers complete.
94380 markers complete.
94440 markers complete.
94500 markers complete.
94560 markers complete.
94620 markers complete.
94680 markers complete.
94740 markers complete.
94800 markers complete.
94860 markers complete.
94920 markers complete.
94980 markers complete.
95040 markers complete.
95100 markers complete.
95160 markers complete.
95220 markers complete.
95280 markers complete.
95340 markers complete.
95400 markers complete.
95460 markers complete.
95520 markers complete.
95580 markers complete.
95640 markers complete.
95700 markers complete.
95760 markers complete.
95820 markers complete.
95880 markers complete.
95940 markers complete.
96000 markers complete.
96060 markers complete.
96120 markers complete.
96180 markers complete.
96240 markers complete.
96300 markers complete.
96360 markers complete.
96420 markers complete.
96480 markers complete.
96540 markers complete.
96600 markers complete.
96660 markers complete.
96720 markers complete.
96780 markers complete.
96840 markers complete.
96900 markers complete.
96960 markers complete.
97020 markers complete.
97080 markers complete.
97140 markers complete.
97200 markers complete.
97260 markers complete.
97320 markers complete.
97380 markers complete.
97440 markers complete.
97500 markers complete.
97560 markers complete.
97620 markers complete.
97680 markers complete.
97740 markers complete.
97800 markers complete.
97860 markers complete.
97920 markers complete.
97980 markers complete.
98040 markers complete.
98100 markers complete.
98160 markers complete.
98220 markers complete.
98280 markers complete.
98340 markers complete.
98400 markers complete.
98460 markers complete.
98520 markers complete.
98580 markers complete.
98640 markers complete.
98700 markers complete.
98760 markers complete.
98820 markers complete.
98880 markers complete.
98940 markers complete.
99000 markers complete.
99060 markers complete.
99120 markers complete.
99180 markers complete.
99240 markers complete.
99300 markers complete.
99360 markers complete.
99420 markers complete.
99480 markers complete.
99540 markers complete.
99600 markers complete.
99660 markers complete.
99720 markers complete.
99780 markers complete.
99840 markers complete.
99900 markers complete.
99960 markers complete.
100020 markers complete.
100080 markers complete.
100140 markers complete.
100200 markers complete.
100260 markers complete.
100320 markers complete.
100380 markers complete.
100440 markers complete.
100500 markers complete.
100560 markers complete.
100620 markers complete.
100680 markers complete.
100740 markers complete.
100800 markers complete.
100860 markers complete.
100920 markers complete.
100980 markers complete.
101040 markers complete.
101100 markers complete.
101160 markers complete.
101220 markers complete.
101280 markers complete.
101340 markers complete.
101400 markers complete.
101460 markers complete.
101520 markers complete.
101580 markers complete.
101640 markers complete.
101700 markers complete.
101760 markers complete.
101820 markers complete.
101880 markers complete.
101940 markers complete.
102000 markers complete.
102060 markers complete.
102120 markers complete.
102180 markers complete.
102240 markers complete.
102300 markers complete.
102360 markers complete.
102420 markers complete.
102480 markers complete.
102540 markers complete.
102600 markers complete.
102660 markers complete.
102720 markers complete.
102780 markers complete.
102840 markers complete.
102900 markers complete.
102960 markers complete.
103020 markers complete.
103080 markers complete.
103140 markers complete.
103200 markers complete.
103260 markers complete.
103320 markers complete.
103380 markers complete.
103440 markers complete.
103500 markers complete.
103560 markers complete.
103620 markers complete.
103680 markers complete.
103740 markers complete.
103800 markers complete.
103860 markers complete.
103920 markers complete.
103980 markers complete.
104040 markers complete.
104100 markers complete.
104160 markers complete.
104220 markers complete.
104280 markers complete.
104340 markers complete.
104400 markers complete.
104460 markers complete.
104520 markers complete.
104580 markers complete.
104640 markers complete.
104700 markers complete.
104760 markers complete.
104820 markers complete.
104880 markers complete.
104940 markers complete.
105000 markers complete.
105060 markers complete.
105120 markers complete.
105180 markers complete.
105240 markers complete.
105300 markers complete.
105360 markers complete.
105420 markers complete.
105480 markers complete.
105540 markers complete.
105600 markers complete.
105660 markers complete.
105720 markers complete.
105780 markers complete.
105840 markers complete.
105900 markers complete.
105960 markers complete.
106020 markers complete.
106080 markers complete.
106140 markers complete.
106200 markers complete.
106260 markers complete.
106320 markers complete.
106380 markers complete.
106440 markers complete.
106500 markers complete.
106560 markers complete.
106620 markers complete.
106680 markers complete.
106740 markers complete.
106800 markers complete.
106860 markers complete.
106920 markers complete.
106980 markers complete.
107040 markers complete.
107100 markers complete.
107160 markers complete.
107220 markers complete.
107280 markers complete.
107340 markers complete.
172878 markers complete.
Relationship matrix calculation complete.
--pca: Results saved to SampleData1/Fold_0/train_data.eigenval and
SampleData1/Fold_0/train_data.eigenvec .
GEMMA 0.98.5 (2021-08-25) by Xiang Zhou, Pjotr Prins and team (C) 2012-2021
Reading Files ... 
## number of total individuals = 380
## number of analyzed individuals = 380
## number of covariates = 1
## number of phenotypes = 1
## number of total SNPs/var        =   172878
## number of analyzed SNPs         =   172878
Calculating Relatedness Matrix ... 
================================================== 100%
**** INFO: Done.
./gemma --bfile SampleData1/Fold_0/train_data.QC.clumped.pruned -beta SampleData1/gemma.txt -k output/SampleData1/Fold_0/gemma-lmm/gemma.cXX.txt -lmm 3 -o SampleData1/Fold_0/gemma-lmm/gemma
GEMMA 0.98.5 (2021-08-25) by Xiang Zhou, Pjotr Prins and team (C) 2012-2021
Reading Files ... 
## number of total individuals = 380
## number of analyzed individuals = 380
## number of covariates = 1
## number of phenotypes = 1
## number of total SNPs/var        =   172878
## number of analyzed SNPs         =   172878
Start Eigen-Decomposition...
pve estimate =0.999937
se(pve) =0.00519386
================================================== 100%
**** INFO: Done.
   chr           rs      ps  n_miss allele1 allele0     af      beta        se   p_score
0    1   rs79373928  801536       0       G       T  0.014 -0.023587  0.287785  0.934549
1    1    rs4970382  840753       0       C       T  0.407 -0.066634  0.066285  0.314783
2    1   rs13303222  849998       0       A       G  0.196 -0.012971  0.086208  0.880170
3    1   rs72631889  851390       0       T       G  0.034  0.310745  0.192044  0.106760
4    1  rs192998324  862772       0       G       A  0.028 -0.253905  0.212404  0.232334
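As a minimal sketch (not part of the pipeline itself), the GEMMA association output shown above can be split into the two inputs PLINK expects: a `--score` file (SNP, effect allele, new BETA) and a `--q-score-range` p-value file. Column names follow the `head()` printed above; the demo rows are copied from it, and `split_gemma_assoc` is a hypothetical helper name.

```python
import pandas as pd

def split_gemma_assoc(assoc: pd.DataFrame):
    """Split a GEMMA .assoc.txt DataFrame into (score_df, pvalue_df)."""
    # Columns for PLINK --score: SNP id, effect allele, LMM-adjusted beta.
    score_df = assoc[["rs", "allele1", "beta"]].copy()
    # Columns for PLINK --q-score-range: SNP id, p-value.
    pvalue_df = assoc[["rs", "p_score"]].copy()
    return score_df, pvalue_df

# Demo rows taken from the GEMMA output above.
demo = pd.DataFrame({
    "rs": ["rs79373928", "rs4970382"],
    "allele1": ["G", "C"],
    "beta": [-0.023587, -0.066634],
    "p_score": [0.934549, 0.314783],
})
score_df, pvalue_df = split_gemma_assoc(demo)
print(score_df.columns.tolist())  # ['rs', 'allele1', 'beta']
print(len(pvalue_df))             # 2
```

In practice both frames would be written with `to_csv(..., sep="\t", index=False)` to the paths passed to PLINK above.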
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/GEMMA-LMM/train_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/train_data.QC.clumped.pruned
  --extract SampleData1/Fold_0/train_data.valid.snp
  --out SampleData1/Fold_0/GEMMA-LMM/train_data
  --q-score-range SampleData1/Fold_0/range_list SampleData1/Fold_0/SNP.pvalue
  --score output/SampleData1/Fold_0/gemma-lmm/gemma.assoc.txt 1 2 3 header

63761 MB RAM detected; reserving 31880 MB for main workspace.
172878 variants loaded from .bim file.
380 people (183 males, 197 females) loaded from .fam.
380 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 380 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 380 people pass filters and QC.
Phenotype data is quantitative.
--score: 172878 valid predictors loaded.
Warning: 326740 lines skipped in --q-score-range data file.
--score: 20 ranges processed.
Results written to SampleData1/Fold_0/GEMMA-LMM/train_data.*.profile.
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/GEMMA-LMM/test_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/test_data.clumped.pruned
  --extract SampleData1/Fold_0/train_data.valid.snp
  --out SampleData1/Fold_0/GEMMA-LMM/test_data
  --q-score-range SampleData1/Fold_0/range_list SampleData1/Fold_0/SNP.pvalue
  --score output/SampleData1/Fold_0/gemma-lmm/gemma.assoc.txt 1 2 3 header

63761 MB RAM detected; reserving 31880 MB for main workspace.
172878 variants loaded from .bim file.
95 people (44 males, 51 females) loaded from .fam.
95 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 95 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 95 people pass filters and QC.
Phenotype data is quantitative.
--score: 172878 valid predictors loaded.
Warning: 326740 lines skipped in --q-score-range data file.
--score: 20 ranges processed.
Results written to SampleData1/Fold_0/GEMMA-LMM/test_data.*.profile.
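The `train_data.*.profile` and `test_data.*.profile` files written above hold one PRS per individual per p-value threshold. A common next step, sketched here with a hypothetical helper and synthetic data (threshold values and the `SCORE` column name are assumptions based on the PLINK runs above), is to pick the threshold whose PRS best correlates with the phenotype:

```python
import numpy as np
import pandas as pd

def best_threshold(profiles: dict, phenotype: pd.Series) -> float:
    """Return the p-value threshold whose SCORE maximizes r^2 with phenotype."""
    r2 = {t: np.corrcoef(df["SCORE"], phenotype)[0, 1] ** 2
          for t, df in profiles.items()}
    return max(r2, key=r2.get)

# Synthetic demo: the 0.5 threshold's PRS tracks the phenotype exactly.
pheno = pd.Series([1.0, 2.0, 3.0, 4.0])
profiles = {
    0.1: pd.DataFrame({"SCORE": [0.3, 0.1, 0.4, 0.2]}),
    0.5: pd.DataFrame({"SCORE": [1.0, 2.0, 3.0, 4.0]}),
}
print(best_threshold(profiles, pheno))  # 0.5
```

For a binary phenotype, AUC (e.g. `sklearn.metrics.roc_auc_score`) would replace the squared correlation.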
Continous Phenotype!
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/train_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/train_data.QC
  --indep-pairwise 200 50 0.25
  --out SampleData1/Fold_0/train_data

63761 MB RAM detected; reserving 31880 MB for main workspace.
491952 variants loaded from .bim file.
380 people (183 males, 197 females) loaded from .fam.
380 phenotype values loaded from .fam.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 380 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999894.
491952 variants and 380 people pass filters and QC.
Phenotype data is quantitative.
Pruned 18860 variants from chromosome 1, leaving 20363.
Pruned 19645 variants from chromosome 2, leaving 20067.
Pruned 16414 variants from chromosome 3, leaving 17080.
Pruned 15404 variants from chromosome 4, leaving 16035.
Pruned 14196 variants from chromosome 5, leaving 15379.
Pruned 19368 variants from chromosome 6, leaving 14770.
Pruned 13110 variants from chromosome 7, leaving 13997.
Pruned 12431 variants from chromosome 8, leaving 12966.
Pruned 9982 variants from chromosome 9, leaving 11477.
Pruned 11999 variants from chromosome 10, leaving 12850.
Pruned 12156 variants from chromosome 11, leaving 12221.
Pruned 10979 variants from chromosome 12, leaving 12050.
Pruned 7923 variants from chromosome 13, leaving 9247.
Pruned 7624 variants from chromosome 14, leaving 8448.
Pruned 7387 variants from chromosome 15, leaving 8145.
Pruned 8063 variants from chromosome 16, leaving 8955.
Pruned 7483 variants from chromosome 17, leaving 8361.
Pruned 6767 variants from chromosome 18, leaving 8240.
Pruned 6438 variants from chromosome 19, leaving 6432.
Pruned 5972 variants from chromosome 20, leaving 7202.
Pruned 3426 variants from chromosome 21, leaving 4102.
Pruned 3801 variants from chromosome 22, leaving 4137.
Pruning complete.  239428 of 491952 variants removed.
Marker lists written to SampleData1/Fold_0/train_data.prune.in and
SampleData1/Fold_0/train_data.prune.out .
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/train_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/train_data.QC
  --clump SampleData1/SampleData1.txt
  --clump-field P
  --clump-kb 200
  --clump-p1 1
  --clump-r2 0.1
  --clump-snp-field SNP
  --extract SampleData1/Fold_0/train_data.prune.in
  --out SampleData1/Fold_0/train_data

63761 MB RAM detected; reserving 31880 MB for main workspace.
491952 variants loaded from .bim file.
380 people (183 males, 197 females) loaded from .fam.
380 phenotype values loaded from .fam.
--extract: 252524 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 380 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999894.
252524 variants and 380 people pass filters and QC.
Phenotype data is quantitative.
--clump: 172878 clumps formed from 252524 top variants.
Results written to SampleData1/Fold_0/train_data.clumped .
Warning: 'rs3134762' is missing from the main dataset, and is a top variant.
Warning: 'rs3132505' is missing from the main dataset, and is a top variant.
Warning: 'rs3130424' is missing from the main dataset, and is a top variant.
247090 more top variant IDs missing; see log file.
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/train_data.QC.clumped.pruned.log.
Options in effect:
  --bfile SampleData1/Fold_0/train_data.QC
  --extract SampleData1/Fold_0/train_data.valid.snp
  --indep-pairwise 200 50 0.25
  --make-bed
  --out SampleData1/Fold_0/train_data.QC.clumped.pruned

63761 MB RAM detected; reserving 31880 MB for main workspace.
491952 variants loaded from .bim file.
380 people (183 males, 197 females) loaded from .fam.
380 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 380 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 380 people pass filters and QC.
Phenotype data is quantitative.
--make-bed to SampleData1/Fold_0/train_data.QC.clumped.pruned.bed +
SampleData1/Fold_0/train_data.QC.clumped.pruned.bim +
SampleData1/Fold_0/train_data.QC.clumped.pruned.fam ... done.
Pruned 2 variants from chromosome 1, leaving 14011.
Pruned 2 variants from chromosome 2, leaving 13811.
Pruned 2 variants from chromosome 3, leaving 11783.
Pruned 0 variants from chromosome 4, leaving 11041.
Pruned 1 variant from chromosome 5, leaving 10631.
Pruned 50 variants from chromosome 6, leaving 10018.
Pruned 0 variants from chromosome 7, leaving 9496.
Pruned 4 variants from chromosome 8, leaving 8863.
Pruned 0 variants from chromosome 9, leaving 7768.
Pruned 5 variants from chromosome 10, leaving 8819.
Pruned 10 variants from chromosome 11, leaving 8410.
Pruned 0 variants from chromosome 12, leaving 8198.
Pruned 0 variants from chromosome 13, leaving 6350.
Pruned 1 variant from chromosome 14, leaving 5741.
Pruned 0 variants from chromosome 15, leaving 5569.
Pruned 2 variants from chromosome 16, leaving 6067.
Pruned 1 variant from chromosome 17, leaving 5722.
Pruned 0 variants from chromosome 18, leaving 5578.
Pruned 0 variants from chromosome 19, leaving 4364.
Pruned 0 variants from chromosome 20, leaving 4916.
Pruned 0 variants from chromosome 21, leaving 2811.
Pruned 0 variants from chromosome 22, leaving 2831.
Pruning complete.  80 of 172878 variants removed.
Marker lists written to
SampleData1/Fold_0/train_data.QC.clumped.pruned.prune.in and
SampleData1/Fold_0/train_data.QC.clumped.pruned.prune.out .
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/test_data.clumped.pruned.log.
Options in effect:
  --bfile SampleData1/Fold_0/test_data
  --extract SampleData1/Fold_0/train_data.valid.snp
  --indep-pairwise 200 50 0.25
  --make-bed
  --out SampleData1/Fold_0/test_data.clumped.pruned

63761 MB RAM detected; reserving 31880 MB for main workspace.
551892 variants loaded from .bim file.
95 people (44 males, 51 females) loaded from .fam.
95 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 95 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 95 people pass filters and QC.
Phenotype data is quantitative.
--make-bed to SampleData1/Fold_0/test_data.clumped.pruned.bed +
SampleData1/Fold_0/test_data.clumped.pruned.bim +
SampleData1/Fold_0/test_data.clumped.pruned.fam ... done.
Pruned 1829 variants from chromosome 1, leaving 12184.
Pruned 1861 variants from chromosome 2, leaving 11952.
Pruned 1567 variants from chromosome 3, leaving 10218.
Pruned 1415 variants from chromosome 4, leaving 9626.
Pruned 1347 variants from chromosome 5, leaving 9285.
Pruned 1291 variants from chromosome 6, leaving 8777.
Pruned 1238 variants from chromosome 7, leaving 8258.
Pruned 1144 variants from chromosome 8, leaving 7723.
Pruned 902 variants from chromosome 9, leaving 6866.
Pruned 1090 variants from chromosome 10, leaving 7734.
Pruned 1036 variants from chromosome 11, leaving 7384.
Pruned 1061 variants from chromosome 12, leaving 7137.
Pruned 771 variants from chromosome 13, leaving 5579.
Pruned 683 variants from chromosome 14, leaving 5059.
Pruned 603 variants from chromosome 15, leaving 4966.
Pruned 710 variants from chromosome 16, leaving 5359.
Pruned 605 variants from chromosome 17, leaving 5118.
Pruned 648 variants from chromosome 18, leaving 4930.
Pruned 384 variants from chromosome 19, leaving 3980.
Pruned 559 variants from chromosome 20, leaving 4357.
Pruned 297 variants from chromosome 21, leaving 2514.
Pruned 276 variants from chromosome 22, leaving 2555.
Pruning complete.  21317 of 172878 variants removed.
Marker lists written to SampleData1/Fold_0/test_data.clumped.pruned.prune.in
and SampleData1/Fold_0/test_data.clumped.pruned.prune.out .
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/test_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/test_data.clumped.pruned
  --extract SampleData1/Fold_0/train_data.valid.snp
  --out SampleData1/Fold_0/test_data
  --pca 6

63761 MB RAM detected; reserving 31880 MB for main workspace.
172878 variants loaded from .bim file.
95 people (44 males, 51 females) loaded from .fam.
95 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using up to 8 threads (change this with --threads).
Before main variant filters, 95 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 95 people pass filters and QC.
Phenotype data is quantitative.
Relationship matrix calculation complete.
--pca: Results saved to SampleData1/Fold_0/test_data.eigenval and
SampleData1/Fold_0/test_data.eigenvec .
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/train_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/train_data.QC.clumped.pruned
  --extract SampleData1/Fold_0/train_data.valid.snp
  --out SampleData1/Fold_0/train_data
  --pca 6

63761 MB RAM detected; reserving 31880 MB for main workspace.
172878 variants loaded from .bim file.
380 people (183 males, 197 females) loaded from .fam.
380 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using up to 8 threads (change this with --threads).
Before main variant filters, 380 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 380 people pass filters and QC.
Phenotype data is quantitative.
Relationship matrix calculation complete.
--pca: Results saved to SampleData1/Fold_0/train_data.eigenval and
SampleData1/Fold_0/train_data.eigenvec .
GEMMA 0.98.5 (2021-08-25) by Xiang Zhou, Pjotr Prins and team (C) 2012-2021
Reading Files ... 
## number of total individuals = 380
## number of analyzed individuals = 380
## number of covariates = 1
## number of phenotypes = 1
## number of total SNPs/var        =   172878
## number of analyzed SNPs         =   172878
Calculating Relatedness Matrix ... 
================================================== 100%
./gemma --bfile SampleData1/Fold_0/train_data.QC.clumped.pruned -beta SampleData1/gemma.txt -k output/SampleData1/Fold_0/gemma-lmm/gemma.sXX.txt -lmm 1 -o SampleData1/Fold_0/gemma-lmm/gemma
GEMMA 0.98.5 (2021-08-25) by Xiang Zhou, Pjotr Prins and team (C) 2012-2021
**** INFO: Done.
Reading Files ... 
## number of total individuals = 380
## number of analyzed individuals = 380
## number of covariates = 1
## number of phenotypes = 1
## number of total SNPs/var        =   172878
## number of analyzed SNPs         =   172878
Start Eigen-Decomposition...
pve estimate =0.99999
se(pve) =0.000240887
================================================== 100%
**** INFO: Done.
   chr           rs      ps  n_miss allele1 allele0     af      beta  \
0    1   rs79373928  801536       0       G       T  0.014 -0.034590   
1    1    rs4970382  840753       0       C       T  0.407 -0.065849   
2    1   rs13303222  849998       0       A       G  0.196 -0.010849   
3    1   rs72631889  851390       0       T       G  0.034  0.318789   
4    1  rs192998324  862772       0       G       A  0.028 -0.252127   

         se   logl_H1   l_remle    p_wald  
0  0.290530 -514.5188  100000.0  0.905293  
1  0.066394 -514.0403  100000.0  0.321934  
2  0.086352 -514.5194  100000.0  0.900088  
3  0.194228 -513.1735  100000.0  0.101565  
4  0.214816 -513.8290  100000.0  0.241260  
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/GEMMA-LMM/train_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/train_data.QC.clumped.pruned
  --extract SampleData1/Fold_0/train_data.valid.snp
  --out SampleData1/Fold_0/GEMMA-LMM/train_data
  --q-score-range SampleData1/Fold_0/range_list SampleData1/Fold_0/SNP.pvalue
  --score output/SampleData1/Fold_0/gemma-lmm/gemma.assoc.txt 1 2 3 header

63761 MB RAM detected; reserving 31880 MB for main workspace.
172878 variants loaded from .bim file.
380 people (183 males, 197 females) loaded from .fam.
380 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 380 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 380 people pass filters and QC.
Phenotype data is quantitative.
--score: 172878 valid predictors loaded.
Warning: 326740 lines skipped in --q-score-range data file.
--score: 20 ranges processed.
Results written to SampleData1/Fold_0/GEMMA-LMM/train_data.*.profile.
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/GEMMA-LMM/test_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/test_data.clumped.pruned
  --extract SampleData1/Fold_0/train_data.valid.snp
  --out SampleData1/Fold_0/GEMMA-LMM/test_data
  --q-score-range SampleData1/Fold_0/range_list SampleData1/Fold_0/SNP.pvalue
  --score output/SampleData1/Fold_0/gemma-lmm/gemma.assoc.txt 1 2 3 header

63761 MB RAM detected; reserving 31880 MB for main workspace.
172878 variants loaded from .bim file.
95 people (44 males, 51 females) loaded from .fam.
95 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 95 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 95 people pass filters and QC.
Phenotype data is quantitative.
--score: 172878 valid predictors loaded.
Warning: 326740 lines skipped in --q-score-range data file.
--score: 20 ranges processed.
Results written to SampleData1/Fold_0/GEMMA-LMM/test_data.*.profile.
Continous Phenotype!
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/train_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/train_data.QC
  --indep-pairwise 200 50 0.25
  --out SampleData1/Fold_0/train_data

63761 MB RAM detected; reserving 31880 MB for main workspace.
491952 variants loaded from .bim file.
380 people (183 males, 197 females) loaded from .fam.
380 phenotype values loaded from .fam.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 380 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999894.
491952 variants and 380 people pass filters and QC.
Phenotype data is quantitative.
Pruned 18860 variants from chromosome 1, leaving 20363.
Pruned 19645 variants from chromosome 2, leaving 20067.
Pruned 16414 variants from chromosome 3, leaving 17080.
Pruned 15404 variants from chromosome 4, leaving 16035.
Pruned 14196 variants from chromosome 5, leaving 15379.
Pruned 19368 variants from chromosome 6, leaving 14770.
Pruned 13110 variants from chromosome 7, leaving 13997.
Pruned 12431 variants from chromosome 8, leaving 12966.
Pruned 9982 variants from chromosome 9, leaving 11477.
Pruned 11999 variants from chromosome 10, leaving 12850.
Pruned 12156 variants from chromosome 11, leaving 12221.
Pruned 10979 variants from chromosome 12, leaving 12050.
Pruned 7923 variants from chromosome 13, leaving 9247.
Pruned 7624 variants from chromosome 14, leaving 8448.
Pruned 7387 variants from chromosome 15, leaving 8145.
Pruned 8063 variants from chromosome 16, leaving 8955.
Pruned 7483 variants from chromosome 17, leaving 8361.
Pruned 6767 variants from chromosome 18, leaving 8240.
Pruned 6438 variants from chromosome 19, leaving 6432.
Pruned 5972 variants from chromosome 20, leaving 7202.
Pruned 3426 variants from chromosome 21, leaving 4102.
Pruned 3801 variants from chromosome 22, leaving 4137.
Pruning complete.  239428 of 491952 variants removed.
Marker lists written to SampleData1/Fold_0/train_data.prune.in and
SampleData1/Fold_0/train_data.prune.out .
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/train_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/train_data.QC
  --clump SampleData1/SampleData1.txt
  --clump-field P
  --clump-kb 200
  --clump-p1 1
  --clump-r2 0.1
  --clump-snp-field SNP
  --extract SampleData1/Fold_0/train_data.prune.in
  --out SampleData1/Fold_0/train_data

63761 MB RAM detected; reserving 31880 MB for main workspace.
491952 variants loaded from .bim file.
380 people (183 males, 197 females) loaded from .fam.
380 phenotype values loaded from .fam.
--extract: 252524 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 380 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999894.
252524 variants and 380 people pass filters and QC.
Phenotype data is quantitative.
--clump: 172878 clumps formed from 252524 top variants.
Results written to SampleData1/Fold_0/train_data.clumped .
Warning: 'rs3134762' is missing from the main dataset, and is a top variant.
Warning: 'rs3132505' is missing from the main dataset, and is a top variant.
Warning: 'rs3130424' is missing from the main dataset, and is a top variant.
247090 more top variant IDs missing; see log file.
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/train_data.QC.clumped.pruned.log.
Options in effect:
  --bfile SampleData1/Fold_0/train_data.QC
  --extract SampleData1/Fold_0/train_data.valid.snp
  --indep-pairwise 200 50 0.25
  --make-bed
  --out SampleData1/Fold_0/train_data.QC.clumped.pruned

63761 MB RAM detected; reserving 31880 MB for main workspace.
491952 variants loaded from .bim file.
380 people (183 males, 197 females) loaded from .fam.
380 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 380 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 380 people pass filters and QC.
Phenotype data is quantitative.
--make-bed to SampleData1/Fold_0/train_data.QC.clumped.pruned.bed +
SampleData1/Fold_0/train_data.QC.clumped.pruned.bim +
SampleData1/Fold_0/train_data.QC.clumped.pruned.fam ... done.
Pruned 2 variants from chromosome 1, leaving 14011.
Pruned 2 variants from chromosome 2, leaving 13811.
Pruned 2 variants from chromosome 3, leaving 11783.
Pruned 0 variants from chromosome 4, leaving 11041.
Pruned 1 variant from chromosome 5, leaving 10631.
Pruned 50 variants from chromosome 6, leaving 10018.
Pruned 0 variants from chromosome 7, leaving 9496.
Pruned 4 variants from chromosome 8, leaving 8863.
Pruned 0 variants from chromosome 9, leaving 7768.
Pruned 5 variants from chromosome 10, leaving 8819.
Pruned 10 variants from chromosome 11, leaving 8410.
Pruned 0 variants from chromosome 12, leaving 8198.
Pruned 0 variants from chromosome 13, leaving 6350.
Pruned 1 variant from chromosome 14, leaving 5741.
Pruned 0 variants from chromosome 15, leaving 5569.
Pruned 2 variants from chromosome 16, leaving 6067.
Pruned 1 variant from chromosome 17, leaving 5722.
Pruned 0 variants from chromosome 18, leaving 5578.
Pruned 0 variants from chromosome 19, leaving 4364.
Pruned 0 variants from chromosome 20, leaving 4916.
Pruned 0 variants from chromosome 21, leaving 2811.
Pruned 0 variants from chromosome 22, leaving 2831.
Pruning complete.  80 of 172878 variants removed.
Marker lists written to
SampleData1/Fold_0/train_data.QC.clumped.pruned.prune.in and
SampleData1/Fold_0/train_data.QC.clumped.pruned.prune.out .
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/test_data.clumped.pruned.log.
Options in effect:
  --bfile SampleData1/Fold_0/test_data
  --extract SampleData1/Fold_0/train_data.valid.snp
  --indep-pairwise 200 50 0.25
  --make-bed
  --out SampleData1/Fold_0/test_data.clumped.pruned

63761 MB RAM detected; reserving 31880 MB for main workspace.
551892 variants loaded from .bim file.
95 people (44 males, 51 females) loaded from .fam.
95 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 95 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 95 people pass filters and QC.
Phenotype data is quantitative.
--make-bed to SampleData1/Fold_0/test_data.clumped.pruned.bed +
SampleData1/Fold_0/test_data.clumped.pruned.bim +
SampleData1/Fold_0/test_data.clumped.pruned.fam ... done.
Pruned 1829 variants from chromosome 1, leaving 12184.
Pruned 1861 variants from chromosome 2, leaving 11952.
Pruned 1567 variants from chromosome 3, leaving 10218.
Pruned 1415 variants from chromosome 4, leaving 9626.
Pruned 1347 variants from chromosome 5, leaving 9285.
Pruned 1291 variants from chromosome 6, leaving 8777.
Pruned 1238 variants from chromosome 7, leaving 8258.
Pruned 1144 variants from chromosome 8, leaving 7723.
Pruned 902 variants from chromosome 9, leaving 6866.
Pruned 1090 variants from chromosome 10, leaving 7734.
Pruned 1036 variants from chromosome 11, leaving 7384.
Pruned 1061 variants from chromosome 12, leaving 7137.
Pruned 771 variants from chromosome 13, leaving 5579.
Pruned 683 variants from chromosome 14, leaving 5059.
Pruned 603 variants from chromosome 15, leaving 4966.
Pruned 710 variants from chromosome 16, leaving 5359.
Pruned 605 variants from chromosome 17, leaving 5118.
Pruned 648 variants from chromosome 18, leaving 4930.
Pruned 384 variants from chromosome 19, leaving 3980.
Pruned 559 variants from chromosome 20, leaving 4357.
Pruned 297 variants from chromosome 21, leaving 2514.
Pruned 276 variants from chromosome 22, leaving 2555.
Pruning complete.  21317 of 172878 variants removed.
Marker lists written to SampleData1/Fold_0/test_data.clumped.pruned.prune.in
and SampleData1/Fold_0/test_data.clumped.pruned.prune.out .
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/test_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/test_data.clumped.pruned
  --extract SampleData1/Fold_0/train_data.valid.snp
  --out SampleData1/Fold_0/test_data
  --pca 6

63761 MB RAM detected; reserving 31880 MB for main workspace.
172878 variants loaded from .bim file.
95 people (44 males, 51 females) loaded from .fam.
95 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using up to 8 threads (change this with --threads).
Before main variant filters, 95 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 95 people pass filters and QC.
Phenotype data is quantitative.
Relationship matrix calculation complete.
--pca: Results saved to SampleData1/Fold_0/test_data.eigenval and
SampleData1/Fold_0/test_data.eigenvec .
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/train_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/train_data.QC.clumped.pruned
  --extract SampleData1/Fold_0/train_data.valid.snp
  --out SampleData1/Fold_0/train_data
  --pca 6

63761 MB RAM detected; reserving 31880 MB for main workspace.
172878 variants loaded from .bim file.
380 people (183 males, 197 females) loaded from .fam.
380 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using up to 8 threads (change this with --threads).
Before main variant filters, 380 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 380 people pass filters and QC.
Phenotype data is quantitative.
Relationship matrix calculation complete.
--pca: Results saved to SampleData1/Fold_0/train_data.eigenval and
SampleData1/Fold_0/train_data.eigenvec .
GEMMA 0.98.5 (2021-08-25) by Xiang Zhou, Pjotr Prins and team (C) 2012-2021
Reading Files ... 
## number of total individuals = 380
## number of analyzed individuals = 380
## number of covariates = 1
## number of phenotypes = 1
## number of total SNPs/var        =   172878
## number of analyzed SNPs         =   172878
Calculating Relatedness Matrix ... 
================================================== 100%
./gemma --bfile SampleData1/Fold_0/train_data.QC.clumped.pruned -beta SampleData1/gemma.txt -k output/SampleData1/Fold_0/gemma-lmm/gemma.sXX.txt -lmm 3 -o SampleData1/Fold_0/gemma-lmm/gemma
GEMMA 0.98.5 (2021-08-25) by Xiang Zhou, Pjotr Prins and team (C) 2012-2021
Reading Files ... 
**** INFO: Done.
## number of total individuals = 380
## number of analyzed individuals = 380
## number of covariates = 1
## number of phenotypes = 1
## number of total SNPs/var        =   172878
## number of analyzed SNPs         =   172878
Start Eigen-Decomposition...
pve estimate =0.99999
se(pve) =0.000240887
================================================== 100%
**** INFO: Done.
   chr           rs      ps  n_miss allele1 allele0     af      beta  \
0    1   rs79373928  801536       0       G       T  0.014 -0.034590   
1    1    rs4970382  840753       0       C       T  0.407 -0.065849   
2    1   rs13303222  849998       0       A       G  0.196 -0.010849   
3    1   rs72631889  851390       0       T       G  0.034  0.318789   
4    1  rs192998324  862772       0       G       A  0.028 -0.252127   

         se   p_score  
0  0.290530  0.905046  
1  0.066394  0.321287  
2  0.086352  0.899828  
3  0.194228  0.101876  
4  0.214816  0.240875  
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/GEMMA-LMM/train_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/train_data.QC.clumped.pruned
  --extract SampleData1/Fold_0/train_data.valid.snp
  --out SampleData1/Fold_0/GEMMA-LMM/train_data
  --q-score-range SampleData1/Fold_0/range_list SampleData1/Fold_0/SNP.pvalue
  --score output/SampleData1/Fold_0/gemma-lmm/gemma.assoc.txt 1 2 3 header

63761 MB RAM detected; reserving 31880 MB for main workspace.
172878 variants loaded from .bim file.
380 people (183 males, 197 females) loaded from .fam.
380 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 380 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 380 people pass filters and QC.
Phenotype data is quantitative.
--score: 172878 valid predictors loaded.
Warning: 326740 lines skipped in --q-score-range data file.
--score: 20 ranges processed.
Results written to SampleData1/Fold_0/GEMMA-LMM/train_data.*.profile.
PLINK v1.90b7.2 64-bit (11 Dec 2023)           www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang   GNU General Public License v3
Logging to SampleData1/Fold_0/GEMMA-LMM/test_data.log.
Options in effect:
  --bfile SampleData1/Fold_0/test_data.clumped.pruned
  --extract SampleData1/Fold_0/train_data.valid.snp
  --out SampleData1/Fold_0/GEMMA-LMM/test_data
  --q-score-range SampleData1/Fold_0/range_list SampleData1/Fold_0/SNP.pvalue
  --score output/SampleData1/Fold_0/gemma-lmm/gemma.assoc.txt 1 2 3 header

63761 MB RAM detected; reserving 31880 MB for main workspace.
172878 variants loaded from .bim file.
95 people (44 males, 51 females) loaded from .fam.
95 phenotype values loaded from .fam.
--extract: 172878 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 95 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is 0.999891.
172878 variants and 95 people pass filters and QC.
Phenotype data is quantitative.
--score: 172878 valid predictors loaded.
Warning: 326740 lines skipped in --q-score-range data file.
--score: 20 ranges processed.
Results written to SampleData1/Fold_0/GEMMA-LMM/test_data.*.profile.
Continous Phenotype!

Repeat the process for each fold.#

Change the foldnumber variable to select which fold to process:

#foldnumber = sys.argv[1]
foldnumber = "0"  # process only fold 0

Or uncomment the following line to read the fold number from the command line:

# foldnumber = sys.argv[1]

Then run the script once per fold:
python GEMMA-LMM.py 0
python GEMMA-LMM.py 1
python GEMMA-LMM.py 2
python GEMMA-LMM.py 3
python GEMMA-LMM.py 4

The following files should exist after the execution:

  1. SampleData1/Fold_0/GEMMA-LMM/Results.csv

  2. SampleData1/Fold_1/GEMMA-LMM/Results.csv

  3. SampleData1/Fold_2/GEMMA-LMM/Results.csv

  4. SampleData1/Fold_3/GEMMA-LMM/Results.csv

  5. SampleData1/Fold_4/GEMMA-LMM/Results.csv

Check the results file for each fold.#

import os
import pandas as pd

# List of file names to check for existence
# (result_directory is defined earlier in the notebook, e.g. "GEMMA-LMM")
f = [
    "./"+filedirec+"/Fold_0"+os.sep+result_directory+os.sep+"Results.csv",
    "./"+filedirec+"/Fold_1"+os.sep+result_directory+os.sep+"Results.csv",
    "./"+filedirec+"/Fold_2"+os.sep+result_directory+os.sep+"Results.csv",
    "./"+filedirec+"/Fold_3"+os.sep+result_directory+os.sep+"Results.csv",
    "./"+filedirec+"/Fold_4"+os.sep+result_directory+os.sep+"Results.csv",
]

 

# Loop through each file name in the list
for loop in range(0,5):
    # Check if the file exists in the specified directory for the given fold
    if os.path.exists(filedirec+os.sep+"Fold_"+str(loop)+os.sep+result_directory+os.sep+"Results.csv"):
        temp = pd.read_csv(filedirec+os.sep+"Fold_"+str(loop)+os.sep+result_directory+os.sep+"Results.csv")
        print("Fold_",loop, "Yes, the file exists.")
        #print(temp.head())
        print("Number of P-values processed: ",len(temp))
        # Print a message indicating that the file exists
    
    else:
        # Print a message indicating that the file does not exist
        print("Fold_",loop, "No, the file does not exist.")
Fold_ 0 Yes, the file exists.
Number of P-values processed:  80
Fold_ 1 Yes, the file exists.
Number of P-values processed:  80
Fold_ 2 Yes, the file exists.
Number of P-values processed:  80
Fold_ 3 Yes, the file exists.
Number of P-values processed:  80
Fold_ 4 Yes, the file exists.
Number of P-values processed:  80

Sum the results for each fold.#

print("We have to ensure when we sum the entries across all Folds, the same rows are merged!")

def sum_and_average_columns(data_frames):
    """Sum and average numerical columns across multiple DataFrames, and keep non-numerical columns unchanged."""
    # Initialize DataFrame to store the summed results for numerical columns
    summed_df = pd.DataFrame()
    non_numerical_df = pd.DataFrame()
    
    for df in data_frames:
        # Identify numerical and non-numerical columns
        numerical_cols = df.select_dtypes(include=[np.number]).columns
        non_numerical_cols = df.select_dtypes(exclude=[np.number]).columns
        
        # Sum numerical columns
        if summed_df.empty:
            summed_df = pd.DataFrame(0, index=range(len(df)), columns=numerical_cols)
        
        summed_df[numerical_cols] = summed_df[numerical_cols].add(df[numerical_cols], fill_value=0)
        
        # Keep non-numerical columns (take the first non-missing entry for each column)
        if non_numerical_df.empty:
            # Use .copy() so later assignments do not raise SettingWithCopyWarning
            non_numerical_df = df[non_numerical_cols].copy()
        else:
            non_numerical_df[non_numerical_cols] = non_numerical_df[non_numerical_cols].combine_first(df[non_numerical_cols])
    
    # Divide the summed values by the number of dataframes to get the average
    averaged_df = summed_df / len(data_frames)
    
    # Combine numerical and non-numerical DataFrames
    result_df = pd.concat([averaged_df, non_numerical_df], axis=1)
    
    return result_df
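As a quick illustration of the averaging step, here is a toy example with invented values showing how numeric columns are averaged element-wise across folds while string columns (identical in every fold) are carried through unchanged:

```python
import numpy as np
import pandas as pd

# Two toy "fold" result frames with identical hyperparameter rows
fold1 = pd.DataFrame({"pvalue": [0.05, 1.0],
                      "Test_best_model": [0.10, 0.20],
                      "gemmamodel": ["lmm", "lmm"]})
fold2 = pd.DataFrame({"pvalue": [0.05, 1.0],
                      "Test_best_model": [0.30, 0.40],
                      "gemmamodel": ["lmm", "lmm"]})

frames = [fold1, fold2]
numeric = [df.select_dtypes(include=[np.number]) for df in frames]
averaged = sum(numeric) / len(frames)                    # element-wise mean
strings = frames[0].select_dtypes(exclude=[np.number])   # same in every fold
result = pd.concat([averaged, strings], axis=1)
print(result)   # numeric columns are averaged; gemmamodel stays "lmm"
```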

import os
import pandas as pd

def find_common_rows(allfoldsframe):
    # Define the performance columns that need to be excluded
    performance_columns = [
        'Train_null_model', 'Train_pure_prs', 'Train_best_model',
        'Test_pure_prs', 'Test_null_model', 'Test_best_model'
    ]
    important_columns = [
        'clump_p1',
        'clump_r2',
        'clump_kb',
        'p_window_size',
        'p_slide_size',
        'p_LD_threshold',
        'pvalue',
        'referencepanel',
        'gemmamodel',
        'relatedmatrixname',
        'lmmmodel',
        'numberofpca',
        'tempalpha',
        'l1weight',
    ]
    # Function to remove performance columns from a DataFrame
    def drop_performance_columns(df):
        return df.drop(columns=performance_columns, errors='ignore')
    
    def get_important_columns(df ):
        existing_columns = [col for col in important_columns if col in df.columns]
        if existing_columns:
            return df[existing_columns].copy()
        else:
            return pd.DataFrame()

    # Drop performance columns from all DataFrames in the list
    allfoldsframe_dropped = [drop_performance_columns(df) for df in allfoldsframe]
    
    # Get the important columns.
    allfoldsframe_dropped = [get_important_columns(df) for df in allfoldsframe_dropped]    
    
    # Iteratively find common rows and track unique and common rows
    common_rows = allfoldsframe_dropped[0]
    for i in range(1, len(allfoldsframe_dropped)):
        # Get the next DataFrame
        next_df = allfoldsframe_dropped[i]

        # Count unique rows in the current DataFrame and the next DataFrame
        unique_in_common = common_rows.shape[0]
        unique_in_next = next_df.shape[0]

        # Find common rows between the current common_rows and the next DataFrame
        common_rows = pd.merge(common_rows, next_df, how='inner')
    
        # Count the common rows after merging
        common_count = common_rows.shape[0]

        # Print the unique and common row counts
        print(f"Iteration {i}:")
        print(f"Unique rows in current common DataFrame: {unique_in_common}")
        print(f"Unique rows in next DataFrame: {unique_in_next}")
        print(f"Common rows after merge: {common_count}\n")
    # Now that we have the common rows, extract these from the original DataFrames
 
    extracted_common_rows_frames = []
    for original_df in allfoldsframe:
        # Merge the common rows with the original DataFrame, keeping only the rows that match the common rows
        extracted_common_rows = pd.merge(common_rows, original_df, how='inner', on=common_rows.columns.tolist())
        
        # Add the DataFrame with the extracted common rows to the list
        extracted_common_rows_frames.append(extracted_common_rows)

    # Print the number of rows in the common DataFrames
    for i, df in enumerate(extracted_common_rows_frames):
        print(f"DataFrame {i + 1} with extracted common rows has {df.shape[0]} rows.")

    # Return the list of DataFrames with extracted common rows
    return extracted_common_rows_frames
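The key operation above is `pd.merge(..., how='inner')`, which keeps only the hyperparameter rows present in every fold. A toy example with invented values:

```python
import pandas as pd

fold_a = pd.DataFrame({"pvalue": [0.01, 0.05, 1.0], "lmmmodel": [1, 1, 3]})
fold_b = pd.DataFrame({"pvalue": [0.05, 1.0], "lmmmodel": [1, 3]})

# With no 'on' argument, merge joins on all shared columns, so only
# hyperparameter combinations occurring in both frames survive.
common = pd.merge(fold_a, fold_b, how="inner")
print(common)   # two rows: (0.05, 1) and (1.0, 3); (0.01, 1) is dropped
```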



# Example usage (assuming allfoldsframe is populated as shown earlier):
allfoldsframe = []

# Loop through each file name in the list
for loop in range(0, 5):
    # Check if the file exists in the specified directory for the given fold
    file_path = os.path.join(filedirec, "Fold_" + str(loop), result_directory, "Results.csv")
    if os.path.exists(file_path):
        temp = pd.read_csv(file_path)
 
        allfoldsframe.append(temp)
        
        
        # Print a message indicating that the file exists
        print("Fold_", loop, "Yes, the file exists.")
    else:
        # Print a message indicating that the file does not exist
        print("Fold_", loop, "No, the file does not exist.")

# Find the common rows across all folds and return the list of extracted common rows
extracted_common_rows_list = find_common_rows(allfoldsframe)
 
# Sum the values column-wise.
# For string values, do not sum them; the values are the same for each fold.
# Only sum the numeric values.

divided_result = sum_and_average_columns(extracted_common_rows_list)
  
print(divided_result)

 
We have to ensure when we sum the entries across all Folds, the same rows are merged!
Fold_ 0 Yes, the file exists.
Fold_ 1 Yes, the file exists.
Fold_ 2 Yes, the file exists.
Fold_ 3 Yes, the file exists.
Fold_ 4 Yes, the file exists.
Iteration 1:
Unique rows in current common DataFrame: 80
Unique rows in next DataFrame: 80
Common rows after merge: 80

Iteration 2:
Unique rows in current common DataFrame: 80
Unique rows in next DataFrame: 80
Common rows after merge: 80

Iteration 3:
Unique rows in current common DataFrame: 80
Unique rows in next DataFrame: 80
Common rows after merge: 80

Iteration 4:
Unique rows in current common DataFrame: 80
Unique rows in next DataFrame: 80
Common rows after merge: 80

DataFrame 1 with extracted common rows has 80 rows.
DataFrame 2 with extracted common rows has 80 rows.
DataFrame 3 with extracted common rows has 80 rows.
DataFrame 4 with extracted common rows has 80 rows.
DataFrame 5 with extracted common rows has 80 rows.
    clump_p1  clump_r2  clump_kb  p_window_size  p_slide_size  p_LD_threshold  \
0        1.0       0.1     200.0          200.0          50.0            0.25   
1        1.0       0.1     200.0          200.0          50.0            0.25   
2        1.0       0.1     200.0          200.0          50.0            0.25   
3        1.0       0.1     200.0          200.0          50.0            0.25   
4        1.0       0.1     200.0          200.0          50.0            0.25   
..       ...       ...       ...            ...           ...             ...   
75       1.0       0.1     200.0          200.0          50.0            0.25   
76       1.0       0.1     200.0          200.0          50.0            0.25   
77       1.0       0.1     200.0          200.0          50.0            0.25   
78       1.0       0.1     200.0          200.0          50.0            0.25   
79       1.0       0.1     200.0          200.0          50.0            0.25   

          pvalue  lmmmodel  numberofpca  tempalpha  l1weight  Train_pure_prs  \
0   1.000000e-10       1.0          6.0        0.1       0.1        0.002511   
1   3.359818e-10       1.0          6.0        0.1       0.1        0.002389   
2   1.128838e-09       1.0          6.0        0.1       0.1        0.002394   
3   3.792690e-09       1.0          6.0        0.1       0.1        0.002256   
4   1.274275e-08       1.0          6.0        0.1       0.1        0.002217   
..           ...       ...          ...        ...       ...             ...   
75  7.847600e-03       3.0          6.0        0.1       0.1        0.002688   
76  2.636651e-02       3.0          6.0        0.1       0.1        0.002669   
77  8.858668e-02       3.0          6.0        0.1       0.1        0.002672   
78  2.976351e-01       3.0          6.0        0.1       0.1        0.002655   
79  1.000000e+00       3.0          6.0        0.1       0.1        0.002637   

    Train_null_model  Train_best_model  Test_pure_prs  Test_null_model  \
0            0.23001          0.512298      -0.000184         0.118692   
1            0.23001          0.537343       0.000061         0.118692   
2            0.23001          0.576171       0.000179         0.118692   
3            0.23001          0.604608       0.000094         0.118692   
4            0.23001          0.628672      -0.000123         0.118692   
..               ...               ...            ...              ...   
75           0.23001          0.986182       0.000068         0.118692   
76           0.23001          0.991385       0.000041         0.118692   
77           0.23001          0.995607       0.000054         0.118692   
78           0.23001          0.998280       0.000040         0.118692   
79           0.23001          0.999995       0.000026         0.118692   

    Test_best_model gemmamodel relatedmatrixname  
0         -0.097182        lmm          centered  
1         -0.069128        lmm          centered  
2         -0.060050        lmm          centered  
3         -0.073853        lmm          centered  
4         -0.043680        lmm          centered  
..              ...        ...               ...  
75         0.029503        lmm      standardized  
76         0.016414        lmm      standardized  
77         0.028479        lmm      standardized  
78         0.021569        lmm      standardized  
79         0.014434        lmm      standardized  

[80 rows x 19 columns]
/tmp/ipykernel_72060/1042617485.py:24: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead

See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
  non_numerical_df[non_numerical_cols] = non_numerical_df[non_numerical_cols].combine_first(df[non_numerical_cols])

Results#

1. Reporting Based on Best Training Performance:#

  • One can report the results based on the best performance of the training data. For example, if for a specific combination of hyperparameters, the training performance is high, report the corresponding test performance.

  • Example code:

    df = divided_result.sort_values(by='Train_best_model', ascending=False)
    print(df.iloc[0].to_markdown())
    

Binary Phenotypes Result Analysis#

You can find the performance quality for binary phenotype using the following template:

PerformanceBinary

This figure shows the 8 different scenarios that can exist in the results, and the following table explains each scenario.

We classified performance based on the following table:

| Performance Level    | Range      |
|:---------------------|:-----------|
| Low Performance      | 0 to 0.5   |
| Moderate Performance | 0.6 to 0.7 |
| High Performance     | 0.8 to 1   |
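The banding above can be expressed as a small helper. This is a sketch: the table leaves the range between 0.5 and 0.6 unassigned, and treating it as low performance here is an assumption.

```python
def performance_level(score: float) -> str:
    """Map a model metric (e.g. AUC for binary phenotypes) to a performance band."""
    if score >= 0.8:
        return "High Performance"
    if score >= 0.6:
        return "Moderate Performance"
    # Scores below 0.6 (including the 0.5-0.6 gap in the table) land here.
    return "Low Performance"

print(performance_level(0.65))   # Moderate Performance
print(performance_level(0.45))   # Low Performance
```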

You can match the performance based on the following scenarios:

| Scenario | What’s Happening | Implication |
|:---|:---|:---|
| High Test, High Train | The model performs well on both training and test datasets, effectively learning the underlying patterns. | The model is well-tuned, generalizes well, and makes accurate predictions on both datasets. |
| High Test, Moderate Train | The model generalizes well but may not be fully optimized on training data, missing some underlying patterns. | The model is fairly robust but may benefit from further tuning or more training to improve its learning. |
| High Test, Low Train | An unusual scenario, potentially indicating data leakage or overestimation of test performance. | The model’s performance is likely unreliable; investigate potential data issues or random noise. |
| Moderate Test, High Train | The model fits the training data well but doesn’t generalize as effectively, capturing only some test patterns. | The model is slightly overfitting; adjustments may be needed to improve generalization on unseen data. |
| Moderate Test, Moderate Train | The model shows balanced but moderate performance on both datasets, capturing some patterns but missing others. | The model is moderately fitting; further improvements could be made in both training and generalization. |
| Moderate Test, Low Train | The model underperforms on training data and doesn’t generalize well, leading to moderate test performance. | The model may need more complexity, additional features, or better training to improve on both datasets. |
| Low Test, High Train | The model overfits the training data, performing poorly on the test set. | The model doesn’t generalize well; simplifying the model or using regularization may help reduce overfitting. |
| Low Test, Low Train | The model performs poorly on both training and test datasets, failing to learn the data patterns effectively. | The model is underfitting; it may need more complexity, additional features, or more data to improve performance. |

Recommendations for Publishing Results#

When publishing results, scenarios with moderate train and moderate test performance can be used for complex phenotypes or diseases. However, results showing high train and moderate test, high train and high test, and moderate train and high test are recommended.

For most phenotypes, results typically fall in the moderate train and moderate test performance category.
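Following these recommendations, candidate rows can be filtered so that both train and test performance reach at least the moderate band. A sketch with invented values; the column names follow the Results.csv frames used above:

```python
import pandas as pd

toy = pd.DataFrame({
    "pvalue":           [1e-8, 1e-4, 0.05, 1.0],
    "Train_best_model": [0.55, 0.72, 0.85, 0.99],
    "Test_best_model":  [0.20, 0.65, 0.70, 0.10],
})

# Keep hyperparameter rows where both train and test are at least moderate (>= 0.6)
publishable = toy[(toy["Train_best_model"] >= 0.6) & (toy["Test_best_model"] >= 0.6)]
print(publishable)   # the rows with pvalue 1e-4 and 0.05
```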

Continuous Phenotypes Result Analysis#

You can find the performance quality for continuous phenotypes using the following template:

PerformanceContinous

This figure shows the 8 different scenarios that can exist in the results, and the following table explains each scenario.

We classified performance based on the following table:

| Performance Level    | Range      |
|:---------------------|:-----------|
| Low Performance      | 0 to 0.2   |
| Moderate Performance | 0.3 to 0.7 |
| High Performance     | 0.8 to 1   |

You can match the performance based on the following scenarios:

| Scenario | What’s Happening | Implication |
|:---|:---|:---|
| High Test, High Train | The model performs well on both training and test datasets, effectively learning the underlying patterns. | The model is well-tuned, generalizes well, and makes accurate predictions on both datasets. |
| High Test, Moderate Train | The model generalizes well but may not be fully optimized on training data, missing some underlying patterns. | The model is fairly robust but may benefit from further tuning or more training to improve its learning. |
| High Test, Low Train | An unusual scenario, potentially indicating data leakage or overestimation of test performance. | The model’s performance is likely unreliable; investigate potential data issues or random noise. |
| Moderate Test, High Train | The model fits the training data well but doesn’t generalize as effectively, capturing only some test patterns. | The model is slightly overfitting; adjustments may be needed to improve generalization on unseen data. |
| Moderate Test, Moderate Train | The model shows balanced but moderate performance on both datasets, capturing some patterns but missing others. | The model is moderately fitting; further improvements could be made in both training and generalization. |
| Moderate Test, Low Train | The model underperforms on training data and doesn’t generalize well, leading to moderate test performance. | The model may need more complexity, additional features, or better training to improve on both datasets. |
| Low Test, High Train | The model overfits the training data, performing poorly on the test set. | The model doesn’t generalize well; simplifying the model or using regularization may help reduce overfitting. |
| Low Test, Low Train | The model performs poorly on both training and test datasets, failing to learn the data patterns effectively. | The model is underfitting; it may need more complexity, additional features, or more data to improve performance. |

Recommendations for Publishing Results#

When publishing results, scenarios with moderate train and moderate test performance can be used for complex phenotypes or diseases. However, results showing high train and moderate test, high train and high test, and moderate train and high test are recommended.

For most continuous phenotypes, results typically fall in the moderate train and moderate test performance category.

2. Reporting Generalized Performance:#

  • One can also report the generalized performance by calculating the difference between the training and test performance, and the sum of the test and training performance. Report the result or hyperparameter combination for which the sum is high and the difference is minimal.

  • Example code:

    df = divided_result.copy()
    df['Difference'] = abs(df['Train_best_model'] - df['Test_best_model'])
    df['Sum'] = df['Train_best_model'] + df['Test_best_model']
    
    sorted_df = df.sort_values(by=['Sum', 'Difference'], ascending=[False, True])
    print(sorted_df.iloc[0].to_markdown())
    

3. Reporting Hyperparameters Affecting Test and Train Performance:#

  • Find the hyperparameters that have more than one unique value and calculate their correlation with the following columns to understand how they are affecting the performance of train and test sets:

    • Train_null_model

    • Train_pure_prs

    • Train_best_model

    • Test_pure_prs

    • Test_null_model

    • Test_best_model

4. Other Analysis#

  1. Once you have the results, you can find how hyperparameters affect the model performance.

  2. Analysis, like overfitting and underfitting, can be performed as well.

  3. The way you are going to report the results can vary.

  4. Results can be visualized, and other patterns in the data can be explored.

import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib notebook

df = divided_result.sort_values(by='Train_best_model', ascending=False)
print("1. Reporting Based on Best Training Performance:\n")
print(df.iloc[0].to_markdown())


 
df = divided_result.copy()

# Plot Train and Test best models against p-values
plt.figure(figsize=(10, 6))
plt.plot(df['pvalue'], df['Train_best_model'], label='Train_best_model', marker='o', color='royalblue')
plt.plot(df['pvalue'], df['Test_best_model'], label='Test_best_model', marker='o', color='darkorange')

# Highlight the p-value with the best training performance
best_index = df['Train_best_model'].idxmax()
best_pvalue = df.loc[best_index, 'pvalue']
best_train = df.loc[best_index, 'Train_best_model']
best_test = df.loc[best_index, 'Test_best_model']

# Use dark colors for the circles
plt.scatter(best_pvalue, best_train, color='darkred', s=100, label=f'Best Performance (Train)', edgecolor='black', zorder=5)
plt.scatter(best_pvalue, best_test, color='darkblue', s=100, label=f'Best Performance (Test)', edgecolor='black', zorder=5)

# Annotate the best performance with p-value, train, and test values
plt.text(best_pvalue, best_train, f'p={best_pvalue:.4g}\nTrain={best_train:.4g}', ha='right', va='bottom', fontsize=9, color='darkred')
plt.text(best_pvalue, best_test, f'p={best_pvalue:.4g}\nTest={best_test:.4g}', ha='right', va='top', fontsize=9, color='darkblue')

# Calculate Difference and Sum
df['Difference'] = abs(df['Train_best_model'] - df['Test_best_model'])
df['Sum'] = df['Train_best_model'] + df['Test_best_model']

# Sort the DataFrame
sorted_df = df.sort_values(by=['Sum', 'Difference'], ascending=[False, True])
#sorted_df = df.sort_values(by=[ 'Difference','Sum'], ascending=[  True,False])

# Highlight the general performance
general_index = sorted_df.index[0]
general_pvalue = sorted_df.loc[general_index, 'pvalue']
general_train = sorted_df.loc[general_index, 'Train_best_model']
general_test = sorted_df.loc[general_index, 'Test_best_model']

plt.scatter(general_pvalue, general_train, color='darkgreen', s=150, label='General Performance (Train)', edgecolor='black', zorder=6)
plt.scatter(general_pvalue, general_test, color='darkorange', s=150, label='General Performance (Test)', edgecolor='black', zorder=6)

# Annotate the general performance with p-value, train, and test values
plt.text(general_pvalue, general_train, f'p={general_pvalue:.4g}\nTrain={general_train:.4g}', ha='left', va='bottom', fontsize=9, color='darkgreen')
plt.text(general_pvalue, general_test, f'p={general_pvalue:.4g}\nTest={general_test:.4g}', ha='left', va='top', fontsize=9, color='darkorange')

# Add labels and legend
plt.xlabel('p-value')
plt.ylabel('Model Performance')
plt.title('Train vs Test Best Models')
plt.legend()
plt.show()
 




print("2. Reporting Generalized Performance:\n")
df = divided_result.copy()
df['Difference'] = abs(df['Train_best_model'] - df['Test_best_model'])
df['Sum'] = df['Train_best_model'] + df['Test_best_model']
sorted_df = df.sort_values(by=['Sum', 'Difference'], ascending=[False, True])
print(sorted_df.iloc[0].to_markdown())


print("3. Reporting the correlation of hyperparameters and the performance of 'Train_null_model', 'Train_pure_prs', 'Train_best_model', 'Test_pure_prs', 'Test_null_model', and 'Test_best_model':\n")

print("3. For string hyperparameters, we used one-hot encoding to find the correlation between string hyperparameters and 'Train_null_model', 'Train_pure_prs', 'Train_best_model', 'Test_pure_prs', 'Test_null_model', and 'Test_best_model'.")

print("3. We performed this analysis for those hyperparameters that have more than one unique value.")

correlation_columns = [
 'Train_null_model', 'Train_pure_prs', 'Train_best_model',
 'Test_pure_prs', 'Test_null_model', 'Test_best_model'
]

hyperparams = [col for col in divided_result.columns if len(divided_result[col].unique()) > 1]
hyperparams = list(set(hyperparams+correlation_columns))
 
# Separate numeric and string columns
numeric_hyperparams = [col for col in hyperparams if pd.api.types.is_numeric_dtype(divided_result[col])]
string_hyperparams = [col for col in hyperparams if pd.api.types.is_string_dtype(divided_result[col])]


# Encode string columns using one-hot encoding
divided_result_encoded = pd.get_dummies(divided_result, columns=string_hyperparams)

# Combine numeric hyperparams with the new one-hot encoded columns
encoded_columns = [col for col in divided_result_encoded.columns if col.startswith(tuple(string_hyperparams))]
hyperparams = numeric_hyperparams + encoded_columns
 

# Calculate correlations
correlations = divided_result_encoded[hyperparams].corr()
 
# Display correlation of hyperparameters with train/test performance columns
hyperparam_correlations = correlations.loc[hyperparams, correlation_columns]
 
hyperparam_correlations = hyperparam_correlations.fillna(0)

# Plotting the correlation heatmap
plt.figure(figsize=(12, 8))
ax = sns.heatmap(hyperparam_correlations, annot=True, cmap='viridis', fmt='.2f', cbar=True)
ax.set_xticklabels(ax.get_xticklabels(), rotation=90, ha='right')

# Rotate y-axis labels to horizontal
#ax.set_yticklabels(ax.get_yticklabels(), rotation=0, va='center')

plt.title('Correlation of Hyperparameters with Train/Test Performance')
plt.show() 

sns.set_theme(style="whitegrid")  # Choose your preferred style
pairplot = sns.pairplot(divided_result_encoded[hyperparams],hue = 'Test_best_model', palette='viridis')

# Adjust the figure size
pairplot.fig.set_size_inches(15, 15)  # You can adjust the size as needed

for ax in pairplot.axes.flatten():
    ax.set_xlabel(ax.get_xlabel(), rotation=90, ha='right')  # X-axis labels vertical
    #ax.set_ylabel(ax.get_ylabel(), rotation=0, va='bottom')  # Y-axis labels horizontal

# Show the plot
plt.show()
1. Reporting Based on Best Training Performance:

|                   | 71                     |
|:------------------|:-----------------------|
| clump_p1          | 1.0                    |
| clump_r2          | 0.1                    |
| clump_kb          | 200.0                  |
| p_window_size     | 200.0                  |
| p_slide_size      | 50.0                   |
| p_LD_threshold    | 0.25                   |
| pvalue            | 1.0                    |
| lmmmodel          | 3.0                    |
| numberofpca       | 6.0                    |
| tempalpha         | 0.1                    |
| l1weight          | 0.1                    |
| Train_pure_prs    | 0.0026369729666351196  |
| Train_null_model  | 0.2300103041419895     |
| Train_best_model  | 0.9999954636773645     |
| Test_pure_prs     | 2.6446694881965273e-05 |
| Test_null_model   | 0.11869244971792067    |
| Test_best_model   | 0.01443396283243396    |
| gemmamodel        | lmm                    |
| relatedmatrixname | standardized           |
2. Reporting Generalized Performance:

|                   | 69                    |
|:------------------|:----------------------|
| clump_p1          | 1.0                   |
| clump_r2          | 0.1                   |
| clump_kb          | 200.0                 |
| p_window_size     | 200.0                 |
| p_slide_size      | 50.0                  |
| p_LD_threshold    | 0.25                  |
| pvalue            | 0.0885866790410083    |
| lmmmodel          | 3.0                   |
| numberofpca       | 6.0                   |
| tempalpha         | 0.1                   |
| l1weight          | 0.1                   |
| Train_pure_prs    | 0.0026719077578200795 |
| Train_null_model  | 0.2300103041419895    |
| Train_best_model  | 0.9956065498373914    |
| Test_pure_prs     | 5.372199896101465e-05 |
| Test_null_model   | 0.11869244971792067   |
| Test_best_model   | 0.028478885624523796  |
| gemmamodel        | lmm                   |
| relatedmatrixname | standardized          |
| Difference        | 0.9671276642128676    |
| Sum               | 1.024085435461915     |
3. Reporting the correlation of hyperparameters and the performance of 'Train_null_model', 'Train_pure_prs', 'Train_best_model', 'Test_pure_prs', 'Test_null_model', and 'Test_best_model':

3. For string hyperparameters, we used one-hot encoding to find the correlation between string hyperparameters and 'Train_null_model', 'Train_pure_prs', 'Train_best_model', 'Test_pure_prs', 'Test_null_model', and 'Test_best_model'.
3. We performed this analysis for those hyperparameters that have more than one unique value.