MTG2#
In this notebook, we will use MTG2 to calculate the PRS.
MTG2 is a computer program that implements a multivariate linear mixed model to fit complex covariance structures based on genomic information. It is the multivariate counterpart of GCTA REML. It computes best linear unbiased predictions (BLUP) for quantifying genetic merit or genetic risk, using the direct average information (AI) algorithm.
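For context, the single-trait model behind the GBLUP step used in this workflow can be sketched (notation assumed here, not quoted from the MTG2 manual) as y = Xb + g + e, with g ~ N(0, G sigma²_g) and e ~ N(0, I sigma²_e): y is the phenotype, Xb holds the fixed effects (covariates and principal components), and G is the genomic relationship matrix (GRM). MTG2 estimates sigma²_g and sigma²_e by AI-REML, computes the BLUP of the individual genetic values g, and then back-solves per-SNP effects (SNP-BLUP), which are used below as the scoring weights for the PRS.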
Note: MTG2 supports many genomic calculations and operations; here we use only its basic functionality.
Documentation#
Download the Tool#
Installation#
To install MTG2, download the archive from the tool's website and then run the following commands:
mv MTG2_v2.22.zip\?dl\=1 MTG.zip
unzip MTG.zip
chmod u+x mtg2
./mtg2
You should see the following output upon successful installation:
******************************************************************
MTG2 version 2.22 (Oct2021)
******************************************************************
-p fam file -d dat file -g grm file ...
Note: MTG2 needs to be installed or placed in the same directory as this notebook.
GWAS file processing for MTG2#
The GWAS summary statistics are not used as scoring weights in the MTG2 workflow considered here; the SNP effects are re-estimated from the individual-level training data. We still reformat the GWAS file below, and its p-values are later used for clumping and p-value thresholding.
import os
import pandas as pd
import numpy as np
import sys
filedirec = sys.argv[1]
filedirec = "SampleData1"
#filedirec = "asthma_19"
#filedirec = "migraine_0"
def check_phenotype_is_binary_or_continous(filedirec):
# Read the processed quality controlled file for a phenotype
df = pd.read_csv(filedirec+os.sep+filedirec+'_QC.fam',sep="\s+",header=None)
column_values = df[5].unique()
if len(set(column_values)) == 2:
return "Binary"
else:
return "Continous"
# Read the GWAS file.
GWAS = filedirec + os.sep + filedirec+".gz"
df = pd.read_csv(GWAS,compression= "gzip",sep="\s+")
if "BETA" in df.columns.to_list():
# For Continous Phenotype.
df = df[['CHR', 'BP', 'SNP', 'A1', 'A2', 'N', 'SE', 'P', 'BETA', 'INFO', 'MAF']]
else:
df["BETA"] = np.log(df["OR"])
df = df[['CHR', 'BP', 'SNP', 'A1', 'A2', 'N', 'SE', 'P', 'BETA', 'INFO', 'MAF']]
df['Z'] = df['BETA'] / df['SE']
transformed_df = df[['SNP', 'N', 'Z', 'A1', 'A2']].copy()
transformed_df.columns = ['SNP', 'N', 'Z', 'INC_ALLELE', 'DEC_ALLELE']
transformed_df.to_csv(filedirec + os.sep +"mtg2.txt",sep="\t",index=False)
print(transformed_df.head().to_markdown())
print("Length of DataFrame!",len(transformed_df))
| | SNP | N | Z | INC_ALLELE | DEC_ALLELE |
|---:|:-----------|-------:|----------:|:-------------|:-------------|
| 0 | rs3131962 | 388028 | -0.701213 | A | G |
| 1 | rs12562034 | 388028 | 0.20854 | A | G |
| 2 | rs4040617 | 388028 | -0.790957 | G | A |
| 3 | rs79373928 | 388028 | 0.241718 | G | T |
| 4 | rs11240779 | 388028 | 0.53845 | G | A |
Length of DataFrame! 499617
Plink Hyperparameters#
Plink is a tool that allows us to perform clumping and pruning and to apply p-value thresholds on the training data. For each combination of clumping, pruning, and p-value-threshold parameters, a polygenic risk score is generated for each person. Plink takes the beta coefficients or odds ratios from the GWAS file without re-estimating those values. Clumping and pruning are performed on the training data using the specified parameters, and the SNPs that remain are then used to compute the polygenic risk scores for the test set; no separate clumping and pruning are required on the test set.
Details about clumping can be found here, and information about pruning is available here. P-value threshold documentation can be found [here](https://www.cog-genomics.org/plink/2.0/score).
Pruning Parameters#
These parameters inform Plink that we wish to perform pruning with a window size of 200 variants, sliding across the genome with a step size of 50 variants at a time, and to filter out any SNPs with LD r^2 higher than 0.25 (see the sketch after this list).
1. p_window_size = [200]
2. p_slide_size = [50]
3. p_LD_threshold = [0.25]
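For reference, a minimal sketch of how these pruning values are passed to Plink later in this notebook (paths follow the fold-0 example used throughout; subprocess is the same mechanism the notebook uses):
import subprocess

p_window_size, p_slide_size, p_LD_threshold = "200", "50", "0.25"

# Prune the training genotypes: within a sliding window of 200 variants (step 50),
# drop one SNP from every pair with r^2 above 0.25; kept SNPs go to the .prune.in file.
subprocess.run([
    "./plink",
    "--bfile", "SampleData1/Fold_0/train_data.QC",
    "--indep-pairwise", p_window_size, p_slide_size, p_LD_threshold,
    "--out", "SampleData1/Fold_0/train_data",
])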
Clumping Parameters#
clump_p1 is the p-value threshold for an SNP to be included as an index SNP; 1 means that all SNPs are considered for clumping. SNPs having r^2 higher than 0.1 with an index SNP will be removed, and SNPs within 200 kb of an index SNP are considered for clumping (see the sketch after this list).
1. clump_p1 = [1]
2. clump_r2 = [0.1]
3. clump_kb = [200]
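A minimal sketch of the corresponding clumping call used later in this notebook (the GWAS-derived SampleData1/SampleData1.txt file supplies the SNP and P columns; paths follow the fold-0 example):
import subprocess

clump_p1, clump_r2, clump_kb = "1", "0.1", "200"

# Clump the pruned SNPs using GWAS p-values: keep index SNPs and remove SNPs within
# 200 kb that have r^2 > 0.1 with them; the retained SNPs are listed in the .clumped file.
subprocess.run([
    "./plink",
    "--bfile", "SampleData1/Fold_0/train_data.QC",
    "--extract", "SampleData1/Fold_0/train_data.prune.in",
    "--clump", "SampleData1/SampleData1.txt",
    "--clump-p1", clump_p1,
    "--clump-r2", clump_r2,
    "--clump-kb", clump_kb,
    "--clump-snp-field", "SNP",
    "--clump-field", "P",
    "--out", "SampleData1/Fold_0/train_data",
])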
Score Parameters#
--q-score-range can be used to apply --score to many variant subsets at once, based on, e.g., p-value ranges (a sketch of the full scoring call follows the example below).
The “range file” should have range labels in the first column, p-value lower bounds in the second column, and upper bounds in the third column, e.g.
1. pv_1 0.00 0.01
2. pv_2 0.00 0.20
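A minimal sketch of how this range file is combined with --score later in this notebook (columns 1, 2, and 3 of the scoring file are the SNP ID, effect allele, and effect size; paths follow the fold-0 example):
import subprocess

# One .profile file is written per p-value range in range_list; SNP.pvalue maps SNPs to p-values.
subprocess.run([
    "./plink",
    "--bfile", "SampleData1/Fold_0/train_data.QC.clumped.pruned",
    "--score", "SampleData1/Fold_0/train_data.QCMTG_GWAS", "1", "2", "3", "header",
    "--q-score-range", "SampleData1/Fold_0/range_list", "SampleData1/Fold_0/SNP.pvalue",
    "--extract", "SampleData1/Fold_0/train_data.valid.snp",
    "--out", "SampleData1/Fold_0/MTG2/train_data",
])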
PCA#
The number of principal components (PCs) also affects the results, as is evident from the initial analysis; however, including too many PCs can overfit the model (a sketch of the PCA call is shown below).
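A minimal sketch of the PCA call used below (6 components, matching numberofpca; the .eigenvec output is later merged with the covariate file):
import subprocess

# Compute 6 principal components on the clumped-and-pruned training data.
subprocess.run([
    "./plink",
    "--bfile", "SampleData1/Fold_0/train_data.QC.clumped.pruned",
    "--extract", "SampleData1/Fold_0/train_data.valid.snp",
    "--pca", "6",
    "--out", "SampleData1/Fold_0/train_data",
])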
Kindly note that the number of p-value thresholds to be considered varies, and the actual p-values also depend on the dataset. Moreover, after clumping, pruning, and p-value thresholding, the number of SNPs in each fold can vary.
from operator import index
import pandas as pd
import numpy as np
import os
import subprocess
import sys
import pandas as pd
import statsmodels.api as sm
import pandas as pd
from sklearn.metrics import roc_auc_score, confusion_matrix
from statsmodels.stats.contingency_tables import mcnemar
def create_directory(directory):
"""Function to create a directory if it doesn't exist."""
if not os.path.exists(directory): # Checking if the directory doesn't exist
os.makedirs(directory) # Creating the directory if it doesn't exist
return directory # Returning the created or existing directory
#foldnumber = sys.argv[1]
foldnumber = "0" # Setting 'foldnumber' to "0"
folddirec = filedirec + os.sep + "Fold_" + foldnumber # Creating a directory path for the specific fold
trainfilename = "train_data" # Setting the name of the training data file
newtrainfilename = "train_data.QC" # Setting the name of the new training data file
testfilename = "test_data" # Setting the name of the test data file
newtestfilename = "test_data.QC" # Setting the name of the new test data file
# Number of PCA to be included as a covariate.
numberofpca = ["6"] # Setting the number of PCA components to be included
# Clumping parameters.
clump_p1 = [1] # List containing clump parameter 'p1'
clump_r2 = [0.1] # List containing clump parameter 'r2'
clump_kb = [200] # List containing clump parameter 'kb'
# Pruning parameters.
p_window_size = [200] # List containing pruning parameter 'window_size'
p_slide_size = [50] # List containing pruning parameter 'slide_size'
p_LD_threshold = [0.25] # List containing pruning parameter 'LD_threshold'
# Kindly note that the number of p-values to be considered varies, and the actual p-value depends on the dataset as well.
# We will specify the range list here.
minimumpvalue = 10 # Minimum p-value in exponent
numberofintervals = 20 # Number of intervals to be considered
allpvalues = np.logspace(-minimumpvalue, 0, numberofintervals, endpoint=True) # Generating an array of logarithmically spaced p-values
count = 1
with open(folddirec + os.sep + 'range_list', 'w') as file:
for value in allpvalues:
file.write(f'pv_{value} 0 {value}\n') # Writing range information to the 'range_list' file
count = count + 1
pvaluefile = folddirec + os.sep + 'range_list'
# Initializing an empty DataFrame with specified column names
prs_result = pd.DataFrame(columns=["clump_p1", "clump_r2", "clump_kb", "p_window_size", "p_slide_size", "p_LD_threshold",
"pvalue", "numberofpca","numberofvariants","Train_pure_prs", "Train_null_model", "Train_best_model",
"Test_pure_prs", "Test_null_model", "Test_best_model"])
Define Helper Functions#
Perform Clumping and Pruning
Calculate PCA Using Plink
Fit Binary Phenotype and Save Results
Fit Continuous Phenotype and Save Results
import os
import subprocess
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import explained_variance_score
def perform_clumping_and_pruning_on_individual_data(traindirec, newtrainfilename,numberofpca, p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile):
command = [
"./plink",
"--bfile", traindirec+os.sep+newtrainfilename,
"--indep-pairwise", p1_val, p2_val, p3_val,
"--out", traindirec+os.sep+trainfilename
]
subprocess.run(command)
# Pruning was performed above; now perform clumping on the pruned SNPs.
command = [
"./plink",
"--bfile", traindirec+os.sep+newtrainfilename,
"--clump-p1", c1_val,
"--extract", traindirec+os.sep+trainfilename+".prune.in",
"--clump-r2", c2_val,
"--clump-kb", c3_val,
"--clump", filedirec+os.sep+filedirec+".txt",
"--clump-snp-field", "SNP",
"--clump-field", "P",
"--out", traindirec+os.sep+trainfilename
]
subprocess.run(command)
# Extract the valid SNPs from the clumped file.
# On Linux the awk command below is sufficient; on Windows, gawk is required.
### Get gawk from https://sourceforge.net/projects/gnuwin32/
### and place it in the same directory.
#os.system("gawk "+"\""+"NR!=1{print $3}"+"\" "+ traindirec+os.sep+trainfilename+".clumped > "+traindirec+os.sep+trainfilename+".valid.snp")
#print("gawk "+"\""+"NR!=1{print $3}"+"\" "+ traindirec+os.sep+trainfilename+".clumped > "+traindirec+os.sep+trainfilename+".valid.snp")
#Linux:
command = f"awk 'NR!=1{{print $3}}' {traindirec}{os.sep}{trainfilename}.clumped > {traindirec}{os.sep}{trainfilename}.valid.snp"
os.system(command)
command = [
"./plink",
"--make-bed",
"--bfile", traindirec+os.sep+newtrainfilename,
"--indep-pairwise", p1_val, p2_val, p3_val,
"--extract", traindirec+os.sep+trainfilename+".valid.snp",
"--out", traindirec+os.sep+newtrainfilename+".clumped.pruned"
]
subprocess.run(command)
command = [
"./plink",
"--make-bed",
"--bfile", traindirec+os.sep+testfilename,
"--indep-pairwise", p1_val, p2_val, p3_val,
"--extract", traindirec+os.sep+trainfilename+".valid.snp",
"--out", traindirec+os.sep+testfilename+".clumped.pruned"
]
subprocess.run(command)
def calculate_pca_for_traindata_testdata_for_clumped_pruned_snps(traindirec, newtrainfilename,p):
# Calculate the PCA for the test and training data using the same final set of SNPs.
# PCs are extracted at this point, after clumping and pruning.
command = [
"./plink",
"--bfile", folddirec+os.sep+testfilename+".clumped.pruned",
# Select the final variants after clumping and pruning.
"--extract", traindirec+os.sep+trainfilename+".valid.snp",
"--pca", p,
"--out", folddirec+os.sep+testfilename
]
subprocess.run(command)
command = [
"./plink",
"--bfile", traindirec+os.sep+newtrainfilename+".clumped.pruned",
# Select the final variants after clumping and pruning.
"--extract", traindirec+os.sep+trainfilename+".valid.snp",
"--pca", p,
"--out", traindirec+os.sep+trainfilename
]
subprocess.run(command)
# This function fits the binary phenotype model on the PRS.
def fit_binary_phenotype_on_PRS(traindirec, newtrainfilename,p,sblupmodel, p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile):
threshold_values = allpvalues
# Merge the covariates, pca and phenotypes.
tempphenotype_train = pd.read_table(traindirec+os.sep+newtrainfilename+".clumped.pruned"+".fam", sep="\s+",header=None)
phenotype_train = pd.DataFrame()
phenotype_train["Phenotype"] = tempphenotype_train[5].values
pcs_train = pd.read_table(traindirec+os.sep+trainfilename+".eigenvec", sep="\s+",header=None, names=["FID", "IID"] + [f"PC{str(i)}" for i in range(1, int(p)+1)])
covariate_train = pd.read_table(traindirec+os.sep+trainfilename+".cov",sep="\s+")
covariate_train.fillna(0, inplace=True)
covariate_train = covariate_train[covariate_train["FID"].isin(pcs_train["FID"].values) & covariate_train["IID"].isin(pcs_train["IID"].values)]
covariate_train['FID'] = covariate_train['FID'].astype(str)
pcs_train['FID'] = pcs_train['FID'].astype(str)
covariate_train['IID'] = covariate_train['IID'].astype(str)
pcs_train['IID'] = pcs_train['IID'].astype(str)
covandpcs_train = pd.merge(covariate_train, pcs_train, on=["FID","IID"])
covandpcs_train.fillna(0, inplace=True)
## Scale the covariates!
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import explained_variance_score
scaler = MinMaxScaler()
normalized_values_train = scaler.fit_transform(covandpcs_train.iloc[:, 2:])
#covandpcs_train.iloc[:, 2:] = normalized_values_test
tempphenotype_test = pd.read_table(traindirec+os.sep+testfilename+".clumped.pruned"+".fam", sep="\s+",header=None)
phenotype_test= pd.DataFrame()
phenotype_test["Phenotype"] = tempphenotype_test[5].values
pcs_test = pd.read_table(traindirec+os.sep+testfilename+".eigenvec", sep="\s+",header=None, names=["FID", "IID"] + [f"PC{str(i)}" for i in range(1, int(p)+1)])
covariate_test = pd.read_table(traindirec+os.sep+testfilename+".cov",sep="\s+")
covariate_test.fillna(0, inplace=True)
covariate_test = covariate_test[covariate_test["FID"].isin(pcs_test["FID"].values) & covariate_test["IID"].isin(pcs_test["IID"].values)]
covariate_test['FID'] = covariate_test['FID'].astype(str)
pcs_test['FID'] = pcs_test['FID'].astype(str)
covariate_test['IID'] = covariate_test['IID'].astype(str)
pcs_test['IID'] = pcs_test['IID'].astype(str)
covandpcs_test = pd.merge(covariate_test, pcs_test, on=["FID","IID"])
covandpcs_test.fillna(0, inplace=True)
normalized_values_test = scaler.transform(covandpcs_test.iloc[:, 2:])
#covandpcs_test.iloc[:, 2:] = normalized_values_test
tempalphas = [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]
l1weights = [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]
tempalphas = [0.1]
l1weights = [0.1]
phenotype_train["Phenotype"] = phenotype_train["Phenotype"].replace({1: 0, 2: 1})
phenotype_test["Phenotype"] = phenotype_test["Phenotype"].replace({1: 0, 2: 1})
for tempalpha in tempalphas:
for l1weight in l1weights:
try:
null_model = sm.Logit(phenotype_train["Phenotype"], sm.add_constant(covandpcs_train.iloc[:, 2:])).fit_regularized(alpha=tempalpha, L1_wt=l1weight)
#null_model = sm.Logit(phenotype_train["Phenotype"], sm.add_constant(covandpcs_train.iloc[:, 2:])).fit()
except:
print("XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX")
continue
train_null_predicted = null_model.predict(sm.add_constant(covandpcs_train.iloc[:, 2:]))
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.metrics import r2_score
test_null_predicted = null_model.predict(sm.add_constant(covandpcs_test.iloc[:, 2:]))
global prs_result
for i in threshold_values:
try:
prs_train = pd.read_table(traindirec+os.sep+Name+os.sep+"train_data.pv_"+f"{i}.profile", sep="\s+", usecols=["FID", "IID", "SCORE"])
except:
continue
prs_train['FID'] = prs_train['FID'].astype(str)
prs_train['IID'] = prs_train['IID'].astype(str)
try:
prs_test = pd.read_table(traindirec+os.sep+Name+os.sep+"test_data.pv_"+f"{i}.profile", sep="\s+", usecols=["FID", "IID", "SCORE"])
except:
continue
prs_test['FID'] = prs_test['FID'].astype(str)
prs_test['IID'] = prs_test['IID'].astype(str)
pheno_prs_train = pd.merge(covandpcs_train, prs_train, on=["FID", "IID"])
pheno_prs_test = pd.merge(covandpcs_test, prs_test, on=["FID", "IID"])
try:
model = sm.Logit(phenotype_train["Phenotype"], sm.add_constant(pheno_prs_train.iloc[:, 2:])).fit_regularized(alpha=tempalpha, L1_wt=l1weight)
#model = sm.Logit(phenotype_train["Phenotype"], sm.add_constant(pheno_prs_train.iloc[:, 2:])).fit()
except:
continue
train_best_predicted = model.predict(sm.add_constant(pheno_prs_train.iloc[:, 2:]))
test_best_predicted = model.predict(sm.add_constant(pheno_prs_test.iloc[:, 2:]))
from sklearn.metrics import roc_auc_score, confusion_matrix
prs_result = prs_result._append({
"clump_p1": c1_val,
"clump_r2": c2_val,
"clump_kb": c3_val,
"p_window_size": p1_val,
"p_slide_size": p2_val,
"p_LD_threshold": p3_val,
"pvalue": i,
"numberofpca":p,
"sblupmodel":sblupmodel,
"tempalpha":str(tempalpha),
"l1weight":str(l1weight),
"Train_pure_prs":roc_auc_score(phenotype_train["Phenotype"].values,prs_train['SCORE'].values),
"Train_null_model":roc_auc_score(phenotype_train["Phenotype"].values,train_null_predicted.values),
"Train_best_model":roc_auc_score(phenotype_train["Phenotype"].values,train_best_predicted.values),
"Test_pure_prs":roc_auc_score(phenotype_test["Phenotype"].values,prs_test['SCORE'].values),
"Test_null_model":roc_auc_score(phenotype_test["Phenotype"].values,test_null_predicted.values),
"Test_best_model":roc_auc_score(phenotype_test["Phenotype"].values,test_best_predicted.values),
}, ignore_index=True)
prs_result.to_csv(traindirec+os.sep+Name+os.sep+"Results.csv",index=False)
return
# This function fits the continuous phenotype model on the PRS.
def fit_continous_phenotype_on_PRS(traindirec, newtrainfilename,p, sblupmodel,p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile):
threshold_values = allpvalues
# Merge the covariates, pca and phenotypes.
tempphenotype_train = pd.read_table(traindirec+os.sep+newtrainfilename+".clumped.pruned"+".fam", sep="\s+",header=None)
phenotype_train = pd.DataFrame()
phenotype_train["Phenotype"] = tempphenotype_train[5].values
pcs_train = pd.read_table(traindirec+os.sep+trainfilename+".eigenvec", sep="\s+",header=None, names=["FID", "IID"] + [f"PC{str(i)}" for i in range(1, int(p)+1)])
covariate_train = pd.read_table(traindirec+os.sep+trainfilename+".cov",sep="\s+")
covariate_train.fillna(0, inplace=True)
covariate_train = covariate_train[covariate_train["FID"].isin(pcs_train["FID"].values) & covariate_train["IID"].isin(pcs_train["IID"].values)]
covariate_train['FID'] = covariate_train['FID'].astype(str)
pcs_train['FID'] = pcs_train['FID'].astype(str)
covariate_train['IID'] = covariate_train['IID'].astype(str)
pcs_train['IID'] = pcs_train['IID'].astype(str)
covandpcs_train = pd.merge(covariate_train, pcs_train, on=["FID","IID"])
covandpcs_train.fillna(0, inplace=True)
## Scale the covariates!
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import explained_variance_score
scaler = MinMaxScaler()
normalized_values_train = scaler.fit_transform(covandpcs_train.iloc[:, 2:])
#covandpcs_train.iloc[:, 2:] = normalized_values_test
tempphenotype_test = pd.read_table(traindirec+os.sep+testfilename+".clumped.pruned"+".fam", sep="\s+",header=None)
phenotype_test= pd.DataFrame()
phenotype_test["Phenotype"] = tempphenotype_test[5].values
pcs_test = pd.read_table(traindirec+os.sep+testfilename+".eigenvec", sep="\s+",header=None, names=["FID", "IID"] + [f"PC{str(i)}" for i in range(1, int(p)+1)])
covariate_test = pd.read_table(traindirec+os.sep+testfilename+".cov",sep="\s+")
covariate_test.fillna(0, inplace=True)
covariate_test = covariate_test[covariate_test["FID"].isin(pcs_test["FID"].values) & covariate_test["IID"].isin(pcs_test["IID"].values)]
covariate_test['FID'] = covariate_test['FID'].astype(str)
pcs_test['FID'] = pcs_test['FID'].astype(str)
covariate_test['IID'] = covariate_test['IID'].astype(str)
pcs_test['IID'] = pcs_test['IID'].astype(str)
covandpcs_test = pd.merge(covariate_test, pcs_test, on=["FID","IID"])
covandpcs_test.fillna(0, inplace=True)
normalized_values_test = scaler.transform(covandpcs_test.iloc[:, 2:])
#covandpcs_test.iloc[:, 2:] = normalized_values_test
tempalphas = [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]
l1weights = [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9]
tempalphas = [0.1]
l1weights = [0.1]
#phenotype_train["Phenotype"] = phenotype_train["Phenotype"].replace({1: 0, 2: 1})
#phenotype_test["Phenotype"] = phenotype_test["Phenotype"].replace({1: 0, 2: 1})
for tempalpha in tempalphas:
for l1weight in l1weights:
try:
#null_model = sm.OLS(phenotype_train["Phenotype"], sm.add_constant(covandpcs_train.iloc[:, 2:])).fit_regularized(alpha=tempalpha, L1_wt=l1weight)
null_model = sm.OLS(phenotype_train["Phenotype"], sm.add_constant(covandpcs_train.iloc[:, 2:])).fit()
#null_model = sm.OLS(phenotype_train["Phenotype"], sm.add_constant(covandpcs_train.iloc[:, 2:])).fit()
except:
print("XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX")
continue
train_null_predicted = null_model.predict(sm.add_constant(covandpcs_train.iloc[:, 2:]))
from sklearn.metrics import roc_auc_score, confusion_matrix
from sklearn.metrics import r2_score
test_null_predicted = null_model.predict(sm.add_constant(covandpcs_test.iloc[:, 2:]))
global prs_result
for i in threshold_values:
try:
prs_train = pd.read_table(traindirec+os.sep+Name+os.sep+"train_data.pv_"+f"{i}.profile", sep="\s+", usecols=["FID", "IID", "SCORE"])
except:
continue
prs_train['FID'] = prs_train['FID'].astype(str)
prs_train['IID'] = prs_train['IID'].astype(str)
try:
prs_test = pd.read_table(traindirec+os.sep+Name+os.sep+"test_data.pv_"+f"{i}.profile", sep="\s+", usecols=["FID", "IID", "SCORE"])
except:
continue
prs_test['FID'] = prs_test['FID'].astype(str)
prs_test['IID'] = prs_test['IID'].astype(str)
pheno_prs_train = pd.merge(covandpcs_train, prs_train, on=["FID", "IID"])
pheno_prs_test = pd.merge(covandpcs_test, prs_test, on=["FID", "IID"])
try:
#model = sm.OLS(phenotype_train["Phenotype"], sm.add_constant(pheno_prs_train.iloc[:, 2:])).fit_regularized(alpha=tempalpha, L1_wt=l1weight)
model = sm.OLS(phenotype_train["Phenotype"], sm.add_constant(pheno_prs_train.iloc[:, 2:])).fit()
except:
continue
train_best_predicted = model.predict(sm.add_constant(pheno_prs_train.iloc[:, 2:]))
test_best_predicted = model.predict(sm.add_constant(pheno_prs_test.iloc[:, 2:]))
from sklearn.metrics import roc_auc_score, confusion_matrix
prs_result = prs_result._append({
"clump_p1": c1_val,
"clump_r2": c2_val,
"clump_kb": c3_val,
"p_window_size": p1_val,
"p_slide_size": p2_val,
"p_LD_threshold": p3_val,
"pvalue": i,
"numberofpca":p,
"sblupmodel":sblupmodel,
"tempalpha":str(tempalpha),
"l1weight":str(l1weight),
"numberofvariants": len(pd.read_csv(traindirec+os.sep+newtrainfilename+".clumped.pruned.bim")),
"Train_pure_prs":explained_variance_score(phenotype_train["Phenotype"],prs_train['SCORE'].values),
"Train_null_model":explained_variance_score(phenotype_train["Phenotype"],train_null_predicted),
"Train_best_model":explained_variance_score(phenotype_train["Phenotype"],train_best_predicted),
"Test_pure_prs":explained_variance_score(phenotype_test["Phenotype"],prs_test['SCORE'].values),
"Test_null_model":explained_variance_score(phenotype_test["Phenotype"],test_null_predicted),
"Test_best_model":explained_variance_score(phenotype_test["Phenotype"],test_best_predicted),
}, ignore_index=True)
prs_result.to_csv(traindirec+os.sep+Name+os.sep+"Results.csv",index=False)
return
Execute MTG2#
# Define a global variable to store results
prs_result = pd.DataFrame()
def transform_mtg2_data(traindirec, newtrainfilename,p,sblupmodel, p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile):
### First perform clumping on the file and save the clumped file.
#perform_clumping_and_pruning_on_individual_data(traindirec, newtrainfilename,p, p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile)
#newtrainfilename = newtrainfilename+".clumped.pruned"
#testfilename = testfilename+".clumped.pruned"
#clupmedfile = traindirec+os.sep+newtrainfilename+".clump"
#prunedfile = traindirec+os.sep+newtrainfilename+".clumped.pruned"
# Also extract the PCA at this point for both test and training data.
#calculate_pca_for_traindata_testdata_for_clumped_pruned_snps(traindirec, newtrainfilename,p)
#Extract p-values from the GWAS file.
# Command for Linux.
os.system("awk "+"\'"+"{print $3,$8}"+"\'"+" ./"+filedirec+os.sep+filedirec+".txt > ./"+traindirec+os.sep+"SNP.pvalue")
# Command for Windows.
### For Windows, gawk is required.
### Get it from https://sourceforge.net/projects/gnuwin32/
### and place it in the same directory.
#os.system("gawk "+"\""+"{print $3,$8}"+"\""+" ./"+filedirec+os.sep+filedirec+".txt > ./"+traindirec+os.sep+"SNP.pvalue")
#print("gawk "+"\""+"{print $3,$8}"+"\""+" ./"+filedirec+os.sep+filedirec+".txt > ./"+traindirec+os.sep+"SNP.pvalue")
#exit(0)
# Generate the covariate file (COV_PCA_MTG2) required by MTG2.
# Merge the covariates, pca and phenotypes.
tempphenotype_train = pd.read_table(traindirec+os.sep+newtrainfilename+".clumped.pruned"+".fam", sep="\s+",header=None)
phenotype_train = pd.DataFrame()
phenotype_train[["FID", "IID", "t1"]] = tempphenotype_train.iloc[:, [0, 1, 5]].values
phenotype_train.to_csv(traindirec+os.sep+newtrainfilename+".dat",sep="\t",index=False,header=False)
pcs_train = pd.read_table(traindirec+os.sep+trainfilename+".eigenvec", sep="\s+",header=None, names=["FID", "IID"] + [f"PC{str(i)}" for i in range(1, int(p)+1)])
covariate_train = pd.read_table(traindirec+os.sep+trainfilename+".cov",sep="\s+")
covariate_train.fillna(0, inplace=True)
covariate_train = covariate_train[covariate_train["FID"].isin(pcs_train["FID"].values) & covariate_train["IID"].isin(pcs_train["IID"].values)]
covariate_train['FID'] = covariate_train['FID'].astype(str)
pcs_train['FID'] = pcs_train['FID'].astype(str)
covariate_train['IID'] = covariate_train['IID'].astype(str)
pcs_train['IID'] = pcs_train['IID'].astype(str)
covandpcs_train = pd.merge(covariate_train, pcs_train, on=["FID","IID"])
covandpcs_train.fillna(0, inplace=True)
covandpcs_train.to_csv(traindirec+os.sep+trainfilename+".COV_PCA",sep="\t",index=False)
covandpcs_train.to_csv(traindirec+os.sep+trainfilename+".COV_PCA_MTG2",sep="\t",index=False,header=False)
covandpcs_train.iloc[:, 2:].to_csv(traindirec+os.sep+trainfilename+".COV_PCAgemma", header=False, index=False,sep="\t")
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import explained_variance_score
scaler = MinMaxScaler()
normalized_values_train = scaler.fit_transform(covandpcs_train.iloc[:, 2:])
tempphenotype_test = pd.read_table(traindirec+os.sep+testfilename+".clumped.pruned"+".fam", sep="\s+",header=None)
phenotype_test= pd.DataFrame()
phenotype_test[["FID", "IID", "t1"]] = tempphenotype_test.iloc[:, [0, 1, 5]].values
phenotype_test.to_csv(traindirec+os.sep+testfilename+".dat",sep="\t",index=False,header=False)
pcs_test = pd.read_table(traindirec+os.sep+testfilename+".eigenvec", sep="\s+",header=None, names=["FID", "IID"] + [f"PC{str(i)}" for i in range(1, int(p)+1)])
covariate_test = pd.read_table(traindirec+os.sep+testfilename+".cov",sep="\s+")
covariate_test.fillna(0, inplace=True)
covariate_test = covariate_test[covariate_test["FID"].isin(pcs_test["FID"].values) & covariate_test["IID"].isin(pcs_test["IID"].values)]
covariate_test['FID'] = covariate_test['FID'].astype(str)
pcs_test['FID'] = pcs_test['FID'].astype(str)
covariate_test['IID'] = covariate_test['IID'].astype(str)
pcs_test['IID'] = pcs_test['IID'].astype(str)
covandpcs_test = pd.merge(covariate_test, pcs_test, on=["FID","IID"])
covandpcs_test.fillna(0, inplace=True)
normalized_values_test = scaler.transform(covandpcs_test.iloc[:, 2:])
covandpcs_test.to_csv(traindirec+os.sep+testfilename+".COV_PCA",sep="\t",index=False)
covandpcs_test.to_csv(traindirec+os.sep+testfilename+".COV_PCA_MTG2",sep="\t",index=False,header=False)
covandpcs_test.iloc[:, 2:].to_csv(traindirec+os.sep+testfilename+".COV_PCAgemma", header=False, index=False,sep="\t")
import glob
def delete_files_with_prefix(directory, filename_prefix,prefix):
"""
Deletes all files in the specified directory that start with the given filename prefix.
Parameters:
directory (str): The directory where the files are located.
filename_prefix (str): The prefix of the filenames to be deleted.
"""
# Construct the full file prefix
file_prefix = os.path.join(directory, filename_prefix + prefix)
# Find all files that match the prefix
files_to_delete = glob.glob(file_prefix + "*")
# Delete each file
for file in files_to_delete:
try:
os.remove(file)
print(f"Deleted: {file}")
except Exception as e:
print(f"Failed to delete {file}: {e}")
# Delete the files generated in the previous iteration.
delete_files_with_prefix(traindirec, newtrainfilename,"mtg2temp")
delete_files_with_prefix(traindirec, newtrainfilename,"MTG_SCORE")
delete_files_with_prefix(traindirec, newtrainfilename,"MTG_GWAS")
delete_files_with_prefix(traindirec, newtrainfilename,"train_data.QCmtgtmp")
# First, build the GRM (genomic relationship matrix) using Plink.
command = [
"./plink",
"--bfile", traindirec+os.sep+newtrainfilename+".clumped.pruned",
"--make-grm-bin",
"--out", traindirec+os.sep+newtrainfilename+"tempgrm"
]
print(" ".join(command))
subprocess.run(command)
# Generate the specific files required by MTG2.
command = [
"./mtg2",
"-p", traindirec+os.sep+newtrainfilename+ ".clumped.pruned"+".fam",
"-bg",traindirec+os.sep+newtrainfilename+"tempgrm.grm.bin",
"-d", traindirec+os.sep+newtrainfilename+".dat",
"-bv",traindirec+os.sep+newtrainfilename+"mtg2temp.bv",
"-sv",traindirec+os.sep+newtrainfilename+"mtg2temp.sv",
"-bvr", traindirec+os.sep+newtrainfilename+"mtg2temp.bvr",
#"-eig", grm_file,
#"-cc", class_cov_file,
#"-sbv", sblupmodel,
"-qc", traindirec+os.sep+trainfilename+".COV_PCA_MTG2",
"-out", traindirec+os.sep+newtrainfilename+"mtg2temp",
#"-sv", start_value_file,
"-mod", str(1)
]
# Run the command
print(" ".join(command))
subprocess.run(command)
command = [
"./mtg2",
"-plink", traindirec+os.sep+newtrainfilename+".clumped.pruned",
"-frq","1",
"-out", traindirec+os.sep+newtrainfilename
]
# Run the command
print(" ".join(command))
subprocess.run(command)
command = [
"./mtg2",
"-plink", traindirec+os.sep+newtrainfilename+".clumped.pruned",
"-vgpy", traindirec+os.sep+newtrainfilename + "mtg2temp",
"-sbv", sblupmodel,
"-out", traindirec+os.sep+newtrainfilename+"mtg2temp"
]
# Run the command
print(" ".join(command))
#subprocess.run(command)
#raise
t_value = "1"
input_file = traindirec+os.sep+newtrainfilename+"mtg2temp.bv.py"
tmp_file = traindirec+os.sep+newtrainfilename+"mtgtmp"
awk_command = f"awk '$1=={t_value} {{print $2}}' {input_file} > {tmp_file}"
subprocess.run(awk_command, shell=True)
mtg2_command = [
"./mtg2",
"-plink", traindirec+os.sep+newtrainfilename+".clumped.pruned",
"-vgpy", tmp_file,
"-sbv", sblupmodel,
"-out", traindirec+os.sep+newtrainfilename+"MTG_SCORE"
]
# Run the mtg2 command
print(" ".join(mtg2_command))
subprocess.run(mtg2_command)
temp = pd.read_csv( traindirec+os.sep+newtrainfilename+"MTG_SCORE",sep="\s+",header=None)
print(temp.head())
if check_phenotype_is_binary_or_continous(filedirec)=="Binary":
temp[2] = np.log(temp[2])
else:
pass
temp[2] = temp[2].replace([np.inf, -np.inf], np.nan) # Replace inf and -inf with NaN
temp[2] = temp[2].fillna(0)
print(temp.head())
temp.to_csv(traindirec+os.sep+newtrainfilename+"MTG_GWAS",sep="\t",index=False)
command = [
"./plink",
"--bfile", traindirec+os.sep+newtrainfilename+".clumped.pruned",
### Column 1 = SNP ID, column 2 = effect allele, column 3 = SNP-BLUP effect.
"--score", traindirec+os.sep+newtrainfilename+"MTG_GWAS", "1", "2", "3", "header",
"--q-score-range", traindirec+os.sep+"range_list",traindirec+os.sep+"SNP.pvalue",
"--extract", traindirec+os.sep+trainfilename+".valid.snp",
"--out", traindirec+os.sep+Name+os.sep+trainfilename
]
#exit(0)
subprocess.run(command)
command = [
"./plink",
"--bfile", folddirec+os.sep+testfilename+".clumped.pruned",
### Column 1 = SNP ID, column 2 = effect allele, column 3 = SNP-BLUP effect.
"--score", traindirec+os.sep+newtrainfilename+"MTG_GWAS", "1", "2", "3", "header",
"--q-score-range", traindirec+os.sep+"range_list",traindirec+os.sep+"SNP.pvalue",
"--extract", traindirec+os.sep+trainfilename+".valid.snp",
"--out", folddirec+os.sep+Name+os.sep+testfilename
]
subprocess.run(command)
if check_phenotype_is_binary_or_continous(filedirec)=="Binary":
print("Binary Phenotype!")
fit_binary_phenotype_on_PRS(traindirec, newtrainfilename,p,sblupmodel, p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile)
else:
print("Continous Phenotype!")
fit_continous_phenotype_on_PRS(traindirec, newtrainfilename,p,sblupmodel, p1_val, p2_val, p3_val, c1_val, c2_val, c3_val,Name,pvaluefile)
result_directory = "MTG2"
sblupmodels = ['a' ]
# Nested loops to iterate over different parameter values
create_directory(folddirec+os.sep+result_directory)
for p1_val in p_window_size:
for p2_val in p_slide_size:
for p3_val in p_LD_threshold:
for c1_val in clump_p1:
for c2_val in clump_r2:
for c3_val in clump_kb:
for p in numberofpca:
for sblupmodel in sblupmodels:
transform_mtg2_data(folddirec, newtrainfilename, p,sblupmodel, str(p1_val), str(p2_val), str(p3_val), str(c1_val), str(c2_val), str(c3_val), result_directory, pvaluefile)
Deleted: SampleData1/Fold_0/train_data.QCmtg2temp.bv
Deleted: SampleData1/Fold_0/train_data.QCmtg2temp.bvr.py
Deleted: SampleData1/Fold_0/train_data.QCmtg2temp.bvr.fsl
Deleted: SampleData1/Fold_0/train_data.QCmtg2temp.bvr
Deleted: SampleData1/Fold_0/train_data.QCmtg2temp
Deleted: SampleData1/Fold_0/train_data.QCmtg2temp.bv.fsl
Deleted: SampleData1/Fold_0/train_data.QCmtg2temp.bvr.r2
Deleted: SampleData1/Fold_0/train_data.QCmtg2temp.bv.py
Deleted: SampleData1/Fold_0/train_data.QCMTG_SCORE
Deleted: SampleData1/Fold_0/train_data.QCMTG_GWAS
./plink --bfile SampleData1/Fold_0/train_data.QC.clumped.pruned --make-grm-bin --out SampleData1/Fold_0/train_data.QCtempgrm
PLINK v1.90b7.2 64-bit (11 Dec 2023) www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang GNU General Public License v3
Logging to SampleData1/Fold_0/train_data.QCtempgrm.log.
Options in effect:
--bfile SampleData1/Fold_0/train_data.QC.clumped.pruned
--make-grm-bin
--out SampleData1/Fold_0/train_data.QCtempgrm
63761 MB RAM detected; reserving 31880 MB for main workspace.
38646 variants loaded from .bim file.
380 people (183 males, 197 females) loaded from .fam.
380 phenotype values loaded from .fam.
Using up to 8 threads (change this with --threads).
Before main variant filters, 380 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is exactly 1.
38646 variants and 380 people pass filters and QC.
Phenotype data is quantitative.
38646 markers complete.
Relationship matrix calculation complete.
Relationship matrix written to SampleData1/Fold_0/train_data.QCtempgrm.grm.bin, and IDs written to SampleData1/Fold_0/train_data.QCtempgrm.grm.id .
./mtg2 -p SampleData1/Fold_0/train_data.QC.clumped.pruned.fam -bg SampleData1/Fold_0/train_data.QCtempgrm.grm.bin -d SampleData1/Fold_0/train_data.QC.dat -bv SampleData1/Fold_0/train_data.QCmtg2temp.bv -sv SampleData1/Fold_0/train_data.QCmtg2temp.sv -bvr SampleData1/Fold_0/train_data.QCmtg2temp.bvr -qc SampleData1/Fold_0/train_data.COV_PCA_MTG2 -out SampleData1/Fold_0/train_data.QCmtg2temp -mod 1
******************************************************************
MTG2 version 2.22 (Oct2021)
******************************************************************
ID file : SampleData1/Fold_0/train_data.QC.clumped.pruned.fam
bgrm file : SampleData1/Fold_0/train_data.QCtempgrm.grm.bin
dat file : SampleData1/Fold_0/train_data.QC.dat
ebv output: SampleData1/Fold_0/train_data.QCmtg2temp.bv
sv file : SampleData1/Fold_0/train_data.QCmtg2temp.sv
ebv output: SampleData1/Fold_0/train_data.QCmtg2temp.bvr
qc file : SampleData1/Fold_0/train_data.COV_PCA_MTG2
out file : SampleData1/Fold_0/train_data.QCmtg2temp
nr mode : 1
no. ID: 380
no. grm: 1
data check >>> take 1: 3.6640000E-03
data check >>> take 2: 1.1809000E-02
1 trait 1 mean
2 trait 1 1th in qc file
3 trait 1 2th in qc file
4 trait 1 3th in qc file
5 trait 1 4th in qc file
6 trait 1 5th in qc file
7 trait 1 6th in qc file
8 trait 1 7th in qc file
*********************************************************************
MTGREML, MTGBLUP, SNP BLUP, Random regression and many
The length (row) of ID and data should be the same
The order of GRM follows ID file
The order of covariate file should be the same as ID file
Cite "Maier et al (2015) AJHG 96: 283-294" or
"Lee and van der Werf (2016) Bioinformatics 32: 1420-1422
*********************************************************************
grm file:SampleData1/Fold_0/train_data.QCtempgrm.grm.bin
grm reading done *****************************
== start 20241017 022817.540 +1000
*** number of records used ***
trait 1 : 380
V inverse done - time: 2.4580001E-03
LKH -129.2133
Ve 0.4473
Va 0.4473
derivatives done - time: 5.6800002E-04
V inverse done - time: 1.4609999E-03
likelihood nan >> update reduced by the factor
V inverse done - time: 1.4690000E-03
likelihood nan >> update reduced by the factor
V inverse done - time: 1.4800000E-03
LKH -125.6596
Ve 0.2043
Va 0.6055
derivatives done - time: 5.3700001E-04
V inverse done - time: 1.4890000E-03
likelihood nan >> update reduced by the factor
V inverse done - time: 1.4770000E-03
LKH -123.8149
Ve 0.0260
Va 0.6970
derivatives done - time: 5.3600001E-04
V inverse done - time: 1.5050001E-03
LKH -123.7388
Ve 0.0231
Va 0.6788
derivatives done - time: 5.3600001E-04
V inverse done - time: 1.5240000E-03
LKH -123.7387
Ve 0.0234
Va 0.6791
derivatives done - time: 5.3800002E-04
for BLUP solution after convergence *******************
for BLUP solution and reliability after convergence ***********
== finish 20241017 022817.644 +1000
./mtg2 -plink SampleData1/Fold_0/train_data.QC.clumped.pruned -frq 1 -out SampleData1/Fold_0/train_data.QC
******************************************************************
MTG2 version 2.22 (Oct2021)
******************************************************************
plink : SampleData1/Fold_0/train_data.QC.clumped.pruned
freq : 1
out file : SampleData1/Fold_0/train_data.QC
no. ID: 380
data check >>> take 1: 1.1280000E-03
data check >>> take 2: 1.0576000E-02
*********************************************************************
MTGREML, MTGBLUP, SNP BLUP, Random regression and many
The length (row) of ID and data should be the same
The order of GRM follows ID file
The order of covariate file should be the same as ID file
Cite "Maier et al (2015) AJHG 96: 283-294" or
"Lee and van der Werf (2016) Bioinformatics 32: 1420-1422
*********************************************************************
grm reading done *****************************
no. ID : 380
no. marker: 38646
1 - ok 2 - ok SNP-major mode for PLINK .bed file
99% done
*********************************************************************
allele frequency
99% done
./mtg2 -plink SampleData1/Fold_0/train_data.QC.clumped.pruned -vgpy SampleData1/Fold_0/train_data.QCmtg2temp -sbv a -out SampleData1/Fold_0/train_data.QCmtg2temp
./mtg2 -plink SampleData1/Fold_0/train_data.QC.clumped.pruned -vgpy SampleData1/Fold_0/train_data.QCmtgtmp -sbv a -out SampleData1/Fold_0/train_data.QCMTG_SCORE
******************************************************************
MTG2 version 2.22 (Oct2021)
******************************************************************
plink : SampleData1/Fold_0/train_data.QC.clumped.pruned
vgpy : SampleData1/Fold_0/train_data.QCmtgtmp
snp_blup : a
out file : SampleData1/Fold_0/train_data.QCMTG_SCORE
no. ID: 380
data check >>> take 1: 1.1440000E-03
data check >>> take 2: 1.0250000E-02
*********************************************************************
MTGREML, MTGBLUP, SNP BLUP, Random regression and many
The length (row) of ID and data should be the same
The order of GRM follows ID file
The order of covariate file should be the same as ID file
Cite "Maier et al (2015) AJHG 96: 283-294" or
"Lee and van der Werf (2016) Bioinformatics 32: 1420-1422
*********************************************************************
grm reading done *****************************
no. ID : 380
no. marker: 38646
1 - ok 2 - ok SNP-major mode for PLINK .bed file
99% done
*********************************************************************
allele frequency
99% done
*********************************************************************
snp blup
0 1 2
0 rs11240779 G 0.000100
1 rs2272757 G -0.000243
2 rs11260596 T -0.000407
3 rs9442373 C 0.000743
4 rs7538773 G 0.000133
0 1 2
0 rs11240779 G 0.000100
1 rs2272757 G -0.000243
2 rs11260596 T -0.000407
3 rs9442373 C 0.000743
4 rs7538773 G 0.000133
PLINK v1.90b7.2 64-bit (11 Dec 2023) www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang GNU General Public License v3
Logging to SampleData1/Fold_0/MTG2/train_data.log.
Options in effect:
--bfile SampleData1/Fold_0/train_data.QC.clumped.pruned
--extract SampleData1/Fold_0/train_data.valid.snp
--out SampleData1/Fold_0/MTG2/train_data
--q-score-range SampleData1/Fold_0/range_list SampleData1/Fold_0/SNP.pvalue
--score SampleData1/Fold_0/train_data.QCMTG_GWAS 1 2 3 header
63761 MB RAM detected; reserving 31880 MB for main workspace.
38646 variants loaded from .bim file.
380 people (183 males, 197 females) loaded from .fam.
380 phenotype values loaded from .fam.
--extract: 38646 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 380 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is exactly 1.
38646 variants and 380 people pass filters and QC.
Phenotype data is quantitative.
--score: 38646 valid predictors loaded.
Warning: 460972 lines skipped in --q-score-range data file.
--score: 20 ranges processed.
Results written to SampleData1/Fold_0/MTG2/train_data.*.profile.
PLINK v1.90b7.2 64-bit (11 Dec 2023) www.cog-genomics.org/plink/1.9/
(C) 2005-2023 Shaun Purcell, Christopher Chang GNU General Public License v3
Logging to SampleData1/Fold_0/MTG2/test_data.log.
Options in effect:
--bfile SampleData1/Fold_0/test_data.clumped.pruned
--extract SampleData1/Fold_0/train_data.valid.snp
--out SampleData1/Fold_0/MTG2/test_data
--q-score-range SampleData1/Fold_0/range_list SampleData1/Fold_0/SNP.pvalue
--score SampleData1/Fold_0/train_data.QCMTG_GWAS 1 2 3 header
63761 MB RAM detected; reserving 31880 MB for main workspace.
38646 variants loaded from .bim file.
95 people (44 males, 51 females) loaded from .fam.
95 phenotype values loaded from .fam.
--extract: 38646 variants remaining.
Using 1 thread (no multithreaded calculations invoked).
Before main variant filters, 95 founders and 0 nonfounders present.
Calculating allele frequencies... done.
Total genotyping rate is exactly 1.
38646 variants and 95 people pass filters and QC.
Phenotype data is quantitative.
--score: 38646 valid predictors loaded.
--score: 20 ranges processed.
Results written to SampleData1/Fold_0/MTG2/test_data.*.profile.
Continous Phenotype!
Warning: 460972 lines skipped in --q-score-range data file.
Repeat the process for each fold.#
Change the foldnumber variable to select the fold:
#foldnumber = sys.argv[1]
foldnumber = "0"  # Setting 'foldnumber' to "0"
Or uncomment the line above and pass the fold number as a command-line argument (a small Python loop that runs all five folds is sketched after these commands):
# foldnumber = sys.argv[1]
python MTG2.py 0
python MTG2.py 1
python MTG2.py 2
python MTG2.py 3
python MTG2.py 4
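If you prefer to launch all folds from Python rather than running the commands one by one, a minimal sketch (assuming MTG2.py accepts the fold number as its first command-line argument, as in the commands above):
import subprocess

# Run the MTG2 PRS pipeline once for each of the five folds.
for fold in range(5):
    subprocess.run(["python", "MTG2.py", str(fold)], check=True)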
The following files should exist after the execution:
SampleData1/Fold_0/MTG2/Results.csv
SampleData1/Fold_1/MTG2/Results.csv
SampleData1/Fold_2/MTG2/Results.csv
SampleData1/Fold_3/MTG2/Results.csv
SampleData1/Fold_4/MTG2/Results.csv
Check the results file for each fold.#
import os
import pandas as pd

result_directory = "MTG2"  # Results for this tool are written to the MTG2 sub-directory of each fold.

# List of file names to check for existence
f = [
    "./"+filedirec+"/Fold_0"+os.sep+result_directory+os.sep+"Results.csv",
    "./"+filedirec+"/Fold_1"+os.sep+result_directory+os.sep+"Results.csv",
    "./"+filedirec+"/Fold_2"+os.sep+result_directory+os.sep+"Results.csv",
    "./"+filedirec+"/Fold_3"+os.sep+result_directory+os.sep+"Results.csv",
    "./"+filedirec+"/Fold_4"+os.sep+result_directory+os.sep+"Results.csv",
]

# Loop through each fold
for loop in range(0, 5):
    # Check if the file exists in the specified directory for the given fold
    if os.path.exists(filedirec+os.sep+"Fold_"+str(loop)+os.sep+result_directory+os.sep+"Results.csv"):
        temp = pd.read_csv(filedirec+os.sep+"Fold_"+str(loop)+os.sep+result_directory+os.sep+"Results.csv")
        # Print a message indicating that the file exists
        print("Fold_", loop, "Yes, the file exists.")
        #print(temp.head())
        print("Number of P-values processed: ", len(temp))
    else:
        # Print a message indicating that the file does not exist
        print("Fold_", loop, "No, the file does not exist.")
Fold_ 0 Yes, the file exists.
Number of P-values processed: 20
Fold_ 1 Yes, the file exists.
Number of P-values processed: 20
Fold_ 2 Yes, the file exists.
Number of P-values processed: 20
Fold_ 3 Yes, the file exists.
Number of P-values processed: 20
Fold_ 4 Yes, the file exists.
Number of P-values processed: 20
Sum the results for each fold.#
print("We have to ensure when we sum the entries across all Folds, the same rows are merged!")
def sum_and_average_columns(data_frames):
    """Sum and average numerical columns across multiple DataFrames, and keep non-numerical columns unchanged."""
    # Initialize DataFrame to store the summed results for numerical columns
    summed_df = pd.DataFrame()
    non_numerical_df = pd.DataFrame()
    for df in data_frames:
        # Identify numerical and non-numerical columns
        numerical_cols = df.select_dtypes(include=[np.number]).columns
        non_numerical_cols = df.select_dtypes(exclude=[np.number]).columns
        # Sum numerical columns
        if summed_df.empty:
            summed_df = pd.DataFrame(0, index=range(len(df)), columns=numerical_cols)
        summed_df[numerical_cols] = summed_df[numerical_cols].add(df[numerical_cols], fill_value=0)
        # Keep non-numerical columns (take the first non-numerical entry for each column)
        if non_numerical_df.empty:
            non_numerical_df = df[non_numerical_cols]
        else:
            non_numerical_df[non_numerical_cols] = non_numerical_df[non_numerical_cols].combine_first(df[non_numerical_cols])
    # Divide the summed values by the number of dataframes to get the average
    averaged_df = summed_df / len(data_frames)
    # Combine numerical and non-numerical DataFrames
    result_df = pd.concat([averaged_df, non_numerical_df], axis=1)
    return result_df
import os
import pandas as pd
from functools import reduce
def find_common_rows(allfoldsframe):
    # Define the performance columns that need to be excluded
    performance_columns = [
        'Train_null_model', 'Train_pure_prs', 'Train_best_model',
        'Test_pure_prs', 'Test_null_model', 'Test_best_model'
    ]
    important_columns = [
        'clump_p1',
        'clump_r2',
        'clump_kb',
        'p_window_size',
        'p_slide_size',
        'p_LD_threshold',
        'pvalue',
        'referencepanel',
        'sblupmodel',
        'effectsizes',
        'h2model',
        'model',
        'numberofpca',
        'tempalpha',
        'l1weight',
    ]

    # Function to remove performance columns from a DataFrame
    def drop_performance_columns(df):
        return df.drop(columns=performance_columns, errors='ignore')

    def get_important_columns(df):
        existing_columns = [col for col in important_columns if col in df.columns]
        if existing_columns:
            return df[existing_columns].copy()
        else:
            return pd.DataFrame()

    # Drop performance columns from all DataFrames in the list
    allfoldsframe_dropped = [drop_performance_columns(df) for df in allfoldsframe]
    # Get the important columns.
    allfoldsframe_dropped = [get_important_columns(df) for df in allfoldsframe_dropped]

    # Iteratively find common rows and track unique and common rows
    common_rows = allfoldsframe_dropped[0]
    for i in range(1, len(allfoldsframe_dropped)):
        # Get the next DataFrame
        next_df = allfoldsframe_dropped[i]
        # Count unique rows in the current DataFrame and the next DataFrame
        unique_in_common = common_rows.shape[0]
        unique_in_next = next_df.shape[0]
        # Find common rows between the current common_rows and the next DataFrame
        common_rows = pd.merge(common_rows, next_df, how='inner')
        # Count the common rows after merging
        common_count = common_rows.shape[0]
        # Print the unique and common row counts
        print(f"Iteration {i}:")
        print(f"Unique rows in current common DataFrame: {unique_in_common}")
        print(f"Unique rows in next DataFrame: {unique_in_next}")
        print(f"Common rows after merge: {common_count}\n")

    # Now that we have the common rows, extract these from the original DataFrames
    extracted_common_rows_frames = []
    for original_df in allfoldsframe:
        # Merge the common rows with the original DataFrame, keeping only the rows that match the common rows
        extracted_common_rows = pd.merge(common_rows, original_df, how='inner', on=common_rows.columns.tolist())
        # Add the DataFrame with the extracted common rows to the list
        extracted_common_rows_frames.append(extracted_common_rows)

    # Print the number of rows in the common DataFrames
    for i, df in enumerate(extracted_common_rows_frames):
        print(f"DataFrame {i + 1} with extracted common rows has {df.shape[0]} rows.")

    # Return the list of DataFrames with extracted common rows
    return extracted_common_rows_frames
# Example usage (assuming allfoldsframe is populated as shown earlier):
allfoldsframe = []

# Loop through each fold and load its results file if it exists
for loop in range(0, 5):
    # Check if the file exists in the specified directory for the given fold
    file_path = os.path.join(filedirec, "Fold_" + str(loop), result_directory, "Results.csv")
    if os.path.exists(file_path):
        allfoldsframe.append(pd.read_csv(file_path))
        # Print a message indicating that the file exists
        print("Fold_", loop, "Yes, the file exists.")
    else:
        # Print a message indicating that the file does not exist
        print("Fold_", loop, "No, the file does not exist.")

# Find the common rows across all folds and return the list of extracted common rows
extracted_common_rows_list = find_common_rows(allfoldsframe)

# Sum the values column-wise.
# String values are not summed; they are the same for each fold.
# Only the numeric values are summed and then averaged.
divided_result = sum_and_average_columns(extracted_common_rows_list)

print(divided_result)
We have to ensure when we sum the entries across all Folds, the same rows are merged!
Fold_ 0 Yes, the file exists.
Fold_ 1 Yes, the file exists.
Fold_ 2 Yes, the file exists.
Fold_ 3 Yes, the file exists.
Fold_ 4 Yes, the file exists.
Iteration 1:
Unique rows in current common DataFrame: 20
Unique rows in next DataFrame: 20
Common rows after merge: 20
Iteration 2:
Unique rows in current common DataFrame: 20
Unique rows in next DataFrame: 20
Common rows after merge: 20
Iteration 3:
Unique rows in current common DataFrame: 20
Unique rows in next DataFrame: 20
Common rows after merge: 20
Iteration 4:
Unique rows in current common DataFrame: 20
Unique rows in next DataFrame: 20
Common rows after merge: 20
DataFrame 1 with extracted common rows has 20 rows.
DataFrame 2 with extracted common rows has 20 rows.
DataFrame 3 with extracted common rows has 20 rows.
DataFrame 4 with extracted common rows has 20 rows.
DataFrame 5 with extracted common rows has 20 rows.
clump_p1 clump_r2 clump_kb p_window_size p_slide_size p_LD_threshold \
0 1.0 0.1 200.0 200.0 50.0 0.25
1 1.0 0.1 200.0 200.0 50.0 0.25
2 1.0 0.1 200.0 200.0 50.0 0.25
3 1.0 0.1 200.0 200.0 50.0 0.25
4 1.0 0.1 200.0 200.0 50.0 0.25
5 1.0 0.1 200.0 200.0 50.0 0.25
6 1.0 0.1 200.0 200.0 50.0 0.25
7 1.0 0.1 200.0 200.0 50.0 0.25
8 1.0 0.1 200.0 200.0 50.0 0.25
9 1.0 0.1 200.0 200.0 50.0 0.25
10 1.0 0.1 200.0 200.0 50.0 0.25
11 1.0 0.1 200.0 200.0 50.0 0.25
12 1.0 0.1 200.0 200.0 50.0 0.25
13 1.0 0.1 200.0 200.0 50.0 0.25
14 1.0 0.1 200.0 200.0 50.0 0.25
15 1.0 0.1 200.0 200.0 50.0 0.25
16 1.0 0.1 200.0 200.0 50.0 0.25
17 1.0 0.1 200.0 200.0 50.0 0.25
18 1.0 0.1 200.0 200.0 50.0 0.25
19 1.0 0.1 200.0 200.0 50.0 0.25
pvalue numberofpca tempalpha l1weight numberofvariants \
0 1.000000e-10 6.0 0.1 0.1 119360.8
1 3.359818e-10 6.0 0.1 0.1 119360.8
2 1.128838e-09 6.0 0.1 0.1 119360.8
3 3.792690e-09 6.0 0.1 0.1 119360.8
4 1.274275e-08 6.0 0.1 0.1 119360.8
5 4.281332e-08 6.0 0.1 0.1 119360.8
6 1.438450e-07 6.0 0.1 0.1 119360.8
7 4.832930e-07 6.0 0.1 0.1 119360.8
8 1.623777e-06 6.0 0.1 0.1 119360.8
9 5.455595e-06 6.0 0.1 0.1 119360.8
10 1.832981e-05 6.0 0.1 0.1 119360.8
11 6.158482e-05 6.0 0.1 0.1 119360.8
12 2.069138e-04 6.0 0.1 0.1 119360.8
13 6.951928e-04 6.0 0.1 0.1 119360.8
14 2.335721e-03 6.0 0.1 0.1 119360.8
15 7.847600e-03 6.0 0.1 0.1 119360.8
16 2.636651e-02 6.0 0.1 0.1 119360.8
17 8.858668e-02 6.0 0.1 0.1 119360.8
18 2.976351e-01 6.0 0.1 0.1 119360.8
19 1.000000e+00 6.0 0.1 0.1 119360.8
Train_pure_prs Train_null_model Train_best_model Test_pure_prs \
0 0.000005 0.233042 0.615159 5.278901e-07
1 0.000005 0.233042 0.649624 5.881880e-07
2 0.000005 0.233042 0.689173 8.325585e-07
3 0.000005 0.233042 0.723270 5.760258e-07
4 0.000005 0.233042 0.756249 4.012022e-07
5 0.000005 0.233042 0.794610 3.769538e-07
6 0.000005 0.233042 0.822460 4.669503e-07
7 0.000004 0.233042 0.848809 3.254035e-07
8 0.000004 0.233042 0.872329 1.745692e-07
9 0.000004 0.233042 0.897692 1.378133e-07
10 0.000004 0.233042 0.915485 1.570614e-07
11 0.000004 0.233042 0.932714 2.162367e-07
12 0.000004 0.233042 0.949348 1.252390e-07
13 0.000004 0.233042 0.964592 1.365863e-07
14 0.000004 0.233042 0.974871 1.337352e-07
15 0.000004 0.233042 0.982870 1.101902e-07
16 0.000004 0.233042 0.988205 -2.707081e-09
17 0.000004 0.233042 0.992633 4.757402e-08
18 0.000004 0.233042 0.995338 9.350707e-09
19 0.000004 0.233042 0.997570 -4.664821e-08
Test_null_model Test_best_model sblupmodel
0 0.14081 -0.082335 a
1 0.14081 -0.050111 a
2 0.14081 -0.044346 a
3 0.14081 -0.045663 a
4 0.14081 -0.010222 a
5 0.14081 0.019770 a
6 0.14081 0.048416 a
7 0.14081 0.036113 a
8 0.14081 0.020400 a
9 0.14081 0.037645 a
10 0.14081 0.075936 a
11 0.14081 0.113128 a
12 0.14081 0.115224 a
13 0.14081 0.130171 a
14 0.14081 0.153894 a
15 0.14081 0.153913 a
16 0.14081 0.129849 a
17 0.14081 0.150262 a
18 0.14081 0.144988 a
19 0.14081 0.134872 a
/tmp/ipykernel_3531592/2573346657.py:24: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
non_numerical_df[non_numerical_cols] = non_numerical_df[non_numerical_cols].combine_first(df[non_numerical_cols])
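The SettingWithCopyWarning above is raised because non_numerical_df starts out as a slice of one of the fold DataFrames. It does not change the averaged result shown above, since the non-numerical values are identical across folds, but one way to silence it is to take an explicit copy; a sketch of how the relevant lines inside sum_and_average_columns could read:
# Keep non-numerical columns; the explicit .copy() makes non_numerical_df an
# independent DataFrame, so the later assignment no longer touches a slice view.
if non_numerical_df.empty:
    non_numerical_df = df[non_numerical_cols].copy()
else:
    non_numerical_df[non_numerical_cols] = non_numerical_df[non_numerical_cols].combine_first(df[non_numerical_cols])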
Results#
1. Reporting Based on Best Training Performance:#
One can report results based on the best performance on the training data: for the hyperparameter combination that achieves the highest training performance, report the corresponding test performance.
Example code:
df = divided_result.sort_values(by='Train_best_model', ascending=False)
print(df.iloc[0].to_markdown())
Binary Phenotypes Result Analysis#
You can find the performance quality for binary phenotypes using the following template:
This figure shows the 8 different scenarios that can exist in the results, and the following table explains each scenario.
We classified performance based on the following table:
| Performance Level | Range |
|---|---|
| Low Performance | 0 to 0.5 |
| Moderate Performance | 0.6 to 0.7 |
| High Performance | 0.8 to 1 |
You can match the performance based on the following scenarios:
| Scenario | What’s Happening | Implication |
|---|---|---|
| High Test, High Train | The model performs well on both training and test datasets, effectively learning the underlying patterns. | The model is well-tuned, generalizes well, and makes accurate predictions on both datasets. |
| High Test, Moderate Train | The model generalizes well but may not be fully optimized on training data, missing some underlying patterns. | The model is fairly robust but may benefit from further tuning or more training to improve its learning. |
| High Test, Low Train | An unusual scenario, potentially indicating data leakage or overestimation of test performance. | The model’s performance is likely unreliable; investigate potential data issues or random noise. |
| Moderate Test, High Train | The model fits the training data well but doesn’t generalize as effectively, capturing only some test patterns. | The model is slightly overfitting; adjustments may be needed to improve generalization on unseen data. |
| Moderate Test, Moderate Train | The model shows balanced but moderate performance on both datasets, capturing some patterns but missing others. | The model is moderately fitting; further improvements could be made in both training and generalization. |
| Moderate Test, Low Train | The model underperforms on training data and doesn’t generalize well, leading to moderate test performance. | The model may need more complexity, additional features, or better training to improve on both datasets. |
| Low Test, High Train | The model overfits the training data, performing poorly on the test set. | The model doesn’t generalize well; simplifying the model or using regularization may help reduce overfitting. |
| Low Test, Low Train | The model performs poorly on both training and test datasets, failing to learn the data patterns effectively. | The model is underfitting; it may need more complexity, additional features, or more data to improve performance. |
Recommendations for Publishing Results#
When publishing results, scenarios with moderate train and moderate test performance can be used for complex phenotypes or diseases. However, results showing high train and moderate test, high train and high test, and moderate train and high test are recommended.
For most phenotypes, results typically fall in the moderate train and moderate test performance category.
Continuous Phenotypes Result Analysis#
You can find the performance quality for continuous phenotypes using the following template:
This figure shows the 8 different scenarios that can exist in the results, and the following table explains each scenario.
We classified performance based on the following table:
| Performance Level | Range |
|---|---|
| Low Performance | 0 to 0.2 |
| Moderate Performance | 0.3 to 0.7 |
| High Performance | 0.8 to 1 |
You can match the performance based on the following scenarios:
| Scenario | What’s Happening | Implication |
|---|---|---|
| High Test, High Train | The model performs well on both training and test datasets, effectively learning the underlying patterns. | The model is well-tuned, generalizes well, and makes accurate predictions on both datasets. |
| High Test, Moderate Train | The model generalizes well but may not be fully optimized on training data, missing some underlying patterns. | The model is fairly robust but may benefit from further tuning or more training to improve its learning. |
| High Test, Low Train | An unusual scenario, potentially indicating data leakage or overestimation of test performance. | The model’s performance is likely unreliable; investigate potential data issues or random noise. |
| Moderate Test, High Train | The model fits the training data well but doesn’t generalize as effectively, capturing only some test patterns. | The model is slightly overfitting; adjustments may be needed to improve generalization on unseen data. |
| Moderate Test, Moderate Train | The model shows balanced but moderate performance on both datasets, capturing some patterns but missing others. | The model is moderately fitting; further improvements could be made in both training and generalization. |
| Moderate Test, Low Train | The model underperforms on training data and doesn’t generalize well, leading to moderate test performance. | The model may need more complexity, additional features, or better training to improve on both datasets. |
| Low Test, High Train | The model overfits the training data, performing poorly on the test set. | The model doesn’t generalize well; simplifying the model or using regularization may help reduce overfitting. |
| Low Test, Low Train | The model performs poorly on both training and test datasets, failing to learn the data patterns effectively. | The model is underfitting; it may need more complexity, additional features, or more data to improve performance. |
Recommendations for Publishing Results#
When publishing results, scenarios with moderate train and moderate test performance can be used for complex phenotypes or diseases. However, results showing high train and moderate test, high train and high test, and moderate train and high test are recommended.
For most continuous phenotypes, results typically fall in the moderate train and moderate test performance category.
2. Reporting Generalized Performance:#
One can also report the generalized performance by calculating the difference between the training and test performance, and the sum of the test and training performance. Report the result or hyperparameter combination for which the sum is high and the difference is minimal.
Example code:
df = divided_result.copy()
df['Difference'] = abs(df['Train_best_model'] - df['Test_best_model'])
df['Sum'] = df['Train_best_model'] + df['Test_best_model']
sorted_df = df.sort_values(by=['Sum', 'Difference'], ascending=[False, True])
print(sorted_df.iloc[0].to_markdown())
3. Reporting Hyperparameters Affecting Test and Train Performance:#
Find the hyperparameters that have more than one unique value and calculate their correlation with the following columns to understand how they affect performance on the training and test sets (a sketch of this check is given after the list below):
Train_null_model
Train_pure_prs
Train_best_model
Test_pure_prs
Test_null_model
Test_best_model
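A minimal sketch of this correlation check, assuming divided_result is the averaged results DataFrame produced above; the full version used in this notebook, with one-hot encoding of string hyperparameters and a heatmap, appears in the code further below:
import pandas as pd

performance_columns = [
    'Train_null_model', 'Train_pure_prs', 'Train_best_model',
    'Test_pure_prs', 'Test_null_model', 'Test_best_model'
]

# Hyperparameters of interest are the numeric columns, outside the performance
# columns, that take more than one unique value across the averaged results.
varying_hyperparams = [
    col for col in divided_result.columns
    if col not in performance_columns
    and pd.api.types.is_numeric_dtype(divided_result[col])
    and divided_result[col].nunique() > 1
]

# Pearson correlation of each varying hyperparameter with each performance column.
correlations = divided_result[varying_hyperparams + performance_columns].corr()
print(correlations.loc[varying_hyperparams, performance_columns])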
4. Other Analysis#
Once you have the results, you can examine how the hyperparameters affect model performance.
Analyses such as checks for overfitting and underfitting can be performed as well.
The way you report the results can vary.
Results can be visualized, and other patterns in the data can be explored.
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib
import matplotlib.pyplot as plt
%matplotlib notebook
df = divided_result.sort_values(by='Train_best_model', ascending=False)
print("1. Reporting Based on Best Training Performance:\n")
print(df.iloc[0].to_markdown())
df = divided_result.copy()
# Plot Train and Test best models against p-values
plt.figure(figsize=(10, 6))
plt.plot(df['pvalue'], df['Train_best_model'], label='Train_best_model', marker='o', color='royalblue')
plt.plot(df['pvalue'], df['Test_best_model'], label='Test_best_model', marker='o', color='darkorange')
# Highlight the p-value where both train and test are high
best_index = df[['Train_best_model']].sum(axis=1).idxmax()
best_pvalue = df.loc[best_index, 'pvalue']
best_train = df.loc[best_index, 'Train_best_model']
best_test = df.loc[best_index, 'Test_best_model']
# Use dark colors for the circles
plt.scatter(best_pvalue, best_train, color='darkred', s=100, label=f'Best Performance (Train)', edgecolor='black', zorder=5)
plt.scatter(best_pvalue, best_test, color='darkblue', s=100, label=f'Best Performance (Test)', edgecolor='black', zorder=5)
# Annotate the best performance with p-value, train, and test values
plt.text(best_pvalue, best_train, f'p={best_pvalue:.4g}\nTrain={best_train:.4g}', ha='right', va='bottom', fontsize=9, color='darkred')
plt.text(best_pvalue, best_test, f'p={best_pvalue:.4g}\nTest={best_test:.4g}', ha='right', va='top', fontsize=9, color='darkblue')
# Calculate Difference and Sum
df['Difference'] = abs(df['Train_best_model'] - df['Test_best_model'])
df['Sum'] = df['Train_best_model'] + df['Test_best_model']
# Sort the DataFrame
sorted_df = df.sort_values(by=['Sum', 'Difference'], ascending=[False, True])
#sorted_df = df.sort_values(by=[ 'Difference','Sum'], ascending=[ True,False])
# Highlight the general performance
general_index = sorted_df.index[0]
general_pvalue = sorted_df.loc[general_index, 'pvalue']
general_train = sorted_df.loc[general_index, 'Train_best_model']
general_test = sorted_df.loc[general_index, 'Test_best_model']
plt.scatter(general_pvalue, general_train, color='darkgreen', s=150, label='General Performance (Train)', edgecolor='black', zorder=6)
plt.scatter(general_pvalue, general_test, color='darkorange', s=150, label='General Performance (Test)', edgecolor='black', zorder=6)
# Annotate the general performance with p-value, train, and test values
plt.text(general_pvalue, general_train, f'p={general_pvalue:.4g}\nTrain={general_train:.4g}', ha='left', va='bottom', fontsize=9, color='darkgreen')
plt.text(general_pvalue, general_test, f'p={general_pvalue:.4g}\nTest={general_test:.4g}', ha='left', va='top', fontsize=9, color='darkorange')
# Add labels and legend
plt.xlabel('p-value')
plt.ylabel('Model Performance')
plt.title('Train vs Test Best Models')
plt.legend()
plt.show()
print("2. Reporting Generalized Performance:\n")
df = divided_result.copy()
df['Difference'] = abs(df['Train_best_model'] - df['Test_best_model'])
df['Sum'] = df['Train_best_model'] + df['Test_best_model']
sorted_df = df.sort_values(by=['Sum', 'Difference'], ascending=[False, True])
print(sorted_df.iloc[0].to_markdown())
print("3. Reporting the correlation of hyperparameters and the performance of 'Train_null_model', 'Train_pure_prs', 'Train_best_model', 'Test_pure_prs', 'Test_null_model', and 'Test_best_model':\n")
print("3. For string hyperparameters, we used one-hot encoding to find the correlation between string hyperparameters and 'Train_null_model', 'Train_pure_prs', 'Train_best_model', 'Test_pure_prs', 'Test_null_model', and 'Test_best_model'.")
print("3. We performed this analysis for those hyperparameters that have more than one unique value.")
correlation_columns = [
'Train_null_model', 'Train_pure_prs', 'Train_best_model',
'Test_pure_prs', 'Test_null_model', 'Test_best_model'
]
hyperparams = [col for col in divided_result.columns if len(divided_result[col].unique()) > 1]
hyperparams = list(set(hyperparams+correlation_columns))
# Separate numeric and string columns
numeric_hyperparams = [col for col in hyperparams if pd.api.types.is_numeric_dtype(divided_result[col])]
string_hyperparams = [col for col in hyperparams if pd.api.types.is_string_dtype(divided_result[col])]
# Encode string columns using one-hot encoding
divided_result_encoded = pd.get_dummies(divided_result, columns=string_hyperparams)
# Combine numeric hyperparams with the new one-hot encoded columns
encoded_columns = [col for col in divided_result_encoded.columns if col.startswith(tuple(string_hyperparams))]
hyperparams = numeric_hyperparams + encoded_columns
# Calculate correlations
correlations = divided_result_encoded[hyperparams].corr()
# Display correlation of hyperparameters with train/test performance columns
hyperparam_correlations = correlations.loc[hyperparams, correlation_columns]
hyperparam_correlations = hyperparam_correlations.fillna(0)
# Plotting the correlation heatmap
plt.figure(figsize=(12, 8))
ax = sns.heatmap(hyperparam_correlations, annot=True, cmap='viridis', fmt='.2f', cbar=True)
ax.set_xticklabels(ax.get_xticklabels(), rotation=90, ha='right')
# Rotate y-axis labels to horizontal
#ax.set_yticklabels(ax.get_yticklabels(), rotation=0, va='center')
plt.title('Correlation of Hyperparameters with Train/Test Performance')
plt.show()
sns.set_theme(style="whitegrid") # Choose your preferred style
pairplot = sns.pairplot(divided_result_encoded[hyperparams],hue = 'Test_best_model', palette='viridis')
# Adjust the figure size
pairplot.fig.set_size_inches(15, 15) # You can adjust the size as needed
for ax in pairplot.axes.flatten():
    ax.set_xlabel(ax.get_xlabel(), rotation=90, ha='right')  # X-axis labels vertical
    #ax.set_ylabel(ax.get_ylabel(), rotation=0, va='bottom')  # Y-axis labels horizontal
# Show the plot
plt.show()
1. Reporting Based on Best Training Performance:
| | 19 |
|:-----------------|:------------------------|
| clump_p1 | 1.0 |
| clump_r2 | 0.1 |
| clump_kb | 200.0 |
| p_window_size | 200.0 |
| p_slide_size | 50.0 |
| p_LD_threshold | 0.25 |
| pvalue | 1.0 |
| numberofpca | 6.0 |
| tempalpha | 0.1 |
| l1weight | 0.1 |
| numberofvariants | 119360.8 |
| Train_pure_prs | 4.174464414630208e-06 |
| Train_null_model | 0.23304238949622894 |
| Train_best_model | 0.9975696809067985 |
| Test_pure_prs | -4.6648211404765056e-08 |
| Test_null_model | 0.14080992136239043 |
| Test_best_model | 0.13487203221110902 |
| sblupmodel | a |
2. Reporting Generalized Performance:
| | 17 |
|:-----------------|:-----------------------|
| clump_p1 | 1.0 |
| clump_r2 | 0.1 |
| clump_kb | 200.0 |
| p_window_size | 200.0 |
| p_slide_size | 50.0 |
| p_LD_threshold | 0.25 |
| pvalue | 0.0885866790410083 |
| numberofpca | 6.0 |
| tempalpha | 0.1 |
| l1weight | 0.1 |
| numberofvariants | 119360.8 |
| Train_pure_prs | 4.290018806796248e-06 |
| Train_null_model | 0.23304238949622894 |
| Train_best_model | 0.9926331956337778 |
| Test_pure_prs | 4.7574015904494625e-08 |
| Test_null_model | 0.14080992136239043 |
| Test_best_model | 0.1502621991374022 |
| sblupmodel | a |
| Difference | 0.8423709964963756 |
| Sum | 1.14289539477118 |
3. Reporting the correlation of hyperparameters and the performance of 'Train_null_model', 'Train_pure_prs', 'Train_best_model', 'Test_pure_prs', 'Test_null_model', and 'Test_best_model':
3. For string hyperparameters, we used one-hot encoding to find the correlation between string hyperparameters and 'Train_null_model', 'Train_pure_prs', 'Train_best_model', 'Test_pure_prs', 'Test_null_model', and 'Test_best_model'.
3. We performed this analysis for those hyperparameters that have more than one unique value.