Define input files for snakemake using glob

Thread starter: Whitehot (Guest)
I am building a snakemake pipeline for some bioinformatics analyses, and I'm a beginner with the tool. The end users will be mainly biologists with little to no IT training, so I'm trying to make it quite user-friendly, in particular by requiring little information in the config file (a previous bioinformatician at the institute had built a more robust pipeline, but it required a lot of configuration and fell into disuse).

One rule I would like to implement is to autodetect which .fastq (raw data) files are present in their dedicated directory, align them all and run some QC steps. In particular, deepTools has a plotFingerprint tool that compares the distribution of data in a control data file to the distribution in the treatment data files. For this, I would also like to be able to autodetect which batches of data files go together.

My file architecture is set up like so: DATA/<FILE TYPE>/<EXP NAME>/<data files>, so for example DATA/FASTQ/CTCF_H3K9ac/ contains:

Code:
CTCF_T1_pos_2.fq.gz
CTCF_T1_pos_3.fq.gz
CTCF_T7_neg_2.fq.gz
CTCF_T7_neg_3.fq.gz
CTCF_T7_pos_2.fq.gz
CTCF_T7_pos_3.fq.gz
H3K9ac_T1_pos_2.fq.gz
H3K9ac_T1_pos_3.fq.gz
H3K9ac_T7_neg_2.fq.gz
H3K9ac_T7_neg_3.fq.gz
H3K9ac_T7_pos_2.fq.gz
H3K9ac_T7_pos_3.fq.gz
Input_T1_pos.fq.gz
Input_T7_neg.fq.gz
Input_T7_pos.fq.gz

For those not familiar with ChIP-seq, each Input file is a control data file for normalisation, and CTCF and H3K9ac are experimental data to be normalised. So one batch of files I would like to process and then send to plotFingerprint would be

Code:
Input_T1_pos.fq.gz
CTCF_T1_pos_2.fq.gz
CTCF_T1_pos_3.fq.gz
H3K9ac_T1_pos_2.fq.gz
H3K9ac_T1_pos_3.fq.gz
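
To make the grouping concrete, here is a standalone plain-Python sketch (hypothetical helper name, not part of the pipeline) of how I think of a batch: every Input file defines a condition string, and the batch is that Input file plus every non-Input fastq sharing the condition:

```python
import re

def batches_from_filenames(fnames):
    """Group fastq file names into plotFingerprint batches, keyed by the
    condition string of each Input file (e.g. "T1_pos")."""
    batches = {}
    for f in fnames:
        m = re.match(r"Input_(.+)\.fq\.gz$", f)
        if m:
            cond = m.group(1)
            # the Input file first, then every treatment file for that condition
            batches[cond] = [f] + [
                g for g in fnames
                if cond in g and not g.startswith("Input")
            ]
    return batches

files = [
    "CTCF_T1_pos_2.fq.gz", "CTCF_T1_pos_3.fq.gz",
    "H3K9ac_T1_pos_2.fq.gz", "H3K9ac_T1_pos_3.fq.gz",
    "Input_T1_pos.fq.gz", "Input_T7_neg.fq.gz",
    "CTCF_T7_neg_2.fq.gz",
]
print(batches_from_filenames(files)["T1_pos"])
# ['Input_T1_pos.fq.gz', 'CTCF_T1_pos_2.fq.gz', 'CTCF_T1_pos_3.fq.gz',
#  'H3K9ac_T1_pos_2.fq.gz', 'H3K9ac_T1_pos_3.fq.gz']
```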

With that in mind, I would need to give my bamFingerprint snakemake rule the paths to the aligned versions of those files, i.e.

Code:
DATA/BAM/CTCF_H3K9ac/Input_T1_pos.bam
DATA/BAM/CTCF_H3K9ac/CTCF_T1_pos_2.bam
DATA/BAM/CTCF_H3K9ac/CTCF_T1_pos_3.bam
DATA/BAM/CTCF_H3K9ac/H3K9ac_T1_pos_2.bam
DATA/BAM/CTCF_H3K9ac/H3K9ac_T1_pos_3.bam

(I would also need each of those files indexed, so all of those again with the .bai suffix for the snakemake input, but that's trivial once I've managed to get all the .bam paths. The snakemake rules I have to get up to that point all work; I've tested them independently.)

There is also a special case where an experiment could be run using paired-end sequencing, so the FASTQ dir would contain exp_fw.fq.gz and exp_rv.fq.gz and would need to be mapped to exp_pe.bam, but that doesn't seem like a massive exception to handle.
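
As a standalone sketch (plain Python, hypothetical function name) of the naming convention I have in mind, including that paired-end special case:

```python
import re

def bam_from_fq(fname):
    """Map a fastq basename to the bam it should align to; the two
    paired-end mates (exp_fw / exp_rv) collapse onto one exp_pe.bam."""
    stem = fname[:-len(".fq.gz")]
    stem = re.sub(r"_(fw|rv)$", "_pe", stem)
    return stem + ".bam"

print(bam_from_fq("exp_fw.fq.gz"))         # exp_pe.bam
print(bam_from_fq("CTCF_T1_pos_2.fq.gz"))  # CTCF_T1_pos_2.bam
```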

Originally I tried using list comprehensions to build the list of input files, like this:

Code:
import glob
import re

def exps_from_inp(ifile):
    # expand an Input file path into every fastq sharing its condition string
    path, fname = ifile.split("Input")
    conds, ftype = fname.split(".", 1)
    return glob.glob(path + "*" + conds + "*." + ftype)


def bam_name_from_fq_name(fqpath, suffix=""):
    # skip files that were already filtered and may sit in the same dir
    if re.search("filtered", fqpath):
        return
    else:
        return fqpath.replace("FASTQ", "BAM").replace(".fq.gz", ".bam") + suffix

rule bamFingerprint:
    input:
        bam=[bam_name_from_fq_name(f) for f in exps_from_inp("DATA/FASTQ/{expdir}/Input_{expconds}.fq.gz")],
        bai=[bam_name_from_fq_name(f, suffix=".bai") for f in exps_from_inp("DATA/FASTQ/{expdir}/Input_{expconds}.fq.gz")]
    ...

Those list comprehensions generated the correct list of files when I tried them in python, using the values that expdir and expconds take when I dry run the pipeline. However, during that dry run, the {input.bam} wildcard in the shell command never gets assigned a value.
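
My suspicion is that at parse time the braces in the pattern are still literal, so the glob matches nothing; substituting the wildcard values by hand is what made it work in plain Python. A minimal sketch of the substitution I seem to need (mock wildcards object, hypothetical function name):

```python
from types import SimpleNamespace

def fingerprint_pattern(wildcards):
    # format the wildcard values into a concrete path *before* globbing,
    # e.g. with an f-string; in the rule this would feed exps_from_inp()
    return f"DATA/FASTQ/{wildcards.expdir}/Input_{wildcards.expconds}.fq.gz"

# mock wildcards object, just to show the substitution
wc = SimpleNamespace(expdir="CTCF_H3K9ac", expconds="T1_pos")
print(fingerprint_pattern(wc))  # DATA/FASTQ/CTCF_H3K9ac/Input_T1_pos.fq.gz
```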

I went digging in the docs and found this page, which implies that snakemake does not handle list comprehensions and that the expand function is its replacement. In my case, the replicate numbers (the _2 and _3 in the file names) are quite variable: they're sometimes just random numbers, some experiments have 2 reps and some have 3, and so on. All of this means that using expand would take a lot of extra work for the rep numbers (finding the experiment names would be fairly easy).

I then tried wrapping the list comprehensions in functions and using those as the input of my rule, but that failed, as did wrapping those functions in one big one and using unpack (although I could be using that wrong; I'm not entirely sure I understood how unpack works).

Code:
def get_fingerprint_bam_inputfiles(wildcards):
    return {"bams": get_fingerprint_bam_bams(wildcards),
            "bais": get_fingerprint_bam_bais(wildcards)}

def get_fingerprint_bam_bams(wildcards):
    return [bam_name_from_fq_name(f) for f in exps_from_inp("DATA/FASTQ/{wildcards.expdir}/Input_{wildcards.expconds}.fq.gz")]

def get_fingerprint_bam_bais(wildcards):
    return [bam_name_from_fq_name(f, suffix=".bai") for f in exps_from_inp("DATA/FASTQ/{wildcards.expdir}/Input_{wildcards.expconds}.fq.gz")]

rule bamFingerprint:
    input:
        bams=get_fingerprint_bam_bams,
        bais=get_fingerprint_bam_bais
    ...

rule bamFingerprint_unpack:
    input:
        unpack(get_fingerprint_bam_inputfiles)
    ...

So now I'm feeling pretty stuck in this approach. How can I autodetect these experiment batches and give the correct bam file paths to my bamFingerprint rule? I'm not even sure which approach I should go for.