general:bioseqanalysis:shotgunassembly – revision 2021/10/19 21:48 (current) – ingo
<figure KMER2>
{{ :
<
Meanwhile, a plethora of different WGS assemblers exists, and it is hard to decide a priori which assembler performs best for a given genome and WGS data set. However, determining how good an assembly is can be very difficult, and there is even a competition – the Assemblathon – that aims to benchmark current state-of-the-art methods in genome assembly (Earl, et al. 2011; Bradnam, et al. 2013). Still, the question remains to what extent the insights from these benchmarks can be generalized to any particular assembly problem. Given the complexity of the assembly problem, it is easily conceivable that an algorithm that performs sub-optimally on all of the benchmark data sets happens to be superior for your particular assembly problem. Separate benchmarks are therefore generated for particular subsets of genomes (e.g. Abbas, et al. 2014). As an alternative,
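Assembly quality, as compared in benchmarks like the Assemblathon, is often summarized with contiguity statistics. The text above does not name a specific metric, so as an illustrative assumption, here is a minimal sketch of the widely used N50 statistic:

```python
def n50(contig_lengths):
    """Return the N50: the length L such that contigs of length >= L
    together cover at least half of the total assembly size."""
    lengths = sorted(contig_lengths, reverse=True)
    half_total = sum(lengths) / 2
    running = 0
    for length in lengths:
        running += length
        if running >= half_total:
            return length
    return 0  # empty input

# Hypothetical contig set (total 290 bp): 80 + 70 = 150 >= 145
print(n50([80, 70, 50, 40, 30, 20]))  # -> 70
```

Note that N50 rewards contiguity only, not correctness: a misassembled genome can have an excellent N50, which is one reason benchmarks combine several metrics.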
In our exercises on de novo whole genome shotgun assembly, we will concentrate on SPAdes (Bankevich, et al. 2012). SPAdes constructs multi-sized de Bruijn graphs with different values for //

===== Task list =====
<WRAP tabs>
  * [[ecoevo_molevoll:
</WRAP>