====== Working with the command line ======
  
  
  - To start, [[https://github.com/BIONF/digital_competence|click this link to download our exercises from GitHub]]. <WRAP>
<hidden Hint>The easiest way to start the download is to click on the green "Code" button in the top right corner and select "Download ZIP" (Figure {{ref>git}}).
<figure git>
{{:general:computerenvironment:download_digital_competence.png?700|}}
</figure>
</hidden></WRAP>
  - Unpack the downloaded archive with a [[https://en.wikipedia.org/wiki/ZIP_%28file_format%29|ZIP]] file manager of your choice, or with the ''unzip'' command in the terminal.
  - [[general:computerenvironment:openterminal|Open a terminal]] on your system and use ''cd'' to navigate to the //digital_competence// directory that you have just downloaded and extracted.
  - Now, make sure that your ''jupyter'' Anaconda environment is activated in your current terminal session. Then start the Jupyter notebook server with:<WRAP>
<code>jupyter notebook</code></WRAP>
  - This will open a window in your browser in which you can navigate to the ''.ipynb'' files of each exercise. The notebooks contain a set of instructions and some tasks. They also contain code cells in which you should document the commands that solve each task. You can also use the code cells to experiment and find your solution, but we recommend that you **work out the solution to each task in your local terminal**.
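Put together, the setup steps above could look like the following terminal session. This is only a sketch: the download location, the archive name ''digital_competence-main.zip'' (GitHub usually appends the branch name), and the environment name ''jupyter'' are assumptions that may differ on your system.

```shell
# Go to the folder where your browser saved the ZIP archive
# (path and file name are assumptions -- adjust them to your system)
cd ~/Downloads

# Extract the archive with the unzip command
unzip digital_competence-main.zip

# Change into the extracted exercise directory
cd digital_competence-main

# Activate the Anaconda environment that provides Jupyter
# (assumed here to be called "jupyter")
conda activate jupyter

# Start the Jupyter notebook server; a browser window should open
jupyter notebook
```

If ''conda activate'' reports that the environment does not exist, list your environments with ''conda env list'' and use the name shown there.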
==== 3. Using a computer cluster ====
In the previous exercises you have learned to write commands and pipelines in the Bash shell. Now we want to look at how we can scale our analyses up to larger datasets. For such resource-heavy jobs we have a computer cluster available, which is managed by the SLURM workload manager. Please read through the [[https://applbio.biologie.uni-frankfurt.de/teaching/wiki/doku.php?id=general:computerenvironment:slurm|information about SLURM]] and then solve the task below.
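As a preview of what working with SLURM looks like, here is a minimal batch-script sketch. The partition name, resource limits, and the analysis command are placeholders, not the values for our cluster; take the real settings from the SLURM information page linked above.

```shell
#!/bin/bash
# Minimal SLURM batch script sketch -- all values below are placeholders.
#SBATCH --job-name=my_analysis        # name shown in the job queue
#SBATCH --partition=general           # partition/queue name (cluster-specific)
#SBATCH --cpus-per-task=4             # number of CPU cores for the job
#SBATCH --mem=8G                      # memory for the whole job
#SBATCH --time=01:00:00               # wall-clock time limit (HH:MM:SS)
#SBATCH --output=my_analysis.%j.log   # stdout/stderr log, %j = job ID

# Replace this line with your actual analysis command or pipeline
echo "put your analysis command here"
```

You would submit such a script with ''sbatch my_analysis.sbatch'' and check its status in the queue with ''squeue''.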