NIAS server

This project repository contains all of the files needed to build the NIAS server so far, as well as instructions on how to use it.

A schematic describing the server is illustrated below.

[Schematic: NIAS server architecture]

Overview

Project Goal

This project is software developed within a Signal Analysis Research Lab. Its primary purpose is to connect laboratory researchers to a High-Performance Computing (HPC) server, enabling the execution of complex signal processing scripts written in Python that handle volumes of data which would be impractical to process on users' personal computers.

Software Usage

  1. Running the Project Locally:

    • To run this project in a local environment, you only need Docker and Docker Compose installed on your machine.
    • Begin by cloning the project repository to a local directory.
    • Next, open a terminal in this directory and run the command: docker-compose up. This will automatically start the entire architecture, allowing the software to function seamlessly in your local environment (see the quick-start sketch after this list).
  2. The frontend provides access to three essential pages:

    • HomePage: This is the system's initial page, serving as a gateway to the other two crucial pages within the system.
    • Upload: On this page, users can upload files containing the tasks they wish the server to process. The structure of these tasks will be detailed later in this document.
    • Results: This page displays the output generated by the tasks submitted by the researchers.
  3. Uploading Jobs to the Server:

    • The file to be sent to the server as a job must be in the form of a compressed .zip directory.
    • This file's structure should, at a minimum, adhere to the following format:
    project-name
    │
    ├── code.py
    ├── requirements.txt
    └── output
        └── Job Results
    
    • The project-name is used as the unique identifier for the job; hence, each submission must use a distinct name.

    • code.py serves as the entry point for the researchers' data processing algorithm. Additional files, such as supporting classes, may exist, but these must be invoked from code.py.

    • In order to build the requirements.txt, you will have to find out which versions of the Python libraries your machine is using. To do this, run the following commands in a terminal and put exactly the same output in the requirements.txt file (a sample file is shown after this list):

      • For Linux operating systems or online Jupyter notebooks such as Google Colab or Kaggle (prefix the command with "!" if you are using an online Jupyter notebook):
        pip freeze | grep <libraryname>
        
      • For Windows operating systems using PowerShell:
        pip freeze | Select-String <libraryname>
        
    • The output folder is where the script's results must be written. The software will return only the items located within this folder as the job results.

  4. To retrieve Job Results:

    • Accessing the Results page will provide you with all the outcomes generated by the jobs submitted to the server.
    • These results will be organized and associated with their respective jobs based on the project-name.
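
As a reference, a minimal quick-start from a terminal might look like the sketch below. The repository URL matches this project; the job name my-analysis is only an illustrative example:

    # Clone the repository and start the full architecture locally.
    git clone https://github.com/gAlmeidaFerreira/NIAS-server.git
    cd NIAS-server
    docker-compose up

    # Package a job so that the project folder is the top level of the
    # archive, then submit the resulting .zip through the Upload page.
    zip -r my-analysis.zip my-analysis/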
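
For illustration, a requirements.txt built with the commands above is just a list of pinned library versions, one per line. The libraries and version numbers below are hypothetical examples, not the project's actual dependencies:

    numpy==1.24.3
    scipy==1.10.1
    pandas==2.0.2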

Functional description

This software was developed using a microservices architecture, in which each component operates as an independent Docker container, interconnected through a bridge network established and orchestrated by Docker Compose. The system employs a RabbitMQ queue to decouple interactions between users and the processing units on the server.
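
As a rough sketch of how such a composition can be wired, the outline below shows one plausible docker-compose.yml. The service names, build paths, and ports are assumptions for illustration, not the repository's actual configuration:

    # docker-compose.yml (illustrative sketch only)
    version: "3.8"

    services:
      frontend:
        build: ./frontend            # assumed directory layout
        ports:
          - "8080:80"
      backend:
        build: ./backend
        volumes:
          - results:/app/results     # shared volume for job results
      rabbitmq:
        image: rabbitmq:3-management
      producer:
        build: ./producer
        depends_on:
          - rabbitmq
      consumer:
        build: ./consumer
        depends_on:
          - rabbitmq
      processing-unit-1:
        build: ./processing-unit
        volumes:
          - results:/app/results
      processing-unit-2:
        build: ./processing-unit
        volumes:
          - results:/app/results

    volumes:
      results:

    # All services join the default bridge network that Compose creates.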

  1. Frontend

    • Developed using HTML and CSS.
    • Presents three pages to the user: Home, Upload, and Results. Their functions were explained in the previous section.
    • It is responsible for receiving user jobs, forwarding them to the backend, and also for presenting job results.
  2. Backend

    • Developed using Python with the Flask framework.
    • User jobs are submitted to the server via a POST endpoint on the backend, which forwards the compressed file to a message producer.
    • Results are obtained through a GET endpoint, which queries a volume containing the job results generated by the server's processing units (a sketch of these endpoints appears after this list).
  3. Queue

    • Implemented using RabbitMQ software.
    • Two additional components have been developed to integrate the queue into the system: the Producer and the Consumer.
    • The producer receives compressed user jobs from the backend and converts them into messages, sending them to the queue afterwards.
    • The consumer is linked to the server's processing units; as soon as a unit becomes available, the consumer retrieves a message from the queue and sends it for processing via an HTTP request (see the producer/consumer sketch after this list).
  4. Processing Units

    • Developed as APIs using Flask and embedded shell scripts (a sketch appears after this list).
    • There are two distinct processing units, each responsible for a dedicated consumer. These units are capable of handling jobs independently, with a single job processed at a time for each unit.
    • Once the jobs have been executed, the processing units compress the outputs and transfer them to a directory linked to the shared volume. This arrangement allows the backend to gain access to these outputs for user retrieval.
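
A minimal sketch of what the backend's two endpoints might look like is shown below. The route names, file paths, and the forward_to_producer helper are hypothetical, introduced only to illustrate the flow described above:

    # Backend sketch (Flask); names and paths are illustrative assumptions.
    from flask import Flask, request, send_from_directory

    app = Flask(__name__)
    RESULTS_DIR = "/app/results"  # directory backed by the shared volume

    @app.route("/upload", methods=["POST"])
    def upload_job():
        # Receive the compressed .zip job and hand it to the producer.
        job = request.files["job"]
        forward_to_producer(job.read())  # hypothetical helper
        return {"status": "queued"}, 202

    @app.route("/results/<project_name>", methods=["GET"])
    def get_results(project_name):
        # Serve the compressed results written by the processing units.
        return send_from_directory(RESULTS_DIR, f"{project_name}.zip")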
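
The producer and consumer around RabbitMQ could be sketched with the pika client as follows. The queue name, host name, and the processing unit's /run endpoint are assumptions:

    # Queue sketch using pika; all names are illustrative assumptions.
    import pika
    import requests

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
    channel = connection.channel()
    channel.queue_declare(queue="jobs", durable=True)

    def publish_job(zip_bytes):
        # Producer side: turn the compressed job into a persistent message.
        channel.basic_publish(
            exchange="",
            routing_key="jobs",
            body=zip_bytes,
            properties=pika.BasicProperties(delivery_mode=2),
        )

    def on_message(ch, method, properties, body):
        # Consumer side: hand the job to a processing unit over HTTP.
        requests.post("http://processing-unit-1:5000/run", data=body)
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_qos(prefetch_count=1)  # one job at a time per unit
    channel.basic_consume(queue="jobs", on_message_callback=on_message)
    channel.start_consuming()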
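
Inside a processing unit, the Flask API and its embedded shell steps might reduce to something like the following. The endpoint, temporary paths, and the fixed project-name are placeholders; a real implementation would discover the project folder from the uploaded archive:

    # Processing-unit sketch; endpoint and paths are illustrative.
    import subprocess
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/run", methods=["POST"])
    def run_job():
        # Unpack the job, install its pinned dependencies, execute code.py,
        # and compress the output folder into the shared results volume.
        with open("/tmp/job.zip", "wb") as f:
            f.write(request.get_data())
        subprocess.run(["unzip", "-o", "/tmp/job.zip", "-d", "/tmp/job"], check=True)
        subprocess.run(["pip", "install", "-r", "/tmp/job/project-name/requirements.txt"], check=True)
        subprocess.run(["python", "/tmp/job/project-name/code.py"], check=True)
        subprocess.run(["zip", "-r", "/app/results/project-name.zip",
                        "/tmp/job/project-name/output"], check=True)
        return {"status": "done"}, 200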

Next Steps

This is a first version of the server; therefore, there is still plenty of room to improve it and to address potential issues.

  • Improvement of the architecture to incorporate NFS servers for file transfers, so that the message queue carries only messages rather than the files themselves.

  • To enable this system to operate across multiple machines and form a cluster, use container orchestrators such as Kubernetes or Docker Swarm.

  • Implementation of a user registration and login system to associate submitted jobs with the respective researchers who initiated them.
