LAS System

The Laboratory Assistant Suite (LAS) platform assists researchers in different laboratory activities. Its modular architecture allows managing different kinds of raw data (e.g., biological, molecular) and tracking experimental data. Each LAS module is tailored to handle specific activities or data types, but it is plugged into a broader, uniform framework, thus allowing effortless integration with the other elements of the system. In addition, the data models and procedures integrated in the platform comply, as far as possible, with best practices and standards widely adopted by the research community. User interfaces are designed to be practical in hostile environments, in which researchers must minimize their interactions with the system during data entry (e.g., under sterile conditions). Furthermore, the platform supports the integration of different resources and aids in performing a variety of analyses to extract knowledge related to tumors. The LAS platform is the result of a joint effort by IT and biomedical researchers of the Candiolo Cancer Institute.

Since the laboratory-related procedures can be categorized into different layers according to data complexity and purpose, the LAS architecture has been modeled following the same rationale. Thus, it has been extensively based on a three-tier design pattern, both at the system-wide and the software module levels. This is a well-established architectural paradigm in software engineering, which targets flexibility and reusability by breaking up an application into tiers. Each tier addresses a specific issue and interacts with the other tiers by means of well-defined interfaces. We modeled the platform in the following tiers: (i) operative, (ii) integration, and (iii) analysis. In addition, a cross-tier software component regulates accesses to the system and enforces user privilege control for all LAS services.
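The tier ordering described above can be sketched as follows. This is a minimal illustration, not the actual LAS code base: tier and module names are invented, and the only rule encoded is that upper tiers may request data from lower tiers, not vice versa.

```python
# Minimal sketch of the three-tier layout; module names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Tier:
    name: str
    modules: list = field(default_factory=list)

# The three tiers and some example modules each one might host.
operative = Tier("operative", ["biobanking", "sequencing"])
integration = Tier("integration", ["query-builder"])
analysis = Tier("analysis", ["workflow-designer"])

def may_request(requester: Tier, provider: Tier) -> bool:
    """Upper tiers may request data from lower tiers, not vice versa."""
    order = ["operative", "integration", "analysis"]
    return order.index(requester.name) >= order.index(provider.name)
```

For instance, `may_request(analysis, operative)` holds, while the reverse direction is rejected, mirroring the flow of requests from upper to lower tiers.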

In the overall LAS architecture, each tier includes a set of fully-fledged applications, or modules. While the lower tier is mainly concerned with the collection of experimental data, the modules and data managed by the upper tiers are characterized by an increasing level of abstraction. Lower tiers serve requests generated by the upper tiers and provide the data needed to carry out complex tasks (e.g., data integration and/or analysis).

The operative tier is responsible for collecting, storing, and tracking raw experimental data. These include data from several sources, such as tissue collection and biobanking, molecular experiments (e.g., sequencing, microarray), and the management of in vivo and in vitro experiments (e.g., xenografts, cell lines), each handled by a specific software module. Modules in this tier are meant to work in close interaction with the researchers in a laboratory setting. Thus, graphical user interfaces (GUIs) are explicitly tailored to ease data entry operations and assist the researchers throughout their experiments. The interaction is designed to be especially lean with the aid of special input devices, such as touch-screen notepads and barcode readers.
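Barcode-driven data entry of the kind described above can be sketched as a simple lookup: a scanned code resolves directly to an entity record, so the operator never types an identifier by hand. The registry contents and the barcode format below are assumptions for illustration, not actual LAS identifiers.

```python
# Hypothetical registry mapping barcode labels to entity records.
registry = {"ALQ-000123": {"type": "aliquot", "status": "stored"}}

def scan(barcode: str) -> dict:
    """Resolve a scanned barcode to its record; unknown codes are flagged
    for operator review instead of silently creating new entries."""
    if barcode not in registry:
        raise KeyError(f"unknown barcode: {barcode}")
    return registry[barcode]
```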

The integration tier is aimed at integrating different types of raw experimental data by means of complex queries. Ad-hoc identifiers have been adopted throughout the databases, which allow interlinking different biological entities in a unique network. Integrated data can be browsed or visualized as graphs (e.g., genealogy trees). In addition, they can be exploited by the analysis tier and enriched by means of annotations. For instance, a population of samples can be annotated as responsive to a given drug according to statistical analyses or tagged as bearing a genetic alteration based on sequencing data. Moreover, virtual experiments on molecular data can be defined by complex queries and submitted to related operative modules to be managed.
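The identifier-based interlinking described above can be pictured as a directed graph of derivation links, which is also how genealogy trees can be traversed. The following is a hedged sketch under invented entity IDs; the real LAS identifier scheme is not shown here.

```python
# Sketch: ad-hoc identifiers interlinking biological entities in one network.
from collections import defaultdict

class EntityGraph:
    def __init__(self):
        self.edges = defaultdict(set)  # entity ID -> derived entity IDs

    def link(self, parent_id: str, child_id: str):
        """Record a derivation link, e.g. an aliquot derived from a tissue."""
        self.edges[parent_id].add(child_id)

    def descendants(self, entity_id: str) -> set:
        """All entities reachable from entity_id (e.g. a genealogy tree)."""
        seen, stack = set(), [entity_id]
        while stack:
            for nxt in self.edges[stack.pop()]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

g = EntityGraph()
g.link("TISSUE-001", "ALIQUOT-001")   # IDs are invented examples
g.link("ALIQUOT-001", "SEQ-RUN-042")
```

A query such as `g.descendants("TISSUE-001")` then returns every entity derived from that tissue, which is the kind of traversal a genealogy-tree view would perform.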

The analysis tier (currently a prototype) is designed to define workflows for the analysis of integrated data. The main idea is to provide a tool to design complex analyses by means of a graphical representation. The analysis process will ultimately generate annotations and it could optionally export data for visualization with external tools. Finally, predefined analysis flows could be exploited by operative modules to provide analyses on data collected by the user during an experiment execution.
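A workflow of the kind the analysis tier is designed for can be modeled as an ordered chain of steps, each transforming the data and emitting an annotation. The step names and the threshold below are hypothetical, chosen only to illustrate how an analysis flow could produce annotations such as "responsive".

```python
# Sketch of an analysis workflow as a chain of (name, function) steps.
def run_workflow(steps, data):
    """Apply each analysis step in turn, collecting the annotations it emits."""
    annotations = []
    for name, fn in steps:
        data, note = fn(data)
        annotations.append((name, note))
    return data, annotations

# Example flow: drop invalid measurements, then annotate the population.
steps = [
    ("filter", lambda d: ([x for x in d if x > 0], "removed non-positive")),
    ("annotate", lambda d: (d, "responsive" if sum(d) / len(d) > 5 else "non-responsive")),
]
result, notes = run_workflow(steps, [3, -1, 9, 8])
```

In a graphical designer, each step would correspond to a node in the flow, and the resulting annotations would be attached back to the integrated data.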

The access and privilege control system manages all user accesses to the software modules in each tier, according to their profile. The user profile is defined during user registration and can be updated as needed; it lists the LAS modules accessible by the user, together with the set of functionalities he/she is allowed to use in each module. Moreover, some users with special privileges can create groups of users, based on particular needs (e.g., research studies and/or laboratory activities carried out by a specific group of people). This system also provides a finer-grained control over the data by defining and enforcing user and/or group access privileges with a row-level granularity, in order to guarantee different security levels for confidential information.
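Row-level access control of the kind described above can be sketched as a per-row visibility check against the user's identity and group memberships. The field names and the user/group model below are assumptions for illustration, not the actual LAS schema.

```python
# Sketch: row-level privilege check (owner or shared group grants access).
def can_read(row: dict, user: str, groups: set) -> bool:
    """A row is visible if the user owns it or belongs to a group it is shared with."""
    return user == row["owner"] or bool(groups & row["shared_with"])

rows = [
    {"id": 1, "owner": "alice", "shared_with": {"oncology"}},
    {"id": 2, "owner": "bob", "shared_with": set()},
]
# A user in the "oncology" group sees only the rows shared with that group.
visible = [r["id"] for r in rows if can_read(r, "carol", {"oncology"})]
```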

System requirements

To ensure full compatibility, we recommend upgrading your browser to the latest version of Google Chrome.

All major operating systems (Windows, Linux, iOS, Android) are supported.

Basic concepts

To properly manage all the data and experimental procedures, the LAS identifies several types of entities and defines a predefined ...
