
System Performance Evaluation (CARS)

Credits: 7.5 (6.0 ECTS)
Dept.: AC

Instructors

Person in charge: (-)
Others: (-)

General goals

Upon finishing this subject, students will have an in-depth understanding of what is required to assess system performance. Furthermore, they will be able to foresee, analyse, and synthesise the behaviour of the execution environment (hardware, operating system, network, virtual machines, application servers, etc.) in order to improve the performance of the applications required in our information society.

Specific goals

Knowledge

  1. Learn the common technical environments in which current computing applications are executed, and related advanced technologies.
  2. Learn about current application servers and the role they play in the Information Society.
  3. Learn about system calls and specific commands for obtaining/modifying system parameters (hardware and operating system, network, virtual machine, application server, etc.).
  4. Learn basic modelling and analytical techniques for evaluating system performance.

Abilities

  1. Understand the system structure (at hardware, software, and all other levels) in which certain applications are executed.
  2. Determine the best tools for studying system performance.
  3. Define the metrics and information needed for evaluating system performance.
  4. Understand the relationship between various metrics and estimate other, unobserved metrics.
  5. Modify system parameters in order to achieve improvements in the system's performance.

Competences

  1. Ability to work effectively in small groups to solve problems of middling difficulty.
  2. Ability to solve poorly-structured problems.
  3. Ability to summarise both formal and informal knowledge in written and oral form in a way that convinces others.
  4. Ability to understand other lines of reasoning.
  5. Ability to work without having all the information available.

Contents

Estimated time (hours):

Column key: T = Theory, P = Problems, L = Laboratory, Alt = Other activities, Ext. L = External laboratory, Stu = Study, A. time = Additional time

1. INTRODUCTION
T      P      L      Alt    Ext. L Stu    A. time Total 
2,0 0 0 0 0 1,0 0 3,0
- Some preliminary concepts.
- Concepts of performance and monitoring.
- Identifying system resources.

2. PERFORMANCE MEASUREMENT TECHNIQUES
T      P      L      Alt    Ext. L Stu    A. time Total 
4,0 0 0 0 0 8,0 0 12,0
Characterise the system in terms of performance issues.
Introduce basic measurement concepts such as profiling and program phases, and the kinds of information to capture (system data, hardware events, or timings).
Study how, and at which stages, to instrument a program (source code, compilation, binary, etc.), building our own performance tools.
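As a minimal sketch of the source-level instrumentation idea above (not part of the course materials; the workload function is purely illustrative), a Python decorator can wrap a function and record its wall-clock time per call, which is the kind of timing data a simple home-made performance tool would capture:

```python
import time

def instrument(fn):
    """Wrap a function to record wall-clock time per call (source-level instrumentation)."""
    samples = []
    def wrapper(*args, **kwargs):
        start = time.perf_counter()        # high-resolution monotonic clock
        result = fn(*args, **kwargs)
        samples.append(time.perf_counter() - start)
        return result
    wrapper.samples = samples              # expose collected measurements
    return wrapper

@instrument
def workload(n):
    # Illustrative CPU-bound workload to be measured.
    return sum(i * i for i in range(n))

workload(100000)
workload(200000)
print(len(workload.samples), "calls timed")
```

The same interception idea carries over to compile-time or binary instrumentation, where the probe is inserted by the toolchain instead of by hand.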

3. PERFORMANCE ANALYSIS TECHNIQUES
T      P      L      Alt    Ext. L Stu    A. time Total 
5,0 0 0 0 0 6,0 0 11,0
- The current situation
- Evaluating these applications
- Characterising load
- Web Services and J2EE
- Monitoring J2EE applications

4. PERFORMANCE EVALUATION
T      P      L      Alt    Ext. L Stu    A. time Total 
3,0 0 0 0 0 6,0 0 9,0
Distributed systems, composed of different machines that interact with one another, present particular performance characteristics. Because the machines interact, overall system performance is driven by inter-machine communications.

- Operational Analysis
- Analysis and performance improvement
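Operational analysis relates directly measurable quantities through simple laws. A minimal worked example (the measurements are illustrative, not taken from any real system) applying the utilization law U = X·S and Little's law N = X·R:

```python
# Operational laws applied to illustrative measurements of a single server.
C = 600          # completed requests observed during the interval
T = 60.0         # length of the observation interval (s)
B = 36.0         # time the server was busy during the interval (s)

X = C / T        # throughput (operational definition): 10 req/s
S = B / C        # mean service demand per request: 0.06 s
U = X * S        # utilization law: U = X * S = 0.6 (server busy 60% of the time)

R = 0.25         # measured mean response time (s), illustrative
N = X * R        # Little's law: mean number of requests in the system = 2.5

print(f"X = {X} req/s, U = {U:.2f}, N = {N}")
```

The appeal of these laws is that every quantity on the right-hand side can be measured directly from counters and timestamps, with no distributional assumptions.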

5. SYSTEM MODELLING
T      P      L      Alt    Ext. L Stu    A. time Total 
2,0 0 0 0 0 6,0 0 8,0
(*) This covers two weeks, depending on the term.

6. PRACTICE 0: INTRODUCTION TO THE ENVIRONMENT
T      P      L      Alt    Ext. L Stu    A. time Total 
0 0 3,0 0 5,0 0 0 8,0
1- Familiarisation with the environment in which practical work takes place.

7. PRACTICE I: SYSTEM MONITORING
T      P      L      Alt    Ext. L Stu    A. time Total 
0 0 12,0 0 8,0 0 0 20,0
2- Extracting information from the system, tracing and parameterisation.
3- Construction of an information extraction model.
4- Preparing a suitable test for a given hardware resource.
5- Simulation evaluation of a resource and a comparison of simulator and real-world results.

8. PRACTICE II: DISTRIBUTED SYSTEMS PERFORMANCE CONFIGURATION
T      P      L      Alt    Ext. L Stu    A. time Total 
0 0 12,0 0 8,0 0 0 20,0
6- Installing and evaluating the main parameters of JVM and a web server.
7- Deployment of an application (J2EE + WS) and its main indicators.
8- Obtaining the system parameters and correlating them with higher levels.
9- System tuning and configuration changes.

9. PRACTICE III: PRACTICAL APPROACHES TO SYSTEM MODELLING
T      P      L      Alt    Ext. L Stu    A. time Total 
0 0 12,0 0 8,0 0 0 20,0
10- Introduction to QNAP.
11- Studying system performance using QNAP.
12- Solving a practical system case with QNAP.
13- Modelling and solving a Web system with QNAP.

10. BENCHMARKING AND APPLICATION CHARACTERIZATION
T      P      L      Alt    Ext. L Stu    A. time Total 
4,0 0 0 0 0 8,0 0 12,0
The measurement of the performance of a system must be done under realistic conditions, using real workloads or realistic synthetic workloads. Choosing an adequate workload from which to extrapolate the performance of a system under various conditions is the main objective of benchmarking. In order to create synthetic workloads that mimic realistic conditions, the real workloads must be studied and characterised.
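The characterise-then-mimic step above can be sketched as follows; the request trace and its values are fabricated for illustration. The measured trace is reduced to a few statistics (mean inter-arrival time, the empirical size distribution), which then drive a synthetic generator:

```python
import random
import statistics

# Illustrative (fabricated) request trace: (timestamp in s, response bytes).
trace = [(0.0, 512), (0.4, 2048), (1.1, 512), (1.5, 8192), (2.2, 512), (2.9, 2048)]

timestamps = [t for t, _ in trace]
sizes = [s for _, s in trace]
inter_arrivals = [b - a for a, b in zip(timestamps, timestamps[1:])]

mean_iat = statistics.mean(inter_arrivals)   # characterisation: mean inter-arrival time
mean_size = statistics.mean(sizes)           # characterisation: mean response size

def synthetic_requests(n, rng=random.Random(42)):
    """Generate n synthetic requests mimicking the measured characteristics:
    exponential inter-arrivals with the measured mean, sizes drawn from the
    empirical distribution."""
    t = 0.0
    for _ in range(n):
        t += rng.expovariate(1.0 / mean_iat)
        yield t, rng.choice(sizes)
```

A real characterisation would of course fit distributions and check for burstiness and phases rather than keeping only two statistics, but the structure (measure, summarise, regenerate) is the same.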

11. SIMULATION
T      P      L      Alt    Ext. L Stu    A. time Total 
4,0 0 0 0 0 7,0 0 11,0
Introduce simulation as a measurement tool.
Use virtual machines as a way to evaluate a larger system, as well as the behaviour of future platforms.
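To illustrate simulation as a measurement tool, here is a minimal event-driven simulator of an M/M/1 queue (a sketch, not part of the course materials; parameters are illustrative). Its estimate of the mean response time can be checked against the analytical result R = 1/(mu - lambda):

```python
import random

def simulate_mm1(lam, mu, n_jobs, seed=1):
    """Simulate an M/M/1 queue (Poisson arrivals at rate lam, exponential
    service at rate mu, one FIFO server); return the mean response time."""
    rng = random.Random(seed)
    arrival = 0.0        # arrival time of the current job
    server_free = 0.0    # time at which the server next becomes idle
    total_response = 0.0
    for _ in range(n_jobs):
        arrival += rng.expovariate(lam)          # next arrival
        start = max(arrival, server_free)        # wait if the server is busy
        server_free = start + rng.expovariate(mu)  # service completion
        total_response += server_free - arrival  # response = wait + service
    return total_response / n_jobs

sim = simulate_mm1(lam=0.5, mu=1.0, n_jobs=200000)
analytic = 1.0 / (1.0 - 0.5)   # R = 1/(mu - lam) = 2.0 at 50% utilization
print(sim, analytic)
```

Validating a simulator against a case with a known analytical solution, as done here, is standard practice before trusting it on configurations that have no closed form.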


Total per kind T      P      L      Alt    Ext. L Stu    A. time Total 
24,0 0 39,0 0 29,0 42,0 0 134,0
Evaluation additional hours 6,0
Total work hours for student 140,0

Teaching Methodology

The course is based on students' practical work, which is carried out in the lab sessions and builds on theoretical foundations.

The theoretical foundations of the course will be delivered in the form of lectures given by the teacher.

Students will actively participate in this part of the course by following up the bibliographic references and documentation indicated by the teacher.

The practical part of the course will take place in the teaching labs and will be based on tutored practical assignments. All practical assignments will be carried out in the lab hours reserved for this purpose.

Practical assignments will initially consist of experiments the scope of which will be limited to achieving the pre-set objectives. Students will subsequently apply all their acquired knowledge to optimize the solution.

Students must write a report on each of the practical assignments, which will follow the methodology set out above. Use will also be made of virtual learning.

Evaluation Methodology

The course evaluation will be based on the following four items:

a. Final Theory Exam (EFT)
b. Continuous assessment of Theory (AcT), based on the average of the various controls held during the course.
c. Lab assessment (NL), comprising the average of the grades of the lab practical assignments (NotaPr).
d. Participation grade (NoP) (final assignment). At the end of the course, each student or team will present their work as a poster, paper, or oral presentation, based on a specific architecture, platform, or tool, to show the expertise acquired in the lab.

NONE of the tests is obligatory. Students will only be awarded a "Not Presented" final grade if they fail to attend all of the tests. The final grade is calculated using the following formulae:

AcT = 1/2 * (Control P1) + 1/2 * (Control P2)
NT = MAX(AcT, EFT)
NL = 1/num. pract. * [(NotaPr 1) + ... + (NotaPr n)]
NoP = Participation Grade (20%)

NF = 0.40*NL + 0.40*NT + 0.20*NoP
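The formulae above can be combined into a single worked example (all marks are illustrative):

```python
def final_grade(control_p1, control_p2, eft, lab_marks, nop):
    """Compute the final grade NF from the published formulae."""
    act = 0.5 * control_p1 + 0.5 * control_p2   # AcT: continuous assessment
    nt = max(act, eft)                          # NT: best of AcT and final exam
    nl = sum(lab_marks) / len(lab_marks)        # NL: average of lab grades
    return 0.40 * nl + 0.40 * nt + 0.20 * nop   # NF

# Illustrative marks: controls 6.0 and 8.0, final exam 6.5,
# three lab assignments, participation 7.5.
print(final_grade(6.0, 8.0, 6.5, [7.0, 8.0, 9.0], 7.5))
```

Note how MAX(AcT, EFT) means a strong continuous-assessment record can fully replace the final theory exam mark, consistent with none of the tests being obligatory.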

Basic Bibliography

  • Course lecturers. Documentació de l'assignatura CARS (course documentation). FIB, 2005.
  • Xavier Molero, Carlos Juiz, Miguel Jesus Rodeño. Evaluación y modelado del rendimiento de los sistemas informáticos. Prentice Hall, 2004.

Complementary Bibliography

  • Samuel Kounev, Alejandro Buchmann. Performance Modeling and Evaluation of Large-Scale J2EE Applications. Proceedings of the 29th International Conference of the CMG, 2003.
  • Daniel A. Menascé, Virgilio A. F. Almeida. Capacity Planning for Web Services: Metrics, Models, and Methods. Prentice Hall, 2002.
  • Gian-Paolo D. Musumeci, Mike Loukides. System Performance Tuning. O'Reilly, 2002.
  • Stacy Joines, Ruth Willenborg, Ken Hygh. Performance Analysis for Java Web Sites. Addison-Wesley, 2003.

Web links

(no information available)

Previous capacities

- Understanding what an OS is and what its main functions are.
- Acquaintance with the internal workings of the OS and with the transport and network layers used by applications.
- Understanding the basic elements of a computer's architecture in order to measure performance and/or extract parameters for system evaluation purposes.
- Understanding Java and its associated technologies.
- Understanding the fundamental concepts regarding distributed applications.
- Acquaintance with the basic communication protocols for distributed applications.


© Barcelona School of Informatics (FIB)