Search results for: principal component
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3090

2490 In-situ Acoustic Emission Analysis of a Polymer Electrolyte Membrane Water Electrolyser

Authors: M. Maier, I. Dedigama, J. Majasan, Y. Wu, Q. Meyer, L. Castanheira, G. Hinds, P. R. Shearing, D. J. L. Brett

Abstract:

Increasing the efficiency of electrolyser technology is commonly seen as one of the main challenges on the way to the Hydrogen Economy. There is a significant lack of understanding of the different states of operation of polymer electrolyte membrane water electrolysers (PEMWE) and how these influence the overall efficiency. This concerns in particular the two-phase flow through the membrane, gas diffusion layers (GDL) and flow channels. In order to increase the efficiency of PEMWE and facilitate their spread as a commercial hydrogen production technology, new analytic approaches have to be found. Acoustic emission (AE) offers the possibility to analyse the processes within a PEMWE in a non-destructive, fast and cheap in-situ way. This work describes the generation and analysis of AE data coming from a PEM water electrolyser, for, to the best of our knowledge, the first time in the literature. Different experiments are carried out. Each experiment is designed so that only specific physical processes occur and AE solely related to one process can be measured. Therefore, a range of experimental conditions is used to induce different flow regimes within flow channels and GDL. The resulting AE data is first separated into different events, which are defined by exceeding the noise threshold. Each acoustic event consists of a number of consecutive peaks and ends when the wave diminishes below the noise threshold. For all these acoustic events the following key attributes are extracted: maximum peak amplitude, duration, number of peaks, peaks before the maximum, average intensity of a peak and time until the maximum is reached. Each event is then expressed as a vector containing the normalized values for all criteria. Principal Component Analysis is performed on the resulting data, ordering the criteria by the eigenvalues of their covariance matrix. This can be used as an easy way of determining which criteria convey the most information about the acoustic data.
In the following, the data is ordered in the two- or three-dimensional space formed by the most relevant criteria axes. By finding regions of this space occupied only by acoustic events originating from one of the three experiments, it is possible to relate physical processes to certain acoustic patterns. Due to the complex nature of the AE data, modern machine learning techniques are needed to recognize these patterns in-situ. The AE data produced in this way can be used to train a self-learning algorithm and to develop an analytical tool to diagnose different operational states in a PEMWE. Combining this technique with the measurement of polarization curves and electrochemical impedance spectroscopy allows for in-situ optimization and recognition of suboptimal states of operation.
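The eigenvalue-ordering step described in the abstract can be sketched as follows. The feature matrix here is synthetic; its six columns merely stand in for the normalized event criteria named above (peak amplitude, duration, number of peaks, peaks before the maximum, average peak intensity, time until the maximum):

```python
import numpy as np

# Hypothetical AE event feature matrix: one row per acoustic event,
# one column per normalized criterion (synthetic stand-in data).
rng = np.random.default_rng(0)
events = rng.random((200, 6))

# Centre the data and diagonalize the covariance matrix.
centered = events - events.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)

# Order the components by descending eigenvalue: the first two or three
# axes span the low-dimensional space in which events are compared.
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
scores = centered @ eigvecs[:, :3]   # project events onto the top 3 axes
explained = eigvals / eigvals.sum()  # fraction of variance per axis
```

Events from different experiments can then be plotted in the `scores` space to look for regions occupied by a single physical process.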

Keywords: acoustic emission, gas diffusion layers, in-situ diagnosis, PEM water electrolyser

Procedia PDF Downloads 153
2489 The Forms of Representation in Architectural Design Teaching: The Cases of Politecnico Di Milano and Faculty of Architecture of the University of Porto

Authors: Rafael Sousa Santos, Clara Pimena Do Vale, Barbara Bogoni, Poul Henning Kirkegaard

Abstract:

The representative component, a determining aspect of the architect's training, has been marked by an exponential and unprecedented development. However, the multiplication of possibilities has also multiplied uncertainties about architectural design teaching, and by extension, about the very principles of architectural education. This paper presents the results of research on the following problem: the relation between the forms of representation and the architectural design teaching-learning processes. The research had as its object the educational model of two schools – the Politecnico di Milano (POLIMI) and the Faculty of Architecture of the University of Porto (FAUP) – and was guided by three main objectives: to characterize the educational model followed in both schools, focused on the representative component and its role; to interpret the relation between forms of representation and the architectural design teaching-learning processes; and to consider their possibilities of valorisation. Methodologically, the research was conducted according to a qualitative embedded multiple-case study design. The object – i.e., the educational model – was approached in both the POLIMI and FAUP cases considering its Context and three embedded unities of analysis: the educational Purposes, Principles, and Practices. In order to guide the procedures of data collection and analysis, a Matrix for the Characterization (MCC) was developed. As a methodological tool, the MCC made it possible to relate the three embedded unities of analysis with the three main sources of evidence where the object manifests itself: the professors, expressing how the model is assumed; the architectural design classes, expressing how the model is achieved; and the students, expressing how the model is acquired. The main research methods used were naturalistic and participatory observation, in-person interviews, and documentary and bibliographic review.
The results reveal the importance of the representative component in the educational model of both cases, despite the differences in its role. In POLIMI's model, representation is particularly relevant in the teaching of architectural design, while in FAUP's model, it plays a transversal role – according to an idea of 'general training through hand drawing'. In fact, the difference between the models with respect to representation can be partially understood by the level of importance that each gives to hand drawing. Regarding the teaching of architectural design, the two cases are distinguished by their relation with the representative component: while in POLIMI the forms of representation serve essentially an instrumental purpose, in FAUP they tend to be considered also for their methodological dimension. It seems that the possibilities for valorising these models reside precisely in the relation between forms of representation and architectural design teaching. It is expected that the knowledge base developed in this research may have three main contributions: to contribute to the maintenance of the educational models of POLIMI and FAUP; through the precise description of the methodological procedures, to contribute by transferability to similar studies; and through the critical and objective framework of the problem underlying the forms of representation and its relation with architectural design teaching, to contribute to the broader discussion concerning the contemporary challenges in architectural education.

Keywords: architectural design teaching, architectural education, educational models, forms of representation

Procedia PDF Downloads 116
2488 Preventive Effect of Locoregional Analgesia Techniques on Chronic Post-Surgical Neuropathic Pain: A Prospective Randomized Study

Authors: Beloulou Mohamed Lamine, Bouhouf Attef, Meliani Walid, Sellami Dalila, Lamara Abdelhak

Abstract:

Introduction: Post-surgical chronic pain (PSCP) is a pathological condition with a rather complex etiopathogenesis that extensively involves sensitization processes and neuronal damage. The neuropathic component of these pains is almost always present, with variable expression depending on the type of surgery. Objective: To assess the presumed beneficial effect of Regional Anesthesia-Analgesia Techniques (RAAT) on the development of post-surgical chronic neuropathic pain (PSCNP) in various surgical procedures. Patients and Methods: A comparative study involving 510 patients distributed across five surgical models (mastectomy, thoracotomy, hernioplasty, cholecystectomy, and major abdominal-pelvic surgery) and randomized into two groups: Group A (240) receiving conventional postoperative analgesia and Group B (270) receiving balanced analgesia, including the implementation of a Regional Anesthesia-Analgesia Technique (RAAT). These patients were longitudinally followed over a 6-month period, with post-surgical chronic neuropathic pain (PSCNP) defined by a Neuropathic Pain Score DN2 ≥ 3. Comparative measurements through univariate and multivariate analyses were performed to identify associations between the development of PSCNP and certain predictive factors, including the presumed preventive impact (protective effect) of RAAT. Results: At the 6th month post-surgery, 419 patients were analyzed (Group A = 196, Group B = 223). The incidence of PSCNP was 32.2% (n = 135). Among these patients with chronic pain, the prevalence of neuropathic pain was 37.8% (95% CI: [29.6; 46.5]), with n = 51/135. It was significantly lower in Group B than in Group A, with respective percentages of 31.4% vs. 48.8% (p-value = 0.035). The most significant differences were observed in breast and thoracopulmonary surgeries.
In a multiple regression analysis, two predictors of PSCNP were identified: the presence of preoperative pain at the surgical site as a risk factor (OR: 3.198; 95% CI [1.326; 7.714]) and RAAT as a protective factor (OR: 0.408; 95% CI [0.173; 0.961]). Conclusion: The neuropathic component of PSCNP can be observed in different types of surgeries. Regional analgesia included in a multimodal approach to postoperative pain management has proven to be effective for acute pain and seems to have a preventive impact on the development of PSCNP and its neuropathic nature or component, particularly in surgeries that are more prone to chronicization.

Keywords: chronic postsurgical pain, postsurgical chronic neuropathic pain, regional anesthesia and analgesia techniques (RAAT), neuropathic pain score dn2, preventive impact

Procedia PDF Downloads 8
2487 A Construction Management Tool: Determining a Project Schedule Typical Behaviors Using Cluster Analysis

Authors: Natalia Rudeli, Elisabeth Viles, Adrian Santilli

Abstract:

Delays in the construction industry are a global phenomenon. Many construction projects experience extensive delays exceeding the initially estimated completion time. The main purpose of this study is to identify construction projects' typical behaviors in order to develop a prognosis and management tool. Knowing a construction project's schedule tendency will enable evidence-based decision-making, so that resolutions can be made before delays occur. This study presents an innovative approach that uses the Cluster Analysis Method to support predictions during Earned Value Analysis. A clustering analysis was used to predict the future behavior of schedules and of the principal Earned Value Management (EVM) and Earned Schedule (ES) indexes in construction projects. The analysis was made using a database with 90 different construction projects. It was validated with additional data extracted from the literature and with another 15 contrasting projects. For all projects, planned and executed schedules were collected and the EVM and ES principal indexes were calculated. A complete linkage classification method was used. In this way, the cluster analysis considers that the distance (or similarity) between two clusters must be measured by its most disparate elements, i.e. that the distance is given by the maximum span among its components. Finally, through the use of the EVM and ES indexes and Tukey and Fisher pairwise comparisons, the statistical dissimilarity was verified and four clusters were obtained. It can be said that construction projects show an average delay of 35% of their planned completion time. Furthermore, four typical behaviors were found and, for each of the obtained clusters, the interim milestones and the necessary rhythms of construction were identified.
In general, the detected typical behaviors are: (1) Projects that complete 5% of the work in the first two tenths and maintain a constant rhythm until completion (greater than 10% for each remaining tenth), being able to finish within the initially estimated time. (2) Projects that start with an adequate construction rate but suffer minor delays, culminating in a total delay of almost 27% of the planned time. (3) Projects that start with a performance below the planned rate and end up with an average delay of 64%, and (4) projects that begin with a poor performance, suffer great delays and end up with an average delay of 120% of the planned completion time. The obtained clusters compose a tool to identify the behavior of new construction projects by comparing their current work performance to the validated database, thus allowing the correction of initial estimations towards more accurate completion schedules.
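The complete-linkage grouping described above can be sketched as follows. The project database here is synthetic (each row is a cumulative work profile sampled at tenths of the planned duration, invented for illustration); only the method — farthest-neighbour linkage cut into four clusters — follows the abstract:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Illustrative stand-in for the 90-project database: each row is one
# project's cumulative fraction of work executed at each tenth of the
# planned duration, normalised so the final value is 1 (100%).
rng = np.random.default_rng(1)
profiles = np.cumsum(rng.random((90, 10)), axis=1)
profiles /= profiles[:, -1:]

# Complete (farthest-neighbour) linkage: the distance between two
# clusters is the maximum distance between any pair of their members,
# i.e. the "most disparate elements" rule from the abstract.
Z = linkage(profiles, method="complete", metric="euclidean")

# Cut the dendrogram into four clusters of typical schedule behaviours.
labels = fcluster(Z, t=4, criterion="maxclust")
```

A new project's profile could then be compared against the centroid of each cluster to estimate which typical behaviour it is following.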

Keywords: cluster analysis, construction management, earned value, schedule

Procedia PDF Downloads 257
2486 Cooperative Learning Promotes Successful Learning. A Qualitative Study to Analyze Factors that Promote Interaction and Cooperation among Students in Blended Learning Environments

Authors: Pia Kastl

Abstract:

The potential of blended learning lies in the flexibility of learning and the possibility of getting in touch with lecturers and fellow students on site. By combining face-to-face sessions with digital self-learning units, the learning process can be optimized and learning success increased. To examine whether blended learning outperforms online and face-to-face teaching, a theory-based questionnaire survey was conducted. The results show that interaction and cooperation among students is poorly provided in blended learning, and face-to-face teaching performs better in this respect. The aim of this article is to identify concrete suggestions students have for improving cooperation and interaction in blended learning courses. For this purpose, interviews were conducted with students from various academic disciplines in face-to-face, online, or blended learning courses (N = 60). The questions referred to opinions and suggestions for improvement regarding the course design of the respective learning environment. The analysis was carried out by qualitative content analysis. The results show that students perceive the interaction as beneficial to their learning. They verbalize their knowledge and are exposed to different perspectives. In addition, emotional support is particularly important in exam phases. Interaction and cooperation were primarily enabled in the face-to-face component of the courses studied, while there was very limited contact with fellow students in the asynchronous component. The forums offered were hardly used, or not used at all, because the barrier to asking a question publicly is too high, and students prefer private channels for communication. This is accompanied by the disadvantage that interaction occurs only among people who already know each other. Creating contacts is not fostered in the blended learning courses.
Students consider optimization possibilities as a task of the lecturers in the face-to-face sessions: Here, interaction and cooperation should be encouraged through get-to-know-you rounds or group work. It is important here to group the participants randomly to establish contact with new people. In addition, sufficient time for interaction is desired in the lecture, e.g., in the context of discussions or partner work. In the digital component, students prefer synchronous exchange at a fixed time, for example, in breakout rooms or an MS Teams channel. The results provide an overview of how interaction and cooperation can be implemented in blended learning courses. Positive design possibilities are partly dependent on subject area and course. Future studies could tie in here with a course-specific analysis.

Keywords: blended learning, higher education, hybrid teaching, qualitative research, student learning

Procedia PDF Downloads 68
2485 Challenges of School Leadership

Authors: Stefan Ninković

Abstract:

The main purpose of this paper is to examine the different theoretical approaches and relevant empirical evidence and thus recognize some of the most pressing challenges faced by school leaders. This paper starts from the fact that the new mission of the school is characterized by the need for stronger coordination among students' academic, social and emotional learning. In this sense, school leaders need to focus their commitment, vision and leadership on the issues of students' attitudes, language, cultural and social background, and sexual orientation. More specifically, they should know what good teaching is for at-risk students, students whose first language is not dominant in school, those whose learning styles are not in accordance with usual teaching styles, or those who are stigmatized. There is a rather wide consensus around the fact that the traditionally popular concept of instructional leadership of the school principal is no longer sufficient. However, in a number of "pro-leadership" circles, including certain groups of academic researchers, consultants and practitioners, there is an established tendency of attributing to the school principal an extraordinary influence on school achievement. On the other hand, the situation in which all employees in the school are leaders is a utopia par excellence. Although leadership obviously can be efficiently distributed across the school, there are few findings that speak about the sources of this distribution and the factors making it sustainable. Another idea that is not particularly new, but has only recently gained in importance, is that the collective capacity of the school is an important resource that often remains under-cultivated. To understand the nature and power of collaborative school cultures, it is necessary to know that these cultures operate by making all of their members' tacit knowledge explicit.
In this sense, the question is how leaders in schools can shape collaborative culture and create social capital in the school. The pressure exerted on schools to systematically collect and use data has been accompanied by the need for school leaders to develop new competencies. The role of school leaders is critical in the process of assessing what data are needed and for what purpose. Different types of data are important: test results, data on student absenteeism, satisfaction with school, teacher motivation, etc. One of the most important tasks of school leaders is data-driven decision making, as well as ensuring transparency of the decision-making process. Finally, the question arises whether the existing models of school leadership are compatible with current social and economic trends. It is necessary to examine whether and under what conditions schools are in need of forms of leadership that are different from those that currently prevail. Closely related to this issue is the analysis of the adequacy of different approaches to leadership development in the school.

Keywords: educational changes, leaders, leadership, school

Procedia PDF Downloads 333
2484 Magnesium Ameliorates Lipopolysaccharide-Induced Liver Injury in Mice

Authors: D. M. El-Tanbouly, R. M. Abdelsalam, A. S. Attia, M. T. Abdel-Aziz

Abstract:

Lipopolysaccharide (LPS) endotoxin, a component of the outer membrane of Gram-negative bacteria, is involved in the pathogenesis of sepsis. LPS administration induces systemic inflammation that mimics many of the initial clinical features of sepsis and has deleterious effects on several organs, including the liver, eventually leading to septic shock and death. The present study aimed to investigate the protective effect of magnesium, a well-known cofactor in many enzymatic reactions and a critical component of the antioxidant system, on hepatic damage associated with LPS-induced endotoxemia in mice. Mg (20 and 40 mg/kg, po) was administered for 7 consecutive days. Systemic inflammation was induced one hour after the last dose of Mg by a single dose of LPS (2 mg/kg, ip) and three hours thereafter plasma was separated, animals were sacrificed and their livers were isolated. LPS-treated mice suffered from hepatic dysfunction revealed by histological observation, elevation in plasma transaminases activities, C-reactive protein content and caspase-3, a critical marker of apoptosis. Liver inflammation was evident by elevation in liver cytokines contents (TNF-α and IL-10) and myeloperoxidase (MPO) activity. Additionally, oxidative stress was manifested by increased liver lipoperoxidation, glutathione depletion, elevated total nitrate/nitrite (NOx) content and glutathione peroxidase (GPx) activity. Pretreatment with Mg largely mitigated these alterations through its anti-inflammatory and antioxidant potentials. Mg, therefore, could be regarded as an effective strategy for prevention of liver damage associated with septicemia.

Keywords: LPS, liver damage, magnesium, septicemia

Procedia PDF Downloads 395
2483 Modelling and Control of Binary Distillation Column

Authors: Narava Manose

Abstract:

Distillation is a very old separation technology for separating liquid mixtures that can be traced back to the chemists of Alexandria in the first century A.D. Today distillation is the most important industrial separation technology. By the eleventh century, distillation was being used in Italy to produce alcoholic beverages. At that time, distillation was probably a batch process based on the use of just a single stage, the boiler. The word distillation is derived from the Latin word destillare, which means dripping or trickling down. By at least the sixteenth century, it was known that the extent of separation could be improved by providing multiple vapor-liquid contacts (stages) in a so-called Rectifactorium. The term rectification is derived from the Latin words rectefacere, meaning to improve. Modern distillation derives its ability to produce almost pure products from the use of multi-stage contacting. Throughout the twentieth century, multistage distillation was by far the most widely used industrial method for separating liquid mixtures of chemical components. The basic principle behind this technique relies on the different boiling temperatures of the various components of the mixture, allowing the separation between the vapor of the most volatile component and the liquid of the other component(s). 
• A simple non-linear model of a binary distillation column was developed using the Skogestad equations in Simulink. 
• The steady-state operating point around which to base the analysis and controller design was computed. However, the model contains two integrators because the condenser and reboiler levels are not controlled. One particular way of stabilizing the column is the LV-configuration, where D is used to control M_D and B to control M_B; such a model is given in cola_lv.m, where two P-controllers with gains equal to 10 are used.
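The level-stabilization idea behind the LV-configuration can be illustrated with a minimal sketch. All numerical values here (flows, holdups, feed rate) are assumptions chosen for illustration, not values from cola_lv.m; only the structure — two integrating levels, each closed by a P-controller with gain 10 — follows the abstract:

```python
# Minimal LV-configuration sketch: the condenser holdup M_D and reboiler
# holdup M_B are pure integrators; distillate flow D regulates M_D and
# bottoms flow B regulates M_B via P-controllers with gain Kc = 10.
Kc = 10.0
MD_set, MB_set = 0.5, 0.5     # holdup setpoints (assumed)
L, V, F = 2.7, 3.2, 1.0       # reflux, boilup, liquid feed (assumed)
D0 = V - L                    # nominal distillate flow at steady state
B0 = L + F - V                # nominal bottoms flow at steady state
MD, MB = 0.7, 0.3             # perturbed initial holdups

dt = 0.01
for _ in range(10_000):       # forward-Euler integration of the levels
    D = D0 + Kc * (MD - MD_set)   # P-controller: D regulates M_D
    B = B0 + Kc * (MB - MB_set)   # P-controller: B regulates M_B
    MD += dt * (V - L - D)        # condenser mass balance
    MB += dt * (L + F - V - B)    # reboiler mass balance
```

Closing both loops turns each integrator into a stable first-order system, so the holdups return to their setpoints regardless of the column's composition dynamics.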

Keywords: modelling, distillation column, control, binary distillation

Procedia PDF Downloads 272
2482 Microstructural Characterization of Creep Damage Evolution in Welded Inconel 600 Superalloy

Authors: Lourdes Yareth Herrera-Chavez, Alberto Ruiz, Victor H. Lopez

Abstract:

Superalloys are used in components that operate at high temperatures, such as pressure vessels and heat exchanger tubing. Design standards for these components must consider creep resistance among other criteria. Fusion welding processes are commonly used in the industry to join such components. Fusion processes commonly generate three distinctive zones, i.e., the heat affected zone (HAZ), the weld metal (WM) and the base metal (BM). In nickel-based superalloys, the microstructure developed during fusion welding dictates the mechanical response of the welded component, and it is very important to establish these effects in the mechanical response of the component. In this work, two plates of Inconel 600 superalloy were Gas Metal Arc Welded (GMAW). Creep samples were cut and milled to specifications and creep tested at 650 °C under stress levels of 350, 300, 275, 250 and 200 MPa. Microstructural analysis results showed a progressive creep damage evolution that depends on the stress level, with a preferential accumulation of creep damage at the heat affected zone, where the creep rupture preferentially occurs owing to an austenitic matrix with grain boundary precipitates of the type Cr23C6. The fractured surfaces showed dimple patterns with cavities and voids. Results indicated that the damage mechanism is cavity growth by the combined effect of power-law and diffusion creep.

Keywords: austenitic microstructure, creep damage evolution, heat affected zone, vickers microhardness

Procedia PDF Downloads 200
2481 Stress Concentration Trend for Combined Loading Conditions

Authors: Aderet M. Pantierer, Shmuel Pantierer, Raphael Cordina, Yougashwar Budhoo

Abstract:

Stress concentration occurs when there is an abrupt change in the geometry of a mechanical part under loading. These changes in geometry can include holes, notches, or cracks within the component. Such features create larger stresses within the part. This maximum stress is difficult to determine, as it occurs directly at the point of minimum area. Strain gauges have yet to be developed that can analyze stresses over such minute areas. Therefore, a stress concentration factor must be utilized. The stress concentration factor is a dimensionless parameter calculated solely from the geometry of a part. The factor is multiplied by the nominal, or average, stress of the component, which can be found analytically or experimentally. Stress concentration graphs exist for common loading conditions and geometrical configurations to aid in the determination of the maximum stress a part can withstand. These graphs were developed from historical data yielded from experimentation. This project seeks to verify a stress concentration graph for combined loading conditions. The aforementioned graph was developed using CATIA Finite Element Analysis software. The results of this analysis will be validated through further testing. The 3D modeled parts will be subjected to further finite element analysis using Patran-Nastran software. The finite element models will then be verified by testing physical specimens on a tensile testing machine. Once the data is validated, the unique stress concentration graph will be submitted for publication so it can aid engineers in future projects.
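The use of a stress concentration factor described above amounts to sigma_max = Kt * sigma_nom. A hedged sketch for the classic case of a flat plate with a central hole under axial tension (the geometry, load, and Kt value below are illustrative assumptions, not data from this project):

```python
# Applying a stress concentration factor Kt to the nominal stress of a
# flat plate with a central circular hole under axial tension.
def nominal_stress(force_n, width_m, hole_d_m, thickness_m):
    """Average stress over the net (minimum) cross-section."""
    return force_n / ((width_m - hole_d_m) * thickness_m)

def max_stress(kt, sigma_nom):
    """Peak stress at the discontinuity: sigma_max = Kt * sigma_nom."""
    return kt * sigma_nom

# Illustrative numbers: 10 kN on a 50 mm x 5 mm plate with a 10 mm hole.
sigma_nom = nominal_stress(10_000.0, 0.05, 0.01, 0.005)  # Pa
sigma_max = max_stress(2.5, sigma_nom)  # Kt = 2.5 read from a chart
```

For combined loading, Kt would instead be read from a graph like the one this project seeks to verify, but the multiplication step is the same.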

Keywords: stress concentration, finite element analysis, finite element models, combined loading

Procedia PDF Downloads 435
2480 Using Genetic Algorithm to Organize Sustainable Urban Landscape in Historical Part of City

Authors: Shahab Mirzaean Mahabadi, Elham Ebrahimi

Abstract:

The urban development process in historical urban contexts has predominantly witnessed two main approaches: the first is the preservation and conservation of the urban fabric and its value, and the second is urban renewal and redevelopment. The latter is generally supported by political and economic aspirations. These two approaches are evidently in conflict. The authors go through the history of urban planning in order to review the historical development of the two approaches. In this article, the various values inherent in the historical fabric of a city are illustrated, with emphasis on cultural identity and activity. We then seek an optimized plan which simultaneously maximizes economic development and minimizes change in historical-cultural sites. In the proposed model, regarding the decision maker's intention and the variety of functions, the selected zone is divided into a number of components. For each component, different alternatives can be assigned, namely renovation, refurbishment, destruction, and change in function. The decision variable in this model is the choice of an alternative for each component. A set of decisions made upon all components results in a plan. A plan developed in this way can be evaluated based on the decision maker's point of view. That is, interactions between selected alternatives can form a foundation for the assessment of the urban context to design a historical-cultural landscape. A genetic algorithm (GA) approach is used to search for optimal future land use within the historical-cultural landscape for a sustainable high-growth city.
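The encoding described above — one alternative per component, a plan as the chromosome — can be sketched with a minimal GA. The score tables and the weighting of economic gain against cultural loss are invented assumptions for illustration; only the representation and the search scheme follow the abstract:

```python
import numpy as np

rng = np.random.default_rng(2)
N_COMP, N_ALT = 20, 4  # components; alternatives 0..3 = renovation,
                       # refurbishment, destruction, change of function

# Hypothetical per-(component, alternative) scores: economic gain and
# historical-cultural loss. Both tables are invented for illustration.
econ = rng.random((N_COMP, N_ALT))
loss = rng.random((N_COMP, N_ALT))

def fitness(plan):
    # Maximize economic development, penalize historical-cultural change.
    idx = np.arange(N_COMP)
    return econ[idx, plan].sum() - 1.5 * loss[idx, plan].sum()

pop = rng.integers(0, N_ALT, size=(40, N_COMP))  # random initial plans
for _ in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]      # truncation selection
    kids = parents.copy()
    for i in range(0, 20, 2):                    # one-point crossover
        c = int(rng.integers(1, N_COMP))
        kids[i, c:], kids[i + 1, c:] = parents[i + 1, c:], parents[i, c:]
    mut = rng.random(kids.shape) < 0.02          # random mutation
    kids[mut] = rng.integers(0, N_ALT, mut.sum())
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(p) for p in pop])]  # best plan found
```

In a real application, the fitness function would encode the decision maker's evaluation of the interactions between selected alternatives rather than independent per-component scores.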

Keywords: urban sustainability, green city, regeneration, genetic algorithm

Procedia PDF Downloads 65
2479 On the Creep of Concrete Structures

Authors: A. Brahma

Abstract:

Analysis of the deferred deformations of concrete under sustained load shows that creep plays a leading role in the deferred deformations of concrete structures. Knowledge of the creep characteristics of concrete is a necessary starting point in the design of structures for crack control. Such knowledge will enable the designer to estimate the probable deformation in pre-stressed or reinforced concrete, and the appropriate steps can be taken in design to accommodate this movement. In this study, we propose a prediction model that involves the principal parameters acting on the deferred behaviour of concrete structures. For the estimation of the model parameters, the Levenberg-Marquardt method has proven very satisfactory. A comparison between the experimental results and the model predictions shows that the model is well suited to describe the evolution of the creep of concrete structures.
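The parameter-estimation step can be sketched as follows. The creep law used here, eps(t) = a(1 - exp(-b t)) + c t, is an assumed generic form (not the authors' model), fitted to synthetic strain data with the Levenberg-Marquardt method named in the abstract:

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed generic creep law: delayed-elastic term plus a linear flow term.
def creep(t, a, b, c):
    return a * (1.0 - np.exp(-b * t)) + c * t

t = np.linspace(0.0, 365.0, 60)              # days under sustained load
true_strain = creep(t, 400e-6, 0.05, 0.4e-6) # synthetic "experiment"
rng = np.random.default_rng(3)
strain = true_strain + rng.normal(0.0, 5e-6, t.size)  # measurement noise

# method="lm" selects the Levenberg-Marquardt algorithm.
popt, _ = curve_fit(creep, t, strain, p0=(300e-6, 0.1, 1e-6), method="lm")
```

The fitted parameters `popt` can then be used to predict deferred deformation at design-relevant ages and compare against the measured curve.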

Keywords: concrete structure, creep, modelling, prediction

Procedia PDF Downloads 287
2478 Bidirectional Pendulum Vibration Absorbers with Homogeneous Variable Tangential Friction: Modelling and Design

Authors: Emiliano Matta

Abstract:

Passive resonant vibration absorbers are among the most widely used dynamic control systems in civil engineering. They typically consist of a single-degree-of-freedom mechanical appendage of the main structure, tuned to one structural target mode through frequency and damping optimization. One classical scheme is the pendulum absorber, whose mass is constrained to move along a curved trajectory and is damped by viscous dashpots. Even though the principle is well known, the search for improved arrangements is still under way. In recent years, this investigation inspired a type of bidirectional pendulum absorber (BPA), consisting of a mass constrained to move along an optimal three-dimensional (3D) concave surface. For such a BPA, the surface principal curvatures are designed to ensure a bidirectional tuning of the absorber to both principal modes of the main structure, while damping is produced either by horizontal viscous dashpots or by vertical friction dashpots connecting the BPA to the main structure. In this paper, a variant of the BPA is proposed, where damping originates from the variable tangential friction force which develops between the pendulum mass and the 3D surface as a result of a spatially-varying friction coefficient pattern. Namely, a friction coefficient is proposed that varies along the pendulum surface in proportion to the modulus of the 3D surface gradient. With such an assumption, the dissipative model of the absorber can be proven to be nonlinear homogeneous in the small-displacement domain. The resulting homogeneous BPA (HBPA) has a fundamental advantage over conventional friction-type absorbers, because its equivalent damping ratio turns out to be independent of the amplitude of oscillations, and therefore its optimal performance does not depend on the excitation level. On the other hand, the HBPA is more compact than viscously damped BPAs because it does not need the installation of dampers.
This paper presents the analytical model of the HBPA and an optimal methodology for its design. Numerical simulations of single- and multi-story building structures under wind and earthquake loads are presented to compare the HBPA with classical viscously damped BPAs. It is shown that the HBPA is a promising alternative to existing BPA types and that homogeneous tangential friction is an effective means to realize systems provided with amplitude-independent damping.
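The gradient-proportional friction pattern at the heart of the HBPA can be sketched for a simple concave surface. The paraboloid z(x, y) = x^2/(2 Rx) + y^2/(2 Ry) and the numerical values of the radii and gain are assumptions for illustration, not the paper's optimized surface:

```python
import numpy as np

# Assumed concave surface z = x^2/(2*Rx) + y^2/(2*Ry) with principal
# radii Rx, Ry; the friction coefficient is taken proportional to the
# modulus of the surface gradient, mu(x, y) = k * |grad z|.
Rx, Ry, k = 2.0, 3.0, 0.5  # illustrative radii (m) and gain

def mu(x, y):
    gx, gy = x / Rx, y / Ry        # gradient of the paraboloid
    return k * np.hypot(gx, gy)    # spatially-varying friction pattern
```

Note that mu vanishes at the rest position and grows linearly with displacement near it; this linear-in-displacement friction force is what makes the small-displacement dissipative model homogeneous, and hence the equivalent damping ratio amplitude-independent.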

Keywords: amplitude-independent damping, homogeneous friction, pendulum nonlinear dynamics, structural control, vibration resonant absorbers

Procedia PDF Downloads 144
2477 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation

Authors: Miguel Contreras, David Long, Will Bachman

Abstract:

Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for the study of the effects of mechanical forces on cells. However, there are challenges with these imaging experiments, which can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and a limitation on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline to predict cellular component morphology for virtual-cell generation based on fluorescence cell membrane confocal z-stacks. Methods: Registered confocal z-stacks of nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained from fluorescence confocal microscopy and normalized through a software pipeline so that each image has a mean pixel intensity value of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels on unlabeled transmitted light microscopy cell images, was trained using this set of normalized z-stacks on a single CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell membrane fluorescence images as input. Predictions were compared to the ground truth fluorescence nuclei images. Results: After one week of training, using one cell membrane z-stack (20 images) and the corresponding nuclei label, results showed qualitatively good predictions on the training set. The algorithm was able to accurately predict nuclei locations as well as shapes when fed only fluorescence membrane images.
Similar training sessions with improved membrane image quality, in which clear lining and shape of the membrane showed the boundaries of each cell, proportionally improved the nuclei predictions, reducing errors relative to the ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need to use multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict different labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models.
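The normalization step described in the Methods can be sketched as follows. Simple multiplicative rescaling to a mean intensity of 0.5 is an assumption, as the abstract does not specify the exact operation:

```python
import numpy as np

# Minimal sketch of the normalization step: rescale each image of a
# confocal z-stack so that its mean pixel intensity equals 0.5.
# Multiplicative scaling is assumed; the actual pipeline may differ.
def normalize_stack(stack, target_mean=0.5):
    out = np.empty_like(stack, dtype=float)
    for i, img in enumerate(stack):
        m = img.mean()
        out[i] = img * (target_mean / m) if m > 0 else target_mean
    return out

rng = np.random.default_rng(0)
stack = rng.random((20, 64, 64))   # 20-image z-stack, as in the abstract
norm = normalize_stack(stack)
print(np.allclose(norm.mean(axis=(1, 2)), 0.5))  # True
```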

Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models

Procedia PDF Downloads 200
2476 Conceptual and Preliminary Design of Landmine Searching UAS at Extreme Environmental Condition

Authors: Gopalasingam Daisan

Abstract:

Landmines and unexploded ammunition pose a significant threat to people and animals; after a war, the landmines remain in the ground and endanger civilian security. Children are at the highest risk because they are curious; an unexploded bomb can look like a tempting toy to an inquisitive child. The initial step of designing a UAS (Unmanned Aircraft System) for landmine detection is to choose an appropriate and effective sensor to locate the landmines and other unexploded ammunition. The sensor weight and the weight of the sensor's supporting devices are taken as the payload weight. The mission requirement is to find the landmines in a particular area by planning a path that covers the entire vicinity of the desired area. The weight of the UAV (Unmanned Aerial Vehicle) can be estimated with good accuracy by various previously established techniques in the first phase of the design. The next crucial part of the design is to calculate the power requirement and the wing loading. The matching plot technique is used to determine the thrust-to-weight ratio; this technique makes the process not only easy but also precise. The wing loading can be calculated directly from the stall equation. After these calculations, the wing area is determined from the wing loading equation and the required power is calculated from the thrust-to-weight ratio. According to the power requirement, an appropriate engine can be selected from those available on the market, and the wing geometric parameters are chosen based on the conceptual sketch. An important step in the wing design is to choose a proper aerofoil that ensures a lift coefficient sufficient to satisfy the requirements. The next component is the tail; the tail area and other related parameters can be estimated or calculated to counteract the effect of the wing pitching moment.
As the vertical tail design depends on many parameters, only the initial sizing can be done in this phase. The fuselage is another major component; its slenderness ratio is selected accordingly, and its shape is determined by the sensor size so that the sensor fits under the fuselage. The landing gear is an important component selected based on the controllability and stability requirements. The minimum and maximum wheel track and wheelbase can be determined from the crosswind and overturn-angle requirements. The minor components of the landing gear design and estimation are not the focus of this project. Another important task is to estimate the weight of the major components using empirical relations and to assign a mass to each such component. The CG and moment of inertia are also determined for each component separately. The sensitivity of the weight calculation is taken into consideration to avoid extra material requirements and to reduce the cost of the design. Finally, the aircraft performance is calculated, especially the V-n (velocity and load factor) diagram for different flight conditions, such as undisturbed flight and flight with gust velocity.
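The sizing chain described above (stall equation for wing loading, then wing area and thrust from the thrust-to-weight ratio) can be illustrated with a short calculation. All numerical values are hypothetical placeholders for a small UAV, not figures from the paper:

```python
# Illustrative preliminary sizing. Assumed values: sea-level density,
# stall speed, CL_max, take-off mass, and a thrust-to-weight ratio as
# would be read off a matching plot.
RHO = 1.225        # air density at sea level, kg/m^3
V_STALL = 12.0     # assumed stall speed, m/s
CL_MAX = 1.4       # assumed maximum lift coefficient
W = 8.0 * 9.81     # assumed take-off weight for an 8 kg UAV, N
T_OVER_W = 0.45    # assumed thrust-to-weight ratio from a matching plot

# Stall equation: W/S = 0.5 * rho * V_stall^2 * CL_max
wing_loading = 0.5 * RHO * V_STALL**2 * CL_MAX   # N/m^2
wing_area = W / wing_loading                     # m^2
thrust = T_OVER_W * W                            # required thrust, N

print(round(wing_loading, 1), round(wing_area, 3), round(thrust, 1))
# 123.5 0.636 35.3
```

With a propulsive efficiency estimate, the required thrust and cruise speed would then give the engine power for engine selection.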

Keywords: landmine, UAS, matching plot, optimization

Procedia PDF Downloads 168
2475 Compact LWIR Borescope Sensor for Surface Temperature of Engine Components

Authors: Andy Zhang, Awnik Roy, Trevor B. Chen, Bibik Oleksandr, Subodh Adhikari, Paul S. Hsu

Abstract:

The durability of a combustor in gas-turbine engines requires good control of its component temperatures. Since the temperature of combustion gases frequently exceeds the melting point of the combustion liner walls, an efficient air-cooling system is important to elongate the lifetime of the liner walls. To determine the effectiveness of the air-cooling system, accurate 2D surface temperature measurement of combustor liner walls is crucial for advanced engine development. Traditional diagnostic techniques for temperature measurement, such as thermocouples, thermal wall paints, pyrometry, and phosphors, have shown disadvantages, including being intrusive and affecting local flame/flow dynamics, potential flame quenching, and physical damage to instrumentation due to the harsh environment inside the combustor, as well as strong optical interference from combustion emission in the UV to mid-IR wavelengths. To overcome these drawbacks, a compact borescope long-wave-infrared (LWIR) sensor is developed to achieve high-spatial-resolution, high-fidelity thermal imaging of the 2D surface temperature in gas-turbine engines, providing the desired engine component temperature distribution. The compact LWIR borescope sensor makes it feasible to improve the durability of combustors in gas-turbine engines.

Keywords: borescope, engine, long-wave-infrared, sensor

Procedia PDF Downloads 126
2474 Wrinkling Prediction of Membrane Composite of Varying Orientation under In-Plane Shear

Authors: F. Sabri, J. Jamali

Abstract:

In this article, the wrinkling failure of orthotropic composite membranes due to in-plane shear deformation is investigated using nonlinear finite element analyses. A nonlinear post-buckling analysis is performed to show the evolution of shear-induced wrinkles. The method of investigation is based on post-buckling finite element analysis using the commercial FEM code ANSYS. The resulting wrinkling patterns, their amplitudes and their wavelengths under the prescribed loads and boundary conditions were confirmed by experimental results. Our study reveals that wrinkles develop when both the magnitude and the coverage of the minimum principal stresses in the composite laminates are sufficiently large to trigger wrinkling.
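The minimum-principal-stress criterion referenced above can be sketched for a plane-stress membrane element. The zero compression tolerance is an assumption for illustration:

```python
import math

# Sketch of a wrinkling check for a plane-stress element: compute the
# principal stresses and flag the element when the minimum principal
# stress is compressive beyond a tolerance (tol = 0 assumed here).
def principal_stresses(sx, sy, txy):
    center = 0.5 * (sx + sy)
    radius = math.hypot(0.5 * (sx - sy), txy)
    return center + radius, center - radius  # (sigma_1, sigma_2)

def wrinkled(sx, sy, txy, tol=0.0):
    _, s_min = principal_stresses(sx, sy, txy)
    return s_min < -tol

# Pure in-plane shear yields equal tension and compression principal
# stresses, so a shear-loaded membrane element is flagged:
print(principal_stresses(0.0, 0.0, 10.0))  # (10.0, -10.0)
print(wrinkled(0.0, 0.0, 10.0))            # True
```

This is why in-plane shear is the classic wrinkling load case: it always produces a compressive principal direction.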

Keywords: composite, FEM, membrane, wrinkling

Procedia PDF Downloads 271
2473 Experimental Study on Bending and Torsional Strength of Bulk Molding Compound Seat Back Frame Part

Authors: Hee Yong Kang, Hyeon Ho Shin, Jung Cheol Yoo, Il Taek Lee, Sung Mo Yang

Abstract:

Lightweight technology using composites is being developed for vehicle seat structures, and its design must meet the safety requirements. According to the Federal Motor Vehicle Safety Standard (FMVSS) 207 seating systems test procedure, a back moment load is applied to the seat back frame structure for the safety evaluation of the vehicle seat. The seat back frame made of composites is divided into three parts, following the manufacturing process: the upper frame and the left- and right-side frames. When a rear moment load is applied to the seat back frame, the side frame receives the bending load and the torsional load at the same time and therefore experiences the largest loads. Thus, a strength test at the component level is required. In this study, a component test method based on the FMVSS 207 seating systems test procedure was proposed for the strength analysis under bending load and torsional load of the automotive Bulk Molding Compound (BMC) seat back side frame. Moreover, a strength evaluation according to the carbon band reinforcement was performed. The seat back side frame parts used in the test were manufactured from BMC composed of a vinyl ester matrix and short carbon fibers. Then, carbon-band-reinforced and non-reinforced parts were formed through a high-temperature compression molding process. In addition, the fixture used in the component test was constructed by referring to FMVSS 207. Then, the bending load and the torsional load were applied through displacement control to perform the strength test under four load conditions. The results of each test are shown through the load-displacement curves of the specimens. The failure strength of the parts resulting from the reinforcement of the carbon band was analyzed.
Additionally, the fracture characteristics of the parts in the four strength tests were evaluated, and the weak points of the seat back side frame structure were identified according to the test conditions. Through the bending and torsional strength test methods, we confirmed the strength and fracture characteristics of the BMC seat back side frame according to the carbon band reinforcement. We also proposed a method for testing the part strength of a vehicle seat back frame that can meet FMVSS 207.

Keywords: seat back frame, bending and torsional strength, BMC (Bulk Molding Compound), FMVSS 207 seating systems

Procedia PDF Downloads 205
2472 A Multi-Layer Based Architecture for the Development of an Open Source CAD/CAM Integration Virtual Platform

Authors: Alvaro Aguinaga, Carlos Avila, Edgar Cando

Abstract:

This article proposes an n-layer architecture, with a web client as a front-end, for the development of a virtual platform for process simulation on CNC machines. This open-source platform includes a CAD-CAM interface for drawing primitives, which are then used to furnish a CNC program that drives a touch-screen virtual simulator. The objectives of this project are twofold. The first is an educational component that fosters new alternatives for the CAD-CAM/CNC learning process in undergraduate and graduate schools and in technical and technological institutes, emphasizing the development of critical skills, discussion and collaborative work. The second objective puts together a research and technological component that will take the state of the art in CAD-CAM integration to a new level with the development of optimal algorithms and virtual platforms, available on-line, that will pave the way for the long-term goal of this project, that is, to have a visible and active graduate school in Ecuador and a worldwide Open-Innovation community in the area of CAD-CAM integration and operation of CNC machinery. The virtual platform developed as a part of this study: (1) delivers an improved training process for students, (2) creates a multidisciplinary team and a collaborative work space that will push the new generation of students to face future technological challenges, (3) implements industry standards for CAD/CAM, and (4) presents a platform for the development of industrial applications. A prototype of this system was developed and implemented in a network of universities and technological institutes in Ecuador.

Keywords: CAD-CAM integration, virtual platforms, CNC machines, multi-layer based architecture

Procedia PDF Downloads 423
2471 Discovering the Effects of Meteorological Variables on the Air Quality of Bogota, Colombia, by Data Mining Techniques

Authors: Fabiana Franceschi, Martha Cobo, Manuel Figueredo

Abstract:

Bogotá, the capital of Colombia, is its largest city and one of the most polluted in Latin America due to the fast economic growth over the last ten years. Bogotá has been affected by high pollution events which led to high concentrations of PM10 and NO2, exceeding the local 24-hour legal limits (100 and 150 µg/m³, respectively). The most important pollutants in the city are PM10 and PM2.5 (which are associated with respiratory and cardiovascular problems), and it is known that their concentrations in the atmosphere depend on the local meteorological factors. Therefore, it is necessary to establish a relationship between the meteorological variables and the concentrations of atmospheric pollutants such as PM10, PM2.5, CO, SO2, NO2 and O3. This study aims to determine the interrelations between meteorological variables and air pollutants in Bogotá, using data mining techniques. Data from 13 monitoring stations were collected from the Bogotá Air Quality Monitoring Network within the period 2010-2015. The Principal Component Analysis (PCA) algorithm was applied to obtain primary relations between all the parameters, and afterwards, the K-means clustering technique was implemented to corroborate those relations found previously and to find patterns in the data. PCA was also applied on a per-shift basis (morning, afternoon, night and early morning) to check for variation of the previous trends, and on a per-year basis to verify that the identified trends remained throughout the study period. Results demonstrated that wind speed, wind direction, temperature, and NO2 are the most influential factors on PM10 concentrations. Furthermore, it was confirmed that high humidity episodes increased PM2.5 levels. It was also found that there are directly proportional relationships between O3 levels and wind speed and radiation, while there is an inverse relationship between O3 levels and humidity.
Concentrations of SO2 increase with the presence of PM10 and decrease with wind speed and wind direction. The results also showed a decreasing trend of pollutant concentrations over the last five years. Also, in rainy periods (March-June and September-December) some trends related to precipitation were stronger. Results obtained with K-means demonstrated that it was possible to find patterns in the data, and they also showed similar conditions and data distributions among the Carvajal, Tunal and Puente Aranda stations, and also between Parque Simon Bolivar and Las Ferias. It was verified that the aforementioned trends prevailed during the study period by applying the same technique per year. It was concluded that the PCA algorithm is useful to establish preliminary relationships among variables, and K-means clustering to find patterns in the data and to understand its distribution. The discovery of patterns in the data allows using these clusters as input to an Artificial Neural Network prediction model.
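The two-stage analysis above (PCA for primary relations, then K-means for patterns) can be sketched with NumPy alone. The synthetic wind/PM10/temperature series, and the assumed inverse wind-PM10 relation, are illustrative stand-ins for the monitoring-network data:

```python
import numpy as np

# Synthetic stand-in for the monitoring data: PM10 assumed inversely
# related to wind speed, temperature independent.
rng = np.random.default_rng(42)
wind = rng.normal(size=500)
pm10 = -0.8 * wind + 0.3 * rng.normal(size=500)
temp = rng.normal(size=500)
X = np.column_stack([wind, pm10, temp])

# PCA via eigendecomposition of the correlation matrix
corr = np.corrcoef(X, rowvar=False)
eigval, eigvec = np.linalg.eigh(corr)
order = np.argsort(eigval)[::-1]
ratios = eigval[order] / eigval.sum()
print(ratios.round(2))          # explained-variance ratios, largest first

# Basic K-means (k=2) on the two leading principal-component scores
Z = (X - X.mean(0)) / X.std(0)
scores = Z @ eigvec[:, order[:2]]
centers = scores[:2].copy()     # naive initialization from two data points
for _ in range(20):
    labels = ((scores[:, None] - centers) ** 2).sum(-1).argmin(1)
    centers = np.array([scores[labels == k].mean(0) if (labels == k).any()
                        else centers[k] for k in range(2)])
print(np.bincount(labels))      # cluster sizes
```

The correlated wind/PM10 pair loads onto the first component, which mirrors how PCA surfaces the primary variable relations before clustering.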

Keywords: air pollution, air quality modelling, data mining, particulate matter

Procedia PDF Downloads 256
2470 Management Practices in Hypertension: Results of Win-Over-A Pan India Registry

Authors: Abhijit Trailokya, Kamlesh Patel

Abstract:

Background: Hypertension is a common disease seen in clinical practice and is associated with high morbidity and mortality. Many patients require combination therapy for the management of hypertension. Objective: To evaluate the co-morbidities, risk factors and management practices of hypertension in the Indian population. Material and methods: A total of 1596 hypertensive adult patients who received anti-hypertensive medications were studied in a cross-sectional, multi-centric, non-interventional, observational registry. Statistical analysis: Categorical or nominal data were expressed as numbers with percentages. Continuous variables were analyzed by descriptive statistics using mean, SD, and range; the Chi-square test was used for between-group comparisons. Results: The study included 73.50% males and 26.50% females. Overweight (50.50%) and obesity (30.01%) were common in the hypertensive patients (n=903). A total of 54.76% of patients had a history of smoking. Alcohol use (33.08%), sedentary lifestyle (32.96%) and history of tobacco chewing (17.92%) were the other lifestyle habits of hypertensive patients. A history of diabetes (36.03%) and dyslipidemia (39.79%) was common in these patients. A family history of hypertension and diabetes was seen in 82.21% and 45.99% of patients, respectively. Most (89.16%) patients were treated with a combination of antihypertensive agents. ARBs were by far the most commonly used agents (91.98%), followed by calcium channel blockers (68.23%) and diuretics (60.21%). ARBs were the most (80.35%) preferred agents as monotherapy. ARBs were also the most common agents as a component of dual therapy and of four-drug and five-drug combinations. Conclusion: Most hypertensive patients need combination treatment with antihypertensive agents. ARBs are the most preferred agents as monotherapy for the management of hypertension. ARBs are also very commonly used as a component of combination therapy during hypertension management.
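The between-group comparison by Chi-square test mentioned in the statistical analysis can be illustrated with a hand-computed test of independence. The 2x2 contingency counts below are hypothetical, not registry data:

```python
import numpy as np

# Chi-square test of independence on a hypothetical 2x2 table, e.g.
# diabetic vs non-diabetic patients against ARB monotherapy vs other.
# Counts are illustrative only (they merely sum to the registry's n=1596).
obs = np.array([[300, 150],
                [500, 646]])
row = obs.sum(1, keepdims=True)
col = obs.sum(0, keepdims=True)
exp = row * col / obs.sum()          # expected counts under independence
chi2 = ((obs - exp) ** 2 / exp).sum()
print(round(chi2, 2))                # compare against chi2 df=1 critical value 3.84
```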

Keywords: antihypertensive, hypertension, management, ARB

Procedia PDF Downloads 520
2469 Electricity Load Modeling: An Application to Italian Market

Authors: Giovanni Masala, Stefania Marica

Abstract:

Forecasting electricity load plays a crucial role in decision making and planning for economic purposes. Besides, in the light of the recent privatization and deregulation of the power industry, forecasting future electricity load has turned out to be a very challenging problem. Empirical data about electricity load highlight a clear seasonal behavior (higher load during the winter season), which is partly due to climatic effects. We also emphasize the presence of load periodicity on a weekly basis (electricity load is usually lower on weekends or holidays) and on a daily basis (electricity load is clearly influenced by the hour). Finally, a long-term trend may depend on the general economic situation (for example, industrial production affects electricity load). All these features must be captured by the model. The purpose of this paper is then to build an hourly electricity load model. The deterministic component of the model requires non-linear regression and Fourier series, while we investigate the stochastic component through econometric tools. The calibration of the model parameters is performed using data coming from the Italian market over a six-year period (2007-2012). Then, we perform a Monte Carlo simulation in order to compare the simulated data with the real data (both in-sample and out-of-sample inspection). The reliability of the model is deduced from standard tests which highlight a good fit of the simulated values.
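The deterministic component described above, a trend plus Fourier terms for the daily and weekly periodicities, can be sketched with a least-squares fit. The synthetic hourly series and its coefficients are illustrative; the actual calibration used Italian market data and added an econometric stochastic component:

```python
import numpy as np

# Synthetic hourly load: level 100, small linear trend, daily and weekly
# sinusoids, plus noise. All coefficients are illustrative assumptions.
t = np.arange(24 * 7 * 8, dtype=float)          # 8 weeks of hourly data
rng = np.random.default_rng(1)
load = (100 + 0.01 * t
        + 10 * np.sin(2 * np.pi * t / 24)       # daily cycle
        + 5 * np.sin(2 * np.pi * t / (24 * 7))  # weekly cycle
        + rng.normal(0, 1, t.size))

def design(t):
    # Columns: intercept, trend, then sin/cos pairs per periodicity
    cols = [np.ones_like(t), t]
    for period in (24.0, 24.0 * 7):
        cols += [np.sin(2 * np.pi * t / period),
                 np.cos(2 * np.pi * t / period)]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(design(t), load, rcond=None)
print(np.round(coef[:2], 3))   # recovered intercept and trend
```

In the paper's setup, the residuals of such a deterministic fit would then be modeled with econometric tools such as an ARMA-GARCH process.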

Keywords: ARMA-GARCH process, electricity load, fitting tests, Fourier series, Monte Carlo simulation, non-linear regression

Procedia PDF Downloads 392
2468 Escalation of Commitment and Turnover in Top Management Teams

Authors: Dmitriy V. Chulkov

Abstract:

Escalation of commitment is defined as continuation of a project after receiving negative information about it. While the literature in management and psychology has identified various factors contributing to escalation behavior, this phenomenon has received little analysis in economics, potentially due to the apparent irrationality of escalation. In this study, we present an economic model of escalation with asymmetric information in a principal-agent setup where the agents are responsible for a project selection decision and discover the outcome of the project before the principal. Our theoretical model complements the existing literature on several accounts. First, we link the incentive to escalate commitment to a project with the turnover decision by the manager. When a manager learns the outcome of the project and stops it, this reveals that a mistake was made. There is an incentive to continue failing projects and avoid admitting the mistake. This incentive is enhanced when the agent may voluntarily resign from the firm before the outcome of the failing project is revealed, and thus not bear the full extent of reputation damage due to project failure. As long as some successful managers leave the firm for extraneous reasons, outside firms find it difficult to link failing projects with certainty to managers that left a firm. Second, we demonstrate that non-CEO managers have reputation concerns separate from those of the CEO, and thus may escalate commitment to projects they oversee when such escalation can attenuate damage to reputation from impending project failure. Such an incentive for escalation will be present for non-CEO managers if the CEO delegates responsibility for a project to a non-CEO executive. If reputation matters for promotion to the CEO position, the incentive for a rising executive to escalate in order to protect reputation is distinct from that of a CEO.
Third, our theoretical model is supported by empirical analysis of changes in the firm's operations, measured by the presence of discontinued operations at the time of turnover among the top four members of the top management team. Discontinued operations are indicative of the termination of failing projects at a firm. The empirical results demonstrate that, in a large dataset of over three thousand publicly traded U.S. firms for the period from 1993 to 2014, turnover by top executives significantly increases the likelihood that the firm discontinues operations. Furthermore, the type of turnover matters, as this effect is strongest when at least one non-CEO member of the top management team leaves the firm and when the CEO departure is due to a voluntary resignation and not to retirement or illness. The empirical results are consistent with the predictions of the theoretical model and suggest that escalation of commitment is primarily observed in decisions by non-CEO members of the top management team.

Keywords: discontinued operations, escalation of commitment, executive turnover, top management teams

Procedia PDF Downloads 363
2467 Implementation of Correlation-Based Data Analysis as a Preliminary Stage for the Prediction of Geometric Dimensions Using Machine Learning in the Forming of Car Seat Rails

Authors: Housein Deli, Loui Al-Shrouf, Hammoud Al Joumaa, Mohieddine Jelali

Abstract:

When forming metallic materials, fluctuations in material properties, process conditions, and wear lead to deviations in the component geometry. Several hundred features sometimes need to be measured, especially in the case of functional and safety-relevant components. These can only be measured offline due to the large number of features and the accuracy requirements. The risk of producing components outside the tolerances is minimized, but not eliminated, by the statistical evaluation of process capability and control measurements. The inspection intervals are based on the acceptable risk and come at the expense of productivity, but remain reactive and, in some cases, considerably delayed. Due to the considerable progress made in the field of condition monitoring and measurement technology, permanently installed sensor systems, in combination with machine learning and artificial intelligence in particular, offer the potential to independently derive forecasts for component geometry and thus eliminate the risk of defective products, actively and preventively. The reliability of forecasts depends on the quality, completeness, and timeliness of the data. Measuring all geometric characteristics is neither sensible nor technically possible. This paper therefore uses the example of car seat rail production to discuss the necessary first step of feature selection and reduction by correlation analysis, as otherwise it would not be possible to forecast components in real time and inline. Four different car seat rails with an average of 130 features were selected and measured using a coordinate measuring machine (CMM). The run of such a measuring program alone takes up to 20 minutes. In practice, this results in the risk of faulty production of at least 2000 components that have to be sorted or scrapped if the measurement results are negative.
Over a period of 2 months, all measurement data (>200 measurements per variant) were collected and evaluated using correlation analysis. As part of this study, the number of characteristics to be measured for all 6 car seat rail variants was reduced by over 80%. Specifically, direct correlations were proven for almost 100 of the average 125 characteristics of the 4 different products. A further 10 features correlate via indirect relationships, so that the number of features required for a prediction could be reduced to fewer than 20. A correlation factor of at least 0.8 was required for all reported correlations.
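The correlation-based reduction described above can be sketched as a greedy filter that keeps a feature only if its absolute correlation with every already-kept feature stays at or below the 0.8 threshold. The synthetic feature matrix is an illustrative stand-in for the CMM measurements:

```python
import numpy as np

# Greedy correlation-based feature reduction: keep feature j only if its
# absolute correlation with all previously kept features is <= threshold.
def reduce_features(X, threshold=0.8):
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return keep

rng = np.random.default_rng(0)
base = rng.normal(size=(200, 3))
# Features 3-5 are noisy near-copies of feature 0, mimicking strongly
# correlated geometric characteristics of a formed part.
X = np.column_stack([base, base[:, [0, 0, 0]] + 0.05 * rng.normal(size=(200, 3))])
print(reduce_features(X))   # [0, 1, 2]
```

Only the retained features would then need to be measured inline; the dropped ones can be inferred from their correlated partners.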

Keywords: long-term SHM, condition monitoring, machine learning, correlation analysis, component prediction, wear prediction, regression analysis

Procedia PDF Downloads 37
2466 Wear Resistance and Mechanical Performance of Ultra-High Molecular Weight Polyethylene Influenced by Temperature Change

Authors: Juan Carlos Baena, Zhongxiao Peng

Abstract:

Ultra-high molecular weight polyethylene (UHMWPE) is extensively used in industrial and biomedical fields. The slippery nature of UHMWPE makes this material suitable for surface-bearing applications; however, the operational conditions limit the lubrication efficiency, inducing boundary and mixed lubrication in the tribological system. The lack of lubrication in a tribological system intensifies friction, contact stress and, consequently, operating temperature. With a temperature increase, the material's mechanical properties are affected, and the lifespan of the component is reduced. How the mechanical properties and wear performance of UHMWPE change when the temperature is increased has not been clearly established. Understanding the wear and mechanical performance of UHMWPE at different temperatures is important to predict and further improve the lifespan of these components. This study evaluates the effects of temperature variation in a range of 20 °C to 60 °C on the hardness and the wear resistance of UHMWPE. A reduction of the hardness and wear resistance was observed with the increase in temperature. The wear rate increased by 94.8% when the temperature changed from 20 °C to 50 °C. Although hardness is regarded as an indicator of material wear resistance, this study found that wear resistance decreased more rapidly than hardness with the temperature increase, evidencing low material stability of this component over a short temperature interval. The reduction of hardness was reflected in the plastic deformation and abrasion intensity, resulting in a significant wear rate increase.

Keywords: hardness, surface bearing, tribological system, UHMWPE, wear

Procedia PDF Downloads 262
2465 Pressure-Controlled Dynamic Equations of the PFC Model: A Mathematical Formulation

Authors: Jatupon Em-Udom, Nirand Pisutha-Arnond

Abstract:

The phase-field-crystal (PFC) approach is a density-functional-type material model with atomic resolution on a diffusive timescale. Spatially, the model incorporates the periodic nature of crystal lattices and can naturally exhibit elasticity, plasticity and crystal defects such as grain boundaries and dislocations. Temporally, the model operates on a diffusive timescale which bypasses the need to resolve prohibitively small atomic-vibration time steps. The PFC model has been used to study many material phenomena such as grain growth, elastic and plastic deformations and solid-solid phase transformations. In this study, the pressure-controlled dynamic equations for the PFC model were developed to simulate a single-component system under externally applied pressure; these coupled equations are important for studies of deformable systems such as those under constant pressure. The formulation is based on non-equilibrium thermodynamics and the thermodynamics of crystalline solids. To obtain the equations, the entropy variation around the equilibrium point was derived. Then the resulting driving forces and fluxes around the equilibrium were obtained and rewritten as conventional thermodynamic quantities. These dynamic equations are different from the recently proposed equations; the equations in this study should provide more rigorous descriptions of the system dynamics under externally applied pressure.

Keywords: driving forces and flux, evolution equation, non equilibrium thermodynamics, Onsager’s reciprocal relation, phase field crystal model, thermodynamics of single-component solid

Procedia PDF Downloads 301
2464 Evaluating Psychosocial Influence of Dental Aesthetics: A Cross-Sectional Study

Authors: Mahjabeen Akbar

Abstract:

Dental aesthetics and its associated psychosocial influence have a significant impact on individuals. Correcting malocclusions is a key motivating factor for the majority of patients; however, psychosocial factors have rarely been incorporated in evaluating malocclusions. Therefore, it is necessary to study the psychosocial influence of malocclusion in patients. The study aimed to determine the psychosocial influence of dental aesthetics in dental students using the ‘Psychosocial Impact of Dental Aesthetics Questionnaire’ and the self-rated Aesthetic Component of the Index of Orthodontic Treatment Need (IOTN). This was a quantitative study using a cross-sectional design. One hundred twenty dental students (71 females and 49 males; mean age 24.5) were selected via purposive sampling from July to August 2019. Dental students with no former orthodontic treatment were requested to fill out the ‘Psychosocial Impact of Dental Aesthetics Questionnaire.’ Variables including self-confidence/insecurity, social influence, psychological influence and self-perception of the need for orthodontic treatment were evaluated by a sequence of statements, while dental aesthetics were evaluated using the IOTN Aesthetic Component. To determine significance, the Kruskal-Wallis test was utilized. The results show that all four variables measuring psychosocial impact indicated significant correlations with the perceived malocclusions, with p-values of less than 0.01. The results indicate a strong psychological and social influence of altered dental aesthetics on an individual. Moreover, the relationship between the IOTN-AC grading and the psychosocial wellbeing of an individual was confirmed, indicating that the perception of altered dental aesthetics is as important a factor in treatment need as the degree of malocclusion.
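The Kruskal-Wallis test used above can be illustrated with a hand-rolled H statistic; the three score groups below are hypothetical examples, not the study's data:

```python
import numpy as np

# Hand-rolled Kruskal-Wallis H statistic: rank all observations jointly
# (with average ranks for ties), then compare group rank sums.
def kruskal_h(*groups):
    data = np.concatenate(groups)
    order = data.argsort()
    ranks = np.empty(data.size)
    ranks[order] = np.arange(1, data.size + 1)
    for v in np.unique(data):            # average ranks for tied values
        mask = data == v
        ranks[mask] = ranks[mask].mean()
    n = data.size
    start, h = 0, 0.0
    for g in groups:
        r = ranks[start:start + len(g)]
        h += r.sum() ** 2 / len(g)
        start += len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# Hypothetical psychosocial scores for three IOTN-AC grade groups
low = [12, 14, 15, 11, 13]
mid = [18, 17, 19, 16, 20]
high = [25, 24, 26, 23, 27]
print(round(kruskal_h(low, mid, high), 3))  # 12.5
```

H is then compared against the chi-square distribution with (number of groups - 1) degrees of freedom to obtain the p-value.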

Keywords: dental aesthetics, malocclusion, psychosocial influence, dental students

Procedia PDF Downloads 143
2463 Genetic Trait Analysis of RIL Barley Genotypes to Sort Out the Top-Ranked Elites for Advanced Yield Breeding Across Multiple Environments of Tigray, Ethiopia

Authors: Hailekiros Tadesse Tekle, Yemane Tsehaye, Fetien Abay

Abstract:

Barley (Hordeum vulgare L.) is one of the world's most important cereal crops, grown by resource-poor farmers in Tigray with low yields. The purpose of this research was to evaluate the performance of 166 barley genotypes against quantitative traits, with detailed analysis of variance components, heritability, genetic advance, and genetic usefulness parameters. ANOVA revealed highly significant variation (p ≤ 0.01) among all the genotypes. We found significant coefficients of variation (CV of 15%) for 5 of the 12 quantitative traits. The highest broad-sense heritability (H²) was recorded for seeds per spike (98.8%), followed by thousand-seed weight (96.5%), with genetic advance as a percentage of the mean (GAM) of 79.16% and 56.25%, respectively. Traits with H² ≥ 60% and GAM ≥ 20% are least influenced by the environment, are governed by additive genes, and are suitable for direct selection when improving such beneficial traits in the studied genotypes. Hence, the 20 outstanding recombinant inbred line (RIL) barley genotypes that simultaneously showed early maturity, high yield, and high thousand-seed weight were the top-ranked group among the 166 genotypes: G5, G25, G33, G118, G36, G123, G28, G34, G14, G10, G3, G13, G11, G32, G8, G39, G23, G30, G37, and G26. These were early maturing with high thousand-seed weight (TSW) and grain yield per plant (GYP): TSW ≥ 55 g, GYP ≥ 15.22 g/plant, and days to maturity below 106. Overall, the 166 genotypes were classified by cluster analysis into high (group 1), medium (group 2), and low (group 3) yield groups based on yield and yield-component traits, together with genetic parameters such as heritability, genetic advance, and usefulness.
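The genetic parameters used above follow standard quantitative-genetics formulas: broad-sense heritability H² = Vg/Vp, genetic advance GA = k·H²·√Vp, and GAM = 100·GA/mean. A minimal sketch, with illustrative variance components (not the study's actual values) and the conventional k = 2.06 for 5% selection intensity:

```python
import math

def heritability_stats(genotypic_var, phenotypic_var, trait_mean, k=2.06):
    """Return (H2, GA, GAM) from variance components.

    k = 2.06 is the selection differential at 5% selection intensity.
    """
    h2 = genotypic_var / phenotypic_var          # broad-sense heritability
    ga = k * h2 * math.sqrt(phenotypic_var)      # expected genetic advance
    gam = 100.0 * ga / trait_mean                # GA as a percent of the mean
    return h2, ga, gam

# Hypothetical values for a trait like thousand-seed weight:
h2, ga, gam = heritability_stats(genotypic_var=38.6,
                                 phenotypic_var=40.0,
                                 trait_mean=55.0)
print(f"H2 = {h2:.1%}, GA = {ga:.2f}, GAM = {gam:.1f}%")
```

A trait with H² ≥ 60% and GAM ≥ 20%, as in this example, would meet the abstract's criterion for additive gene action and direct selection.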

Keywords: barley, clustering, genetic advance, heritability, usefulness, variability, yield

Procedia PDF Downloads 80
2462 Contemporary Mexican Shadow Politics: The War on Drugs and the Issue of Security

Authors: Lisdey Espinoza Pedraza

Abstract:

Organised crime in Mexico evolves faster than our capacity to understand and explain it. Organised gangs have become successful entrepreneurs in many ways, and they have in effect mimicked the working methods of the authorities; in many cases, they have successfully infiltrated governmental spheres. This business model is only possible under a scheme of rampant impunity. Impunity, however, is not exclusive to the PRI. Neither the PRI, the PAN, nor the PRD can claim a monopoly on corruption, but worse still, none can claim full honesty in its acts either. The current security crisis in Mexico reflects a crisis in the Mexican political party system. Corruption today is not only a problem of dishonesty and the proper use of public resources; it is the principal threat to Mexican democracy, governance, and national security.

Keywords: security, war on drugs, drug trafficking, Mexico, Latin America, United States

Procedia PDF Downloads 416
2461 Monitoring of Junction Temperature Using a Thermal Test Device

Authors: B. Arzhanov, A. Correia, P. Delgado, J. Meireles

Abstract:

Due to rising power-loss levels in electronic components, the thermal design of the PCBs (printed circuit boards) of an assembled device has become one of the most important quality factors in electronics. Since some of the leading causes of microelectronic component failure are elevated temperatures, leakage, and thermo-mechanical stress, the reliability of microelectronic packages is a central concern. This article presents an experimental approach to measuring the junction temperature of exposed-pad packages. The implemented solution, in a prototype phase, uses a temperature-sensitive parameter (TSP) to measure temperature directly on the die, validating the numerical results provided by Mechanical APDL (ANSYS Parametric Design Language) under the same conditions. The physical device under test consists of a Thermal Test Chip (TTC-1002) assembled in a QFN cavity and soldered to a test board according to JEDEC standards. Monitoring the voltage drop across a forward-biased diode is an indirect but accurate method of obtaining the junction temperature of the QFN component over an applied power range of 0.3 W to 1.5 W. The temperature distributions on the PCB test board and the QFN cavity surface were monitored by an infrared thermal camera (Goby-384) controlled by the Xeneth software, which also processed the images. The article provides a set-up to monitor the junction temperature of ICs in real time, namely devices with an exposed-pad package (i.e., QFN); presents the PCB layout parameters the designer should use to improve thermal performance; and evaluates the impact of voids in the solder interface on the device junction temperature.
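The forward-voltage TSP method described above converts a measured diode voltage into a junction temperature through a calibrated slope (the K-factor, roughly -2 mV/°C for silicon). A minimal sketch of that conversion; the calibration values below are typical assumptions, not the study's measurements:

```python
def junction_temperature(vf_measured, vf_ref, t_ref=25.0, k_factor=-0.002):
    """Estimate junction temperature Tj (in degrees C) from the forward voltage.

    vf_measured: diode forward voltage at the operating point (V)
    vf_ref:      forward voltage measured at the reference temperature (V)
    t_ref:       reference (calibration) temperature (degrees C)
    k_factor:    dVf/dT slope from calibration (V per degree C), negative for Si
    """
    return t_ref + (vf_measured - vf_ref) / k_factor

# Example: Vf dropped from 0.650 V at 25 degrees C to 0.530 V under load,
# so the die has heated by 0.120 V / 2 mV-per-degree = 60 degrees.
tj = junction_temperature(vf_measured=0.530, vf_ref=0.650)
print(f"Estimated junction temperature: {tj:.1f} C")  # -> 85.0 C
```

In practice the K-factor is obtained by placing the unpowered device in a controlled-temperature environment and recording Vf at several known temperatures, as standard thermal-test procedures (e.g. JEDEC JESD51-1) prescribe.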

Keywords: quad flat no-Lead packages, exposed pads, junction temperature, thermal management and measurements

Procedia PDF Downloads 282