Search results for: correlations
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 823

193 Q Slope Rock Mass Classification and Slope Stability Assessment Methodology Application in Steep Interbedded Sedimentary Rock Slopes for a Motorway Constructed North of Auckland, New Zealand

Authors: Azariah Sosa, Carlos Renedo Sanchez

Abstract:

The development of a new motorway north of Auckland (New Zealand) includes steep rock cuts, from 63 up to 85 degrees, in an interbedded sandstone and siltstone rock mass of the geological unit Waitemata Group (Pakiri Formation), which shows sub-horizontal bedding planes, various sub-vertical joint sets, and a diverse weathering profile. In this kind of rock mass, which can be classified as a weak rock, the definition of the maximum stable geometry is governed not only by the discontinuities and defects evident in the rock; the global stability of the rock slope must also be considered, including in the analysis the rock mass characterisation, the influence of groundwater, the geological evolution, and the weathering processes. Depending on the weakness of the rock and the processes it has undergone, global stability can, in fact, be a more restrictive element than the potential instability of individual blocks along discontinuities. This paper discusses the elements that govern the stability of rock slopes constructed in a rock formation with favourable bedding and distribution of discontinuities (horizontal and vertical) but with weak behaviour in terms of global rock mass characterisation. In this context, classifications such as Q-Slope and the slope stability assessment methodology (SSAM) have proven to be important tools that complement the assessment of global stability, together with analytical tools addressing wedge-type failures and limit equilibrium methods. The paper focuses on the applicability of these two new empirical classifications for evaluating slope stability in 18 already-excavated rock slopes in the Pakiri Formation, through comparison between predicted and observed stability issues and by reviewing the outcome of analytical methods (Rocscience slope stability software suite) against the stability expected from these rock classifications. 
This exercise will help validate such findings and correlations arising from the two empirical methods in order to adjust the methods to the nature of this specific kind of rock mass and provide a better understanding of the long-term stability of the slopes studied.
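As a hedged illustration of how such empirical classifications work, the Q-slope method relates a rock mass quality value to the steepest slope angle expected to remain stable without reinforcement, via the published relation beta = 20 log10(Q-slope) + 65 degrees. The Q-slope values in the sketch below are illustrative only, not ratings measured on the Pakiri slopes.

```python
import math

def stable_slope_angle(q_slope: float) -> float:
    """Steepest stable slope angle (degrees) from the Q-slope relation
    beta = 20 * log10(Q_slope) + 65."""
    return 20.0 * math.log10(q_slope) + 65.0

# Illustrative Q-slope values only, not ratings from the Pakiri Formation
for q in (0.1, 1.0, 10.0):
    print(f"Q-slope = {q:>4}: stable up to ~{stable_slope_angle(q):.0f} degrees")
```

Under this relation, the 63 to 85 degree cuts discussed above would correspond to Q-slope values of roughly 0.8 to 10.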

Keywords: Pakiri formation, Q-slope, rock slope stability, SSAM, weak rock

Procedia PDF Downloads 208
192 Representation and Reality: Media Influences on Japanese Attitudes towards China

Authors: Shuk Ting Kinnia Yau

Abstract:

As China has become more and more influential in the global and geopolitical arena, mutual understanding between Japan and China has become a topic of paramount importance. There have always been tensions between the two countries, but unfortunately, each country tends to blame the other for fanning emotions. This research investigates portrayals of China and the Chinese people in Japanese media such as newspapers, TV news, TV drama, and cinema, focusing on media sources with particularly wide viewership or readership. By doing so, it attempts to detect any general trends in the positive or negative character of such portrayals and to see if they correlate with the results of surveys of attitudes among the general population. To the degree that correlations may be found, the question arises as to whether the media portrayals are a reflection of societal attitudes towards the Chinese, on the one hand, or may be playing a role in promoting such attitudes, on the other. The relationship here is, without doubt, more complex than a simple one-way relationship of cause and effect, but some direction of causality may be suggested by trends in one occurring before or after the other. Evidence will also be sought of possible longer-term trends in media portrayals of China and the Chinese people in Japan during the post-2012 period, i.e., Abe Shinzo’s second term as prime minister, in comparison to earlier periods. Perceptions of Japan’s view of China and the Chinese, both inside and outside the scholarly world, tend to be oversimplified and are often incomplete. This research calls attention to the role played by the media in promoting or undermining Sino-Japanese relations. 
By analyzing the nature and background of images of China and the Chinese people presented in the Japanese media, especially under the second Abe administration, this research seeks to promote a more balanced and comprehensive understanding of attitudes in Japanese society towards its gigantic neighbor. Scholars have seen the increasingly fragile Sino-Japanese relationship as inseparable from the real-world political conflicts that have become more frequent in recent years and have sought to draw a correlation between the two. The influence of the media, however, remains a mostly under-explored domain in the academic world. Against this background, this research aims to provide an enriched scholarly understanding of Japan’s perception of China by investigating to what extent such perception can be seen to be affected by subjective or selective forms of presentation of China found in the Japanese media, or vice versa.

Keywords: Abe Shinzo, China, Japan, media

Procedia PDF Downloads 309
191 Lamb Fleece Quality as an Indicator of Endoparasitism

Authors: Maria Christine Rizzon Cintra, Tâmara Duarte Borges, Cristina Santos Sotomaior

Abstract:

Lamb fleece quality can be influenced by many factors, including welfare, stress, nutritional imbalance, and the presence of ectoparasites. The association between fleece quality and endoparasitism has not, until now, been well established. The present study was undertaken to evaluate whether a visual fleece score could predict lamb parasitosis, with a focus on gastrointestinal parasites. Fleece quality was scored based on a combination of cleanliness and wool cover, using a three-point scale (1-3). Score 1: fleece shows no sign of dirt or contamination, with sufficient fleece for the breed and time of year and whole-body coverage. Score 2: fleece slightly damp or wet, with the coat contaminated by small patches of mud or dung and some loose areas of fleece, but no shed or bald patches more than 10 cm in diameter. Score 3: fleece filthy, very wet and coated in mud or dung, with loose fleece, shed areas, and bald patches greater than 10 cm in diameter; some areas may be trailing. All fleece quality scores (FQS) were assessed with the lamb restrained to ensure close inspection, and were made along the lamb's back, considering just one side of the body. To confirm gastrointestinal parasitism and anemia, faecal egg counts (FEC) and hematocrit were determined for each animal. Lambs were also weighed. All these measurements were taken every 15 days, from 60 to 150 days of life, using 48 Texel x Ile de France crossbred animals. Statistical analysis was performed with the Statgraphics program (version 4.1); significant differences between FQS, weight gain, age, hematocrit, and FEC were assessed using analysis of variance followed by Duncan's test, and correlations were computed with Pearson's test at P<0.05. Results showed that animals scored as ‘3’ in FQS had a lower hematocrit and a higher FEC (P<0.05) than animals scored as ‘1’ (hematocrit: 26, 24, 23 and FEC: 2107, 2962, 4626, respectively, for FQS 1, 2, and 3). 
There were correlations between FQS and FEC (r = 0.16), FQS and hematocrit (r = -0.33), and FQS and weight gain (r = -0.20), indicating that animals with the worst FQS (score 3) had greater gastrointestinal parasite infection, were more anemic, and had lower weight gain than animals scored as ‘1’ or ‘2’. Concerning the lambs' age, animals that received score ‘3’ in FQS maintained gastrointestinal parasite infection over time (P<0.05). It was concluded that the FQS could be an important indicator to include in selective treatment for the control of verminosis in lambs.
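A minimal sketch of the kind of Pearson correlation analysis described above. The data below are synthetic stand-ins: the scores, egg counts, and effect sizes are invented for illustration, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical per-lamb records: fleece quality score (1-3), faecal egg
# count (FEC), and hematocrit (%); effect sizes are illustrative only.
rng = np.random.default_rng(0)
fqs = rng.integers(1, 4, size=48)                        # 48 lambs, scores 1-3
fec = 1500 * fqs + rng.normal(0, 800, size=48)           # FEC rises with FQS
hematocrit = 28 - 1.5 * fqs + rng.normal(0, 2, size=48)  # anemia with worse FQS

for name, var in [("FEC", fec), ("hematocrit", hematocrit)]:
    r, p = stats.pearsonr(fqs, var)
    print(f"FQS vs {name}: r = {r:.2f}, p = {p:.4f}")
```

A positive r against FEC and a negative r against hematocrit reproduce the direction of the reported associations.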

Keywords: fleece, gastrointestinal parasites, sheep, welfare

Procedia PDF Downloads 241
190 Statistical Correlation between Logging-While-Drilling Measurements and Wireline Caliper Logs

Authors: Rima T. Alfaraj, Murtadha J. Al Tammar, Khaqan Khan, Khalid M. Alruwaili

Abstract:

OBJECTIVE/SCOPE (25-75): Caliper logging data provides critical information about wellbore shape and deformations, such as stress-induced borehole breakouts or washouts. Multiarm mechanical caliper logs are often run using wireline, which can be time-consuming, costly, and/or challenging to run in certain formations. To minimize rig time and improve operational safety, it is valuable to develop analytical solutions that can estimate caliper logs using available Logging-While-Drilling (LWD) data without the need to run wireline caliper logs. As a first step, the objective of this paper is to perform statistical analysis using an extensive dataset to identify important physical parameters that should be considered in developing such analytical solutions. METHODS, PROCEDURES, PROCESS (75-100): Caliper logs and LWD data from eleven wells, with a total of more than 80,000 data points, were obtained and imported into a data analytics software for analysis. Several parameters were selected to test their relationship with the measured maximum and minimum caliper logs. These parameters include gamma ray, porosity, shear and compressional sonic velocities, bulk density, and azimuthal density. The data from the eleven wells were first visualized and cleaned. Using the analytics software, several analyses were then performed, including the computation of Pearson's correlation coefficients to show the statistical relationship between the selected parameters and the caliper logs. RESULTS, OBSERVATIONS, CONCLUSIONS (100-200): The results of this statistical analysis showed that some parameters correlate well with the caliper log data. For instance, the bulk density and azimuthal directional densities showed Pearson's correlation coefficients in the range of 0.39 to 0.57, which were relatively high when compared to the correlation coefficients of the caliper data with other parameters. 
Other parameters such as porosity exhibited extremely low correlation coefficients to the caliper data. Various crossplots and visualizations of the data were also demonstrated to gain further insights from the field data. NOVEL/ADDITIVE INFORMATION (25-75): This study offers a unique and novel look into the relative importance and correlation between different LWD measurements and wireline caliper logs via an extensive dataset. The results pave the way for a more informed development of new analytical solutions for estimating the size and shape of the wellbore in real-time while drilling using LWD data.
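A hedged sketch of the screening step described above: computing Pearson correlations between candidate LWD parameters and a caliper log. The dataset and column names below are synthetic assumptions, not the paper's actual log mnemonics or its 80,000-point dataset.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for an LWD/caliper dataset; bulk density is constructed
# to correlate with the caliper while porosity and gamma ray are independent.
rng = np.random.default_rng(1)
n = 1000
bulk_density = rng.normal(2.4, 0.15, n)
caliper_max = 8.5 - 1.2 * (bulk_density - 2.4) + rng.normal(0, 0.1, n)
df = pd.DataFrame({
    "gamma_ray": rng.normal(75, 20, n),
    "porosity": rng.normal(0.15, 0.05, n),
    "bulk_density": bulk_density,
    "caliper_max": caliper_max,
})

# Pearson correlation of each candidate parameter with the caliper log,
# ranked by absolute value to surface the most informative predictors
corr = df.corr(method="pearson")["caliper_max"].drop("caliper_max")
print(corr.sort_values(key=abs, ascending=False))
```

Ranking by absolute correlation mirrors the paper's conclusion that density-type measurements are the strongest single predictors of borehole shape.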

Keywords: LWD measurements, caliper log, correlations, analysis

Procedia PDF Downloads 121
189 Partial Least Squares Regression for High-Dimensional and Highly Correlated Data

Authors: Mohammed Abdullah Alshahrani

Abstract:

The research focuses on investigating the use of partial least squares (PLS) methodology for addressing challenges associated with high-dimensional correlated data. Recent technological advancements have led to experiments producing data characterized by a large number of variables compared to observations, with substantial inter-variable correlations. Such data patterns are common in chemometrics, where near-infrared (NIR) spectrometer calibrations record chemical absorbance levels across hundreds of wavelengths, and in genomics, where thousands of genomic regions' copy number alterations (CNA) are recorded from cancer patients. PLS serves as a widely used method for analyzing high-dimensional data, functioning as a regression tool in chemometrics and a classification method in genomics. It handles data complexity by creating latent variables (components) from original variables. However, applying PLS can present challenges. The study investigates key areas to address these challenges, including unifying interpretations across three main PLS algorithms and exploring unusual negative shrinkage factors encountered during model fitting. The research presents an alternative approach to addressing the interpretation challenge of predictor weights associated with PLS. Sparse estimation of predictor weights is employed using a penalty function combining a lasso penalty for sparsity and a Cauchy distribution-based penalty to account for variable dependencies. The results demonstrate sparse and grouped weight estimates, aiding interpretation and prediction tasks in genomic data analysis. High-dimensional data scenarios, where predictors outnumber observations, are common in regression analysis applications. Ordinary least squares regression (OLS), the standard method, performs inadequately with high-dimensional and highly correlated data. 
Copy number alterations (CNA) in key genes have been linked to disease phenotypes, highlighting the importance of accurate classification of gene expression data in bioinformatics and biology using regularized methods like PLS for regression and classification.

Keywords: partial least squares regression, genetics data, negative shrinkage factors, high-dimensional data, highly correlated data

Procedia PDF Downloads 49
188 The Association of Anthropometric Measurements, Blood Pressure Measurements, and Lipid Profiles with Mental Health Symptoms in University Students

Authors: Ammaarah Gamieldien

Abstract:

Depression is a very common and serious mental illness that has a significant impact on both the social and economic aspects of sufferers worldwide. This study aimed to investigate the association between body mass index (BMI), blood pressure, and lipid profiles with mental health symptoms in university students. Secondary objectives included the associations among the variables themselves (BMI, blood pressure, and lipids), as they are key factors in cardiometabolic disease. Sixty-three (63) students participated in the study. Thirty-two (32) were assigned to the control group (minimal-mild depressive symptoms), while 31 were assigned to the depressive group (moderate to severe depressive symptoms). The Montgomery-Asberg Depression Rating Scale (MADRS) and the Beck Depression Inventory (BDI) were used to assess depressive scores. Anthropometric measurements such as weight (kg), height (m), waist circumference (WC), and hip circumference were measured. Body mass index (BMI) and ratios such as waist-to-hip ratio (WHR) and waist-to-height ratio (WtHR) were also calculated. Blood pressure was measured using an automated AfriMedics blood pressure machine, while lipids were measured using a CardioChek Plus analyzer. Statistical analyses were performed using SPSS. There were no significant associations between anthropometric measurements and depressive scores (P > 0.05). There were no significant correlations between lipid profiles and depression when running a Spearman's rho correlation (P > 0.05). However, total cholesterol and LDL-C were negatively associated with depression, and triglycerides were positively associated with depression after running a point-biserial correlation (P < 0.05). Overall, there were no significant associations between blood pressure measurements and depression (P > 0.05). However, there was a significant moderate positive correlation between systolic blood pressure and MADRS scores in males (P < 0.05). 
Depressive scores correlated positively and strongly with the time participants took to fall asleep. There were also significant associations with regard to the secondary objectives. This study indicates the importance of determining the prevalence of depression among university students in South Africa. If the prevalence of depression and its associated factors are addressed, depressive symptoms in university students may be improved.
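A hedged sketch of the point-biserial correlation used above, which relates a binary group label to a continuous variable. The group sizes mirror the study (32 control, 31 depressive), but the lipid values and effect size are invented for illustration.

```python
import numpy as np
from scipy import stats

# Binary group membership (0 = control, 1 = depressive group) versus a
# continuous measure; distributions are illustrative, not the study's data.
rng = np.random.default_rng(7)
group = np.array([0] * 32 + [1] * 31)                  # 63 students, as in the study
triglycerides = np.where(group == 1,
                         rng.normal(1.4, 0.3, 63),     # higher in depressive group
                         rng.normal(1.1, 0.3, 63))

r_pb, p = stats.pointbiserialr(group, triglycerides)
print(f"point-biserial r = {r_pb:.2f}, p = {p:.4f}")
```

A positive r here reproduces the direction of the reported triglycerides-depression association; the point-biserial coefficient is numerically identical to Pearson's r computed on the 0/1 coding.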

Keywords: depression, blood pressure, body mass index, lipid profiles, mental health symptoms

Procedia PDF Downloads 62
187 Quantum Coherence Sets the Quantum Speed Limit for Mixed States

Authors: Debasis Mondal, Chandan Datta, S. K. Sazim

Abstract:

Quantum coherence is a key resource, like entanglement and discord, in quantum information theory. Wigner-Yanase skew information, which was shown to be the quantum part of the uncertainty, has recently been proposed as an observable measure of quantum coherence. On the other hand, the quantum speed limit (QSL) has been established as an important notion for developing ultra-fast quantum computers and communication channels. Here, we show that these two quantities are related, and thus cast coherence as a resource to control the speed of quantum communication. In this work, we address three basic and fundamental questions. There have been rigorous attempts to achieve tighter evolution time bounds and to generalize them for mixed states. However, we are yet to know: (i) What is the ultimate limit of quantum speed? (ii) Can we measure this speed of quantum evolution in interferometry by measuring a physically realizable quantity? Most of the bounds in the literature are either not measurable in interference experiments or not tight enough. As a result, they cannot be effectively used in experiments on quantum metrology, quantum thermodynamics, and quantum communication, and especially in Unruh effect detection, where a small fluctuation in a parameter needs to be detected. Therefore, a search for the tightest yet experimentally realisable bound is the need of the hour. It would be much more interesting if one could relate various properties of the states or operations, such as coherence, asymmetry, dimension, and quantum correlations, to the QSL. Although such understandings may help us to control and manipulate the speed of communication, apart from particular cases such as the Josephson junction and the multipartite scenario, there has been little advancement in this direction. Therefore, the third question we ask is: (iii) Can we relate such quantities with the QSL? 
In this paper, we address these fundamental questions and show that quantum coherence or asymmetry plays an important role in setting the QSL. An important question in the study of the quantum speed limit is how it behaves under classical mixing and partial elimination of states, since this may help us choose a state or evolution operator appropriately to control the speed limit. In this paper, we address this question and show that the product of the time bound of the evolution and the quantum part of the uncertainty in energy (the quantum coherence or asymmetry of the state with respect to the evolution operator) decreases under classical mixing and partial elimination of states.
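The Wigner-Yanase skew information invoked above has the standard closed form I(rho, H) = -1/2 Tr([sqrt(rho), H]^2), which can be evaluated numerically. The sketch below applies it to two example qubit states of my choosing (not states from the paper): a pure state coherent in the eigenbasis of H, and the maximally mixed state, which carries no coherence.

```python
import numpy as np
from scipy.linalg import sqrtm

def skew_information(rho: np.ndarray, H: np.ndarray) -> float:
    """Wigner-Yanase skew information I(rho, H) = -1/2 Tr([sqrt(rho), H]^2)."""
    sqrt_rho = sqrtm(rho)
    comm = sqrt_rho @ H - H @ sqrt_rho          # commutator [sqrt(rho), H]
    return float(-0.5 * np.trace(comm @ comm).real)

# Pauli-Z "Hamiltonian" and two qubit states: |+><+| (coherent in the Z
# eigenbasis) and the maximally mixed state (no coherence with respect to Z)
H = np.array([[1, 0], [0, -1]], dtype=complex)
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
mixed = 0.5 * np.eye(2, dtype=complex)

print(skew_information(plus, H))    # 1.0: maximal for this pure coherent state
print(skew_information(mixed, H))   # 0.0: maximally mixed state is incoherent
```

The vanishing value for the maximally mixed state illustrates the classical-mixing behaviour the abstract discusses: mixing washes out the quantum part of the uncertainty.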

Keywords: completely positive trace preserving maps, quantum coherence, quantum speed limit, Wigner-Yanase Skew information

Procedia PDF Downloads 353
186 Development and Validation of an Instrument Measuring the Coping Strategies in Situations of Stress

Authors: Lucie Côté, Martin Lauzier, Guy Beauchamp, France Guertin

Abstract:

Stress causes deleterious effects at the physical, psychological, and organizational levels, which highlights the need to use effective coping strategies to deal with it. Several coping models exist, but they don't integrate the different strategies in a coherent way, nor do they take into account new research on emotional coping and acceptance of the stressful situation. To fill these gaps, an integrative model incorporating the main coping strategies was developed. This model arises from a review of the scientific literature on coping, from a qualitative study carried out among workers with low or high levels of stress, and from an analysis of clinical cases. The model allows one to understand under which circumstances the strategies are effective or ineffective and to learn how one might use them more wisely. It includes Specific Strategies in controllable situations (the Modification of the Situation and the Resignation-Disempowerment), Specific Strategies in non-controllable situations (Acceptance and Stubborn Relentlessness), as well as so-called General Strategies (Wellbeing and Avoidance). This study presents the process of development and validation of an instrument measuring coping strategies based on this model. An initial pool of items was generated from the conceptual definitions, and three expert judges validated the content. Of these, 18 items were selected for a short-form questionnaire. A sample of 300 students and employees from a Quebec university was used for the validation of the questionnaire. Concerning the reliability of the instrument, the indices observed for inter-rater agreement (Krippendorff's alpha) and internal consistency (Cronbach's alpha) are satisfactory. To evaluate the construct validity, a confirmatory factor analysis using MPlus supports the existence of a model with six factors. 
The results of this analysis suggest also that this configuration is superior to other alternative models. The correlations show that the factors are only loosely related to each other. Overall, the analyses carried out suggest that the instrument has good psychometric qualities and demonstrates the relevance of further work to establish predictive validity and reconfirm its structure. This instrument will help researchers and clinicians better understand and assess coping strategies to cope with stress and thus prevent mental health issues.
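A minimal sketch of the internal-consistency check mentioned above, using the standard Cronbach's alpha formula. The sample dimensions mirror the study (300 respondents, 18 items), but the responses are synthetic, generated from a single common factor so that consistency comes out high.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic 18-item questionnaire answered by 300 respondents; items share
# one latent trait, so internal consistency should be high
rng = np.random.default_rng(3)
trait = rng.normal(size=(300, 1))
responses = trait + 0.8 * rng.normal(size=(300, 18))

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```

With items this strongly driven by one trait, alpha lands well above the conventional 0.7 threshold; a real six-factor instrument would typically report alpha per subscale.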

Keywords: acceptance, coping strategies, stress, validation process

Procedia PDF Downloads 339
185 Potential of Detailed Environmental Data, Produced by Information and Communication Technology Tools, for Better Consideration of Microclimatology Issues in Urban Planning to Promote Active Mobility

Authors: Živa Ravnikar, Alfonso Bahillo Martinez, Barbara Goličnik Marušić

Abstract:

Climate change mitigation has been formally adopted and announced by countries across the globe, with cities targeting carbon neutrality through various more or less successful, systematic, and fragmentary actions. The article is based on the fact that environmental conditions affect human comfort and the usage of space. Urban planning can, with its sustainable solutions, not only support climate mitigation in terms of reducing global warming at the planetary scale but also enable natural processes that, in the immediate vicinity, produce environmental conditions that encourage people to walk or cycle. The article draws attention to the importance of integrating climate considerations into urban planning, where detailed environmental data play a key role, enabling urban planners to improve or monitor environmental conditions on cycle paths. In a practical aspect, this paper tests a particular ICT tool, a prototype used for gathering environmental data. Data gathering was performed along the cycling lanes in Ljubljana (Slovenia), where the main objective was to assess the applicability of the tool's data within the planning of comfortable cycling lanes. The results suggest that such transportable devices for in-situ measurements can help a researcher interpret detailed environmental information characterized by fine granularity and precise spatial and temporal resolution. Data can be interpreted within human comfort zones, with a graphical representation in the form of a map enabling the environmental conditions to be linked with the spatial context. The paper also provides preliminary results on the potential of such tools for identifying correlations between environmental conditions and different spatial settings, which can help urban planners prioritize interventions by location. 
The paper contributes to multidisciplinary approaches as it demonstrates the usefulness of such fine-grained data for better consideration of microclimatology in urban planning, which is a prerequisite for creating climate-comfortable cycling lanes promoting active mobility.

Keywords: information and communication technology tools, urban planning, human comfort, microclimate, cycling lanes

Procedia PDF Downloads 134
184 Sea of Light: A Game-Based Approach for Evidence-Centered Assessment of Collaborative Problem Solving

Authors: Svenja Pieritz, Jakab Pilaszanovich

Abstract:

Collaborative Problem Solving (CPS) is recognized as one of the most important skills of the 21st century, with a potential impact on education, job selection, and collaborative systems design. Therefore, CPS has been adopted in several standardized tests, including the Programme for International Student Assessment (PISA) in 2015. A significant challenge in evaluating CPS is the underlying interplay of cognitive and social skills, which requires a more holistic assessment. However, the majority of existing tests use a questionnaire-based assessment, which oversimplifies this interplay and undermines ecological validity. Two major difficulties were identified: firstly, the creation of a controllable, real-time environment allowing natural behaviors and communication between at least two people; secondly, the development of an appropriate method to collect and synthesize both cognitive and social metrics of collaboration. This paper proposes a more holistic and automated approach to the assessment of CPS. To address these two difficulties, a multiplayer problem-solving game called Sea of Light was developed: an environment allowing students to deploy a variety of measurable collaborative strategies. This controlled environment enables researchers to monitor behavior through the analysis of game actions and chat. The corresponding statistical model is a combined approach of Natural Language Processing (NLP) and Bayesian network analysis. Social exchanges via the in-game chat are analyzed through NLP and fed into the Bayesian network along with other game actions. This Bayesian network synthesizes evidence to track and update different subdimensions of CPS. Major findings focus on the correlations between the evidence collected through in-game actions, the participants’ chat features, and the CPS self-evaluation metrics. These results give an indication of which game mechanics can best describe CPS evaluation. 
Overall, Sea of Light gives test administrators control over different problem-solving scenarios and difficulties while keeping the student engaged. It enables a more complete assessment based on complex, socio-cognitive information on actions and communication. This tool permits further investigations of the effects of group constellations and personality in collaborative problem-solving.
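A heavily simplified sketch of the Bayesian evidence-accumulation idea described above: each observed game action or chat cue updates a posterior over one CPS subdimension. The paper uses a full Bayesian network over several subdimensions; this single-node update with invented likelihoods only illustrates the mechanism.

```python
import numpy as np

# Posterior over a binary skill state for one hypothetical CPS subdimension
# ("coordination"); all probabilities below are illustrative inventions.
prior = np.array([0.5, 0.5])          # P(skill = low), P(skill = high)

# P(evidence | skill) for each observed cue: rows = cues, cols = (low, high)
likelihoods = np.array([
    [0.3, 0.7],   # helpful chat message detected via NLP
    [0.6, 0.4],   # redundant game action
    [0.2, 0.8],   # successful joint move
])

posterior = prior.copy()
for lk in likelihoods:
    posterior = posterior * lk        # Bayes rule: multiply in the likelihood
    posterior /= posterior.sum()      # renormalize after each update

print(f"P(high coordination | evidence) = {posterior[1]:.3f}")
```

Two supportive cues outweigh one contrary cue, so the posterior shifts toward the "high" state; a Bayesian network generalizes this by letting subdimensions share evidence through conditional dependencies.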

Keywords: Bayesian network, collaborative problem solving, game-based assessment, natural language processing

Procedia PDF Downloads 132
183 Evaluation of Learning Outcomes, Satisfaction and Self-Assessment of Students as a Change Factor in the Polish Higher Education System

Authors: Teresa Kupczyk, Selçuk Mustafa Özcan, Joanna Kubicka

Abstract:

The paper presents results of specialist literature analysis concerning learning outcomes and student satisfaction as a factor of the necessary change in the Polish higher education system. The objective of the empirical research was to determine students’ assessment of learning outcomes, satisfaction of their expectations, as well as their satisfaction with lectures and practical classes held in the traditional form, e-learning and video-conference. The assessment concerned effectiveness of time spent at classes, usefulness of the delivered knowledge, instructors’ preparation and teaching skills, application of tools, studies curriculum, its adaptation to students’ needs and labour market, as well as studying conditions. Self-assessment of learning outcomes was confronted with assessment by lecturers. The indirect objective of the research was also to identify how students assessed their activity and commitment in acquisition of knowledge and their discipline in achieving education goals. It was analysed how the studies held affected the students’ willingness to improve their skills and assessment of their perspectives at the labour market. To capture the changes underway, the research was held at the beginning, during and after completion of the studies. The study group included 86 students of two editions of full-time studies majoring in Management and specialising in “Mega-event organisation”. The studies were held within the EU-funded project entitled “Responding to challenges of new markets – innovative managerial education”. The results obtained were analysed statistically. Average results and standard deviations were calculated. In order to describe differences between the studied variables present during the process of studies, as well as considering the respondents’ gender, t-Student test for independent samples was performed with the IBM SPSS Statistics 21.0 software package. 
Correlations between variables were identified by calculation of Pearson and Spearman correlation coefficients. Research results suggest necessity to introduce some changes in the teaching system applied at Polish higher education institutions, not only considering the obtained outcomes, but also impact on students’ willingness to improve their qualifications constantly, improved self-assessment among students and their opportunities at the labour market.

Keywords: higher education, learning outcomes, students, change

Procedia PDF Downloads 240
182 Biological Hotspots in the Galápagos Islands: Exploring Seasonal Trends of Ocean Climate Drivers to Monitor Algal Blooms

Authors: Emily Kislik, Gabriel Mantilla Saltos, Gladys Torres, Mercy Borbor-Córdova

Abstract:

The Galápagos Marine Reserve (GMR) is an internationally-recognized region of consistent upwelling events, high productivity, and rich biodiversity. Despite its high-nutrient, low-chlorophyll condition, the archipelago has experienced phytoplankton blooms, especially in the western section between Isabela and Fernandina Islands. However, little is known about how climate variability will affect future phytoplankton standing stock in the Galápagos, and no consistent protocols currently exist to quantify phytoplankton biomass, identify species, or monitor for potential harmful algal blooms (HABs) within the archipelago. This analysis investigates physical, chemical, and biological oceanic variables that contribute to algal blooms within the GMR, using 4 km Aqua MODIS satellite imagery and 0.125-degree wind stress data from January 2003 to December 2016. Furthermore, this study analyzes chlorophyll-a concentrations at varying spatial scales: within the greater archipelago, as well as within five smaller bioregions defined by species biodiversity in the GMR. Seasonal and interannual trend analyses, correlations, and hotspot identification were performed. Results demonstrate that chlorophyll-a peaks in two seasons of the year in the GMR, most frequently in September and March, with a notable hotspot in the Elizabeth Bay bioregion. Interannual chlorophyll-a trend analyses revealed the highest peaks in 2003, 2007, 2013, and 2016, and the variables that correlate most strongly with chlorophyll-a include surface temperature and particulate organic carbon. This study recommends future in situ sampling locations for phytoplankton monitoring, including the Elizabeth Bay bioregion. Conclusions from this study contribute to the knowledge of oceanic drivers that catalyze primary productivity and consequently affect species biodiversity within the GMR. 
Additionally, this research can inform policy and decision-making strategies for species conservation and management within bioregions of the Galápagos.
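A hedged sketch of the seasonal-trend step described above: building a monthly climatology from a chlorophyll-a time series covering January 2003 to December 2016. The series below is synthetic, constructed with a bimodal seasonal cycle mirroring the reported March and September peaks, not MODIS data.

```python
import numpy as np
import pandas as pd

# Synthetic monthly chlorophyll-a series, Jan 2003 - Dec 2016, with peaks
# near March and September plus noise; magnitudes are illustrative only
idx = pd.date_range("2003-01", "2016-12", freq="MS")
rng = np.random.default_rng(5)
month = idx.month.to_numpy()
chl = (0.4
       + 0.20 * np.exp(-((month - 3) ** 2) / 2)    # March peak
       + 0.25 * np.exp(-((month - 9) ** 2) / 2)    # September peak
       + rng.normal(0, 0.05, len(idx)))
series = pd.Series(chl, index=idx)

# Climatology: average all Januaries together, all Februaries, and so on
climatology = series.groupby(series.index.month).mean()
print(climatology.round(3))
```

Grouping by calendar month collapses the 14-year record into a 12-value seasonal cycle, from which peak months and hotspot seasons can be read directly.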

Keywords: bioregions, ecological monitoring, phytoplankton, remote sensing

Procedia PDF Downloads 265
181 Analyses of Defects in Flexible Silicon Photovoltaic Modules via Thermal Imaging and Electroluminescence

Authors: S. Maleczek, K. Drabczyk, L. Bogdan, A. Iwan

Abstract:

It is known that industrial applications of solar panels constructed from silicon solar cells require high-efficiency performance. One of the main problems in solar panels is mechanical and structural defects of various kinds, which decrease the generated power. Various techniques are used to analyse defects in solar cells; among them, thermal imaging is a fast and simple method for locating defects. The main goal of this work was to analyze defects in constructed flexible silicon photovoltaic modules via thermal imaging and the electroluminescence method. This work was realized for the GEKON project (No. GEKON2/O4/268473/23/2016) sponsored by The National Centre for Research and Development and The National Fund for Environmental Protection and Water Management. Thermal behavior was observed using a thermographic camera (VIGOcam v50, VIGO System S.A, Poland) with a DC conventional source. Electroluminescence was observed at the Steinbeis Center Photovoltaics (Stuttgart, Germany), using a setup equipped with a camera containing a Si-CCD, 16 Mpix Kodak KAF-16803-type detector. The camera has a typical spectral response in the range 350 - 1100 nm with a maximum QE of 60% at 550 nm. In our work, commercial silicon solar cells with a size of 156 × 156 mm were cut into nine parts (called single solar cells) and used to create photovoltaic modules with a size of 160 × 70 cm (containing about 80 single solar cells). Flexible silicon photovoltaic modules on polyamide or polyester fabric were constructed and investigated, taking into consideration anomalies on the surface of the modules. Thermal imaging provided evidence of visible voltage-activated conduction. In electroluminescence images, two regions are noticeable: darker, where the solar cell is inactive, and brighter, corresponding to correctly working photovoltaic cells. 
The electroluminescence method is non-destructive and gives greater image resolution, thereby allowing a more precise evaluation of microcracks in solar cells after the lamination process. Our study showed good correlations between defects observed by thermal imaging and electroluminescence. Finally, we can conclude that the thermographic examination of large-scale photovoltaic modules allows fast, simple, and inexpensive localization of defects at the level of single solar cells and modules. Moreover, the thermographic camera was also useful for detecting electrical interconnections between single solar cells.

Keywords: electro-luminescence, flexible devices, silicon solar cells, thermal imaging

Procedia PDF Downloads 315
180 Hybrid Fermentation System for Improvement of Ergosterol Biosynthesis

Authors: Alexandra Tucaliuc, Alexandra C. Blaga, Anca I. Galaction, Lenuta Kloetzer, Dan Cascaval

Abstract:

Ergosterol (ergosta-5,7,22-trien-3β-ol), also known as provitamin D2, is the precursor of vitamin D2 (ergocalciferol), because it is converted under UV radiation to this vitamin. The natural sources of ergosterol are mainly the yeasts (Saccharomyces sp., Candida sp.), but it can also be found in fungi (Claviceps sp.) or plants (orchids). In yeast cells, ergosterol is accumulated in membranes, especially in free form in the plasma membrane, but also as esters with fatty acids in membrane lipids. The chemical synthesis of ergosterol does not represent an efficient method for its production; in these circumstances, the most attractive alternative for producing ergosterol at larger scale remains the aerobic fermentation using S. cerevisiae on glucose or by-products from the agriculture or food industry as substrates, in batch or fed-batch operating systems. The aim of this work is to analyze comparatively the influence of aeration efficiency on ergosterol production by S. cerevisiae in batch and fed-batch fermentations, by considering different levels of mixing intensity, aeration rate, and n-dodecane concentration. The effects of the studied factors are quantitatively described by means of the mathematical correlations proposed for each of the two fermentation systems, valid both for the absence and presence of oxygen-vector inside the broth. The experiments were carried out in a laboratory stirred bioreactor, provided with computer-controlled and recorded parameters. n-Dodecane was used as oxygen-vector, and the ergosterol content inside the yeast cells was considered at the fermentation time corresponding to the maximum ergosterol concentration: 9 hrs for the batch process and 20 hrs for the fed-batch one. Ergosterol biosynthesis is strongly dependent on the dissolved oxygen concentration. The hydrocarbon concentration exhibits a significant influence on ergosterol production mainly by accelerating the oxygen transfer rate. 
Regardless of n-dodecane addition, maintaining the glucose concentration at a constant level in the fed-batch process almost tripled the amount of ergosterol accumulated in the yeast cells. In the presence of hydrocarbon, the ergosterol concentration increased by over 50%. The value of oxygen-vector concentration corresponding to the maximum level of ergosterol depends mainly on biomass concentration, due to its negative influence on broth viscosity and on interfacial phenomena, namely the blockage of air bubbles through the adsorption of hydrocarbon droplet–yeast cell associations. Therefore, for the batch process the maximum ergosterol amount was reached at 5% vol. n-dodecane, while for the fed-batch process at 10% vol. hydrocarbon.

Keywords: bioreactors, ergosterol, fermentation, oxygen-vector

Procedia PDF Downloads 188
179 Religious Fundamentalism Prescribes Requirements for Marriage and Reproduction

Authors: Steven M. Graham, Anne V. Magee

Abstract:

Most world religions have sacred texts and traditions that provide instruction about and definitions of marriage, family, and family duties and responsibilities. Given that religious fundamentalism (RF) is defined as the belief that these sacred texts and traditions are literally and completely true to the exclusion of other teachings, RF should be predictive of the attitudes one holds about these topics. The goals of the present research were to: (1) explore the extent to which people think that men and women can be happy without marriage, a significant sexual relationship, a long-term romantic relationship, and having children; (2) determine the extent to which RF is associated with these beliefs; and (3) determine how RF is associated with considering certain elements of a relationship to be necessary for thinking of that relationship as a marriage. In Study 1, participants completed a reliable and valid measure of RF and answered questions about the necessity of various elements for a happy life. Higher RF scores were associated with the belief that both men and women require marriage, a sexual relationship, a long-term romantic relationship, and children in order to have a happy life. In Study 2, participants completed these same measures, and the pattern of results replicated when controlling for overall religiosity. That is, RF predicted these beliefs over and above religiosity. Additionally, participants indicated the extent to which a variety of characteristics were necessary to consider a particular relationship to be a marriage. Controlling for overall religiosity, higher RF scores were associated with the belief that the following were required to consider a relationship a marriage: religious sanctification, a sexual component, sexual monogamy, emotional monogamy, family approval, children (or the intent to have them), cohabitation, and shared finances. 
Interestingly, and unexpectedly, higher RF scores were correlated with less importance placed on mutual consent in order to consider a relationship a marriage. RF scores were uncorrelated with the importance placed on legal recognition or lifelong commitment and these null findings do not appear to be attributable to ceiling effects or lack of variability. These results suggest that RF constrains views about both the importance of marriage and family in one’s life and also the characteristics required to consider a relationship a proper marriage. This could have implications for the mental and physical health of believers high in RF, either positive or negative, depending upon the extent to which their lives correspond to these templates prescribed by RF. Additionally, some of these correlations with RF were substantial enough (> .70) that the relevant items could serve as a brief, unobtrusive measure of RF. Future research will investigate these possibilities.
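The "over and above religiosity" analyses above amount to examining RF-attitude associations while holding a covariate constant. One standard way to do this is partial correlation via regression residuals, sketched below on fully synthetic scores (the variable names and data are invented for illustration).

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y after removing the linear effect of z."""
    def resid(a, b):
        B = np.column_stack([np.ones_like(b), b])   # intercept + covariate
        coef, *_ = np.linalg.lstsq(B, a, rcond=None)
        return a - B @ coef
    return np.corrcoef(resid(x, z), resid(y, z))[0, 1]

# Synthetic demo: RF-like and attitude-like scores both driven by a shared
# religiosity-like factor z, so their raw correlation shrinks once z is
# partialled out.
rng = np.random.default_rng(3)
z = rng.normal(size=300)            # overall religiosity (synthetic)
x = z + rng.normal(size=300)        # RF score (synthetic)
y = z + rng.normal(size=300)        # attitude score (synthetic)
raw = np.corrcoef(x, y)[0, 1]
partial = partial_corr(x, y, z)
```

In the studies above the interesting finding was the reverse pattern: RF-attitude associations that survived the religiosity control.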

Keywords: attitudes about marriage, fertility intentions, measurement, religious fundamentalism

Procedia PDF Downloads 118
178 Leveraging xAPI in a Corporate e-Learning Environment to Facilitate the Tracking, Modelling, and Predictive Analysis of Learner Behaviour

Authors: Libor Zachoval, Daire O Broin, Oisin Cawley

Abstract:

E-learning platforms such as Blackboard have two major shortcomings: limited data capture as a result of the limitations of SCORM (Shareable Content Object Reference Model), and a lack of incorporation of Artificial Intelligence (AI) and machine learning algorithms, which could lead to better course adaptations. With the recent development of the Experience Application Programming Interface (xAPI), many additional types of data can be captured, and that opens a window of possibilities from which online education can benefit. In a corporate setting, where companies invest billions in the learning and development of their employees, some learner behaviours can be troublesome, for they can hinder the knowledge development of a learner. Behaviours that hinder knowledge development also raise ambiguity about a learner's knowledge mastery, specifically those related to gaming the system. Furthermore, a company receives little benefit from its investment if employees pass courses without possessing the required knowledge, and potential compliance risks may arise. Using xAPI and rules derived from a state-of-the-art review, we identified three learner behaviours, primarily related to guessing, in a corporate compliance course. The identified behaviours are: trying each option for a question, specifically for multiple-choice questions; selecting a single option for all the questions on the test; and continuously repeating tests upon failing, as opposed to going over the learning material. These behaviours were detected in learners who repeated the test at least 4 times before passing the course. These findings suggest that gauging the mastery of a learner from multiple-choice test scores alone is a naive approach. 
Thus, next steps will consider the incorporation of additional data points and knowledge estimation models to model a learner's knowledge mastery more accurately, as well as analysis of the data for correlations between knowledge development and the identified learner behaviours. Additional work could explore how learner behaviours could be used to make changes to a course, for example to course content (certain sections of learning material may prove unhelpful to many learners in mastering the intended learning outcomes) or to course design (such as the type and duration of feedback).
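The rule-based behaviour detection described above can be sketched in a few lines. The record format below is a hypothetical simplification (real xAPI statements are richer JSON actor/verb/object documents), and the two rules merely illustrate the "single option for all questions" and "repeated failures" behaviours named in the abstract.

```python
from collections import Counter

# Hypothetical, simplified attempt records distilled from xAPI statements.
attempts = [
    {"learner": "u1", "passed": False, "answers": ["A", "A", "A", "A", "A"]},
    {"learner": "u1", "passed": False, "answers": ["A", "A", "A", "A", "A"]},
    {"learner": "u1", "passed": False, "answers": ["B", "A", "C", "A", "D"]},
    {"learner": "u1", "passed": False, "answers": ["C", "B", "A", "D", "A"]},
    {"learner": "u1", "passed": True,  "answers": ["C", "B", "A", "D", "B"]},
    {"learner": "u2", "passed": True,  "answers": ["C", "B", "A", "D", "B"]},
]

def single_option_for_all(answers):
    """Flag attempts where one option was chosen for every question."""
    return len(set(answers)) == 1

def repeat_failers(records, threshold=4):
    """Learners who failed at least `threshold` times."""
    fails = Counter(r["learner"] for r in records if not r["passed"])
    return {u for u, n in fails.items() if n >= threshold}

flagged = repeat_failers(attempts)
guessing = {r["learner"] for r in attempts if single_option_for_all(r["answers"])}
```

A production system would evaluate such rules over a learning record store rather than an in-memory list, but the detection logic is the same.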

Keywords: artificial intelligence, corporate e-learning environment, knowledge maintenance, xAPI

Procedia PDF Downloads 121
177 Spatial Analysis as a Tool to Assess Risk Management in Peru

Authors: Josué Alfredo Tomas Machaca Fajardo, Jhon Elvis Chahua Janampa, Pedro Rau Lavado

Abstract:

A flood vulnerability index was developed for the Piura River watershed in northern Peru using Principal Component Analysis (PCA) to assess flood risk. The official methodology for assessing risk from natural hazards in Peru was introduced in 1980 and proved effective for aiding complex decision-making. This method relies in part on decision-makers defining subjective correlations between variables to identify high-risk areas. While risk identification and the ensuing response activities benefit from a qualitative understanding of influences, this method does not take advantage of the advent of national and international data collection efforts, which can supplement our understanding of risk. Nor does it take advantage of broadly applied statistical methods such as PCA, which highlight central indicators of vulnerability. Nowadays, information processing is much faster and allows for more objective decision-making tools, such as PCA. The approach presented here develops a tool to improve the current flood risk assessment in the Peruvian basin. Hence, spatial analysis of the census and other datasets provides a better understanding of current land occupation and a basin-wide distribution of services and human populations, a necessary step toward ultimately reducing flood risk in Peru. PCA allows the simplification of a large number of variables into a few factors covering the social, economic, physical, and environmental dimensions of vulnerability. There is a correlation between where people settle and water availability, which is mainly found in rivers. For this reason, a comprehensive view of the population's location around the river basin is necessary to establish flood prevention policies. Grouping the territory into 5x5 km grid cells allows a spatial analysis of flood risk rather than an assessment by political divisions of the territory. 
The index was applied to the Peruvian region of Piura, where several flood events have occurred in recent years and which was one of the most affected regions during ENSO events in Peru. The analysis evidenced inequalities in access to basic services, such as water, electricity, internet, and sewage, between rural and urban areas.
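The PCA-based index construction described above can be sketched with NumPy alone: standardize the indicators, extract components, and weight the component scores. The indicator matrix below is random stand-in data (the study used census and service-access variables per 5x5 km grid cell), and the explained-variance weighting is one simple choice among several used in vulnerability indices.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical indicators per 5x5 km grid cell: population density, access to
# water / electricity / sewage / internet, distance to river, etc.
X = rng.normal(size=(500, 8))

Z = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize each indicator
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)                 # variance share per component
k = 4
scores = Z @ Vt[:k].T                           # component scores per grid cell
# One simple index: components weighted by their explained-variance share
vulnerability = scores @ explained[:k]
```

Cells with the largest index values would then be candidates for high-vulnerability classification and mapped back onto the grid.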

Keywords: assess risk, flood risk, indicators of vulnerability, principal component analysis

Procedia PDF Downloads 186
176 A Unified Approach for Digital Forensics Analysis

Authors: Ali Alshumrani, Nathan Clarke, Bogdan Ghite, Stavros Shiaeles

Abstract:

Digital forensics has become an essential tool in the investigation of cyber and computer-assisted crime. Arguably, given the prevalence of technology and the digital footprints that follow from it, it could have a significant role across almost all crimes. However, the variety of technology platforms (such as computers, mobiles, Closed-Circuit Television (CCTV), Internet of Things (IoT), databases, drones, and cloud computing services), the heterogeneity and volume of data, forensic tool capability, and the investigative cost make investigations both technically challenging and prohibitively expensive. Forensic tools also tend to be siloed into specific technologies, e.g., File System Forensic Analysis Tools (FS-FAT) and Network Forensic Analysis Tools (N-FAT), and many data sources have little to no specialist forensic tooling. Increasingly, it also becomes essential to compare and correlate evidence across data sources, and to do so in an efficient and effective manner that enables an investigator to answer high-level questions of the data in a timely manner without having to trawl through the data and perform the correlation manually. This paper proposes a Unified Forensic Analysis Tool (U-FAT), which aims to establish a common language for electronic information and permit multi-source forensic analysis. Core to this approach is the identification and development of forensic analyses that automate complex data correlations, enabling investigators to investigate cases more efficiently. The paper presents a systematic analysis of major crime categories and identifies which forensic analyses could be used. 
For example, in a child abduction, an investigation team might have evidence from a range of sources, including computing devices (mobile phone, PC), CCTV (potentially a large number of cameras), ISP records, and mobile network cell tower data, in addition to third-party databases such as the National Sex Offender registry and tax records, with the desire to auto-correlate across sources and visualize the results in a cognitively effective manner. U-FAT provides a holistic, flexible, and extensible approach to digital forensics in a technology-, application-, and data-agnostic manner, providing powerful and automated forensic analysis.

Keywords: digital forensics, evidence correlation, heterogeneous data, forensics tool

Procedia PDF Downloads 196
175 The Contribution of Boards to Company Performance via Strategic Management

Authors: Peter Crow

Abstract:

Boards and directors have been subjects of much scholarly research and public interest over several decades, more so since the succession of high-profile company failures of the early 2000s. An array of research outputs including information, correlations, descriptions, models, hypotheses and theories have been reported. While some of this research has shed light on aspects of the board–performance relationship and on board tasks and behaviours, the nature and characteristics of the supposed board–performance relationship remain undetermined. That satisfactory explanations of how boards influence company performance have yet to emerge is a significant blind spot. Yet the board is ultimately responsible for company performance, in accordance with the wishes of shareholders. The aim of this paper is to explore corporate governance and board practice through the lens of strategic management, and to take tentative steps towards a new conception of corporate governance. The findings of a recent longitudinal multiple-case study designed to explore the board's involvement in strategic management are reported. Qualitative and quantitative data were collected from two large quasi-public companies in New Zealand, including from first-hand observations of boards in session, semi-structured interviews with chief executives and chairmen, and the inspection of company and board documentation. A synthetic timeline framework was used to collate the financial, board structure, board activity, and decision-making data, in order to provide a holistic perspective. Decision sequences were identified, and realist techniques of abduction and retroduction were iteratively applied to analyse the multi-year data set. Using several models previously proposed in the literature as a guide, conjectures were formed, tested, and refined, the culmination of which was a provisional model of how boards can influence performance via strategic management. 
The model builds on both existing theoretical perspectives and theoretical models proposed in the corporate governance and strategic management literature. This paper seeks to add to the understanding of how boards can make meaningful contributions to value creation via strategic management, and to comment on the qualities of directors, social interactions in boardrooms and other circumstances within which influence might be possible given the highly contingent relationship between board activity and business performance outcomes.

Keywords: board practice, case study, corporate governance, strategic management

Procedia PDF Downloads 226
174 Forced-Choice Measurement Models of Behavioural, Social, and Emotional Skills: Theory, Research, and Development

Authors: Richard Roberts, Anna Kravtcova

Abstract:

Introduction: The realisation that personality can change over the course of a lifetime has led to a new companion model to the Big Five: the behavioural, emotional, and social skills approach (BESSA). BESSA hypothesizes that this set of skills represents how the individual thinks, feels, and behaves when the situation calls for it, as opposed to traits, which represent how someone tends to think, feel, and behave averaged across situations. The five major skill domains share parallels with the Big Five Factor (BFF) model: creativity and innovation (openness), self-management (conscientiousness), social engagement (extraversion), cooperation (agreeableness), and emotional resilience (emotional stability) skills. We point to noteworthy limitations in the current operationalisation of BESSA skills (i.e., via Likert-type items) and offer a different measurement approach: forced choice. Method: In this forced-choice paradigm, individuals were given three skill items (e.g., managing my time) and asked to select the one they believed they were "best at" and the one they were "worst at". Thurstonian IRT models allow these responses to be placed on a normative scale. Two multivariate studies (N = 1178) were conducted with a 22-item forced-choice version of the BESSA, a published measure of the BFF, and various criteria. Findings: Confirmatory factor analysis of the forced-choice assessment showed acceptable model fit (RMSEA < 0.06), while reliability estimates were reasonable (around 0.70 for each construct). Convergent validity evidence was as predicted (correlations between 0.40 and 0.60 for corresponding BFF and BESSA constructs). Notable was the extent to which the forced-choice BESSA assessment improved upon test-criterion relationships over and above the BFF. 
For example, typical regression models find BFF personality accounting for 25% of the variance in life satisfaction scores; both studies showed incremental gains over the BFF exceeding 6% (i.e., BFF and BESSA together accounted for over 31% of the variance in both studies). Discussion: Forced-choice measurement models offer up the promise of creating equated test forms that may unequivocally measure skill gains and are less prone to fakability and reference bias effects. Implications for practitioners are discussed, especially those interested in selection, succession planning, and training and development. We also discuss how the forced choice method can be applied to other constructs like emotional immunity, cross-cultural competence, and self-estimates of cognitive ability.
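The incremental validity claim above (BESSA adding roughly 6% of variance over the BFF) is a nested-model R² comparison: fit the criterion on the trait block alone, then on traits plus skills, and take the difference. The sketch below illustrates this on entirely synthetic scores and invented weights.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 300
bff = rng.normal(size=(n, 5))                # Big Five trait scores (synthetic)
bessa = 0.5 * bff + rng.normal(size=(n, 5))  # correlated skill scores (synthetic)
y = bff @ rng.normal(size=5) + 0.5 * (bessa @ rng.normal(size=5)) + rng.normal(size=n)

def r_squared(X, y):
    """In-sample R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

r2_traits = r_squared(bff, y)                       # BFF block alone
r2_both = r_squared(np.hstack([bff, bessa]), y)     # BFF + BESSA blocks
increment = r2_both - r2_traits   # variance explained beyond the Big Five
```

In-sample, the increment is never negative for nested models; the substantive question is whether it is large enough (here, the reported 6%) to matter for prediction.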

Keywords: Big Five, forced-choice method, BFF, methods of measurements

Procedia PDF Downloads 94
173 Modeling of the Heat and Mass Transfer in Fluids through Thermal Pollution in Pipelines

Authors: V. Radulescu, S. Dumitru

Abstract:

Introduction: Determination of the temperature field inside a fluid in motion raises many practical issues, especially in the case of turbulent flow. The difficulty is greater when the solid walls have a different temperature than the fluid. Turbulent heat and mass transfer play an essential role in thermal pollution, as was recorded during the damage to the Oradea thermoelectric power plant (still closed today). Basic Methods: Solving the theoretical turbulent thermal pollution problem is particularly difficult. By using semi-empirical theories or by simplifying the assumptions made, the elaboration of the mathematical model for further numerical simulations can be ensured on the basis of experimental measurements. The three zones of flow are analyzed separately: the vicinity of the solid wall, the turbulent transition zone, and the turbulent core. For each zone, the temperature distribution law is determined. The dependence between the Stanton and Prandtl numbers is determined with correction factors, based on experimental measurements. Major Findings/Results: The limit of the laminar thermal sublayer was determined based on the theory of Landau and Levich, using the assumption that the longitudinal component of the velocity pulsation and the pulsation's frequency vary proportionally with the distance to the wall. For the calculation of the average temperature, a formula similar to the solution for velocity is used, by an analogous averaging. On these assumptions, the numerical modeling was performed with a temperature gradient for turbulent flow in pipes (intact or damaged, with cracks) having 4 different diameters, between 200-500 mm, as in the Oradea thermoelectric power plant. 
Conclusions: A superposition was made between the molecular viscosity and the turbulent one, followed by the addition of the molecular and turbulent transfer coefficients, necessary to elaborate the theoretical and numerical models. The laminar boundary layer has a different thickness when flow with heat transfer is compared to flow without a temperature gradient. The obtained results are within a margin of error of 5% between the semi-empirical classical theories and the developed model, based on the experimental data. Finally, a general correlation is obtained between the Stanton number and the Prandtl number for a specific flow (with associated Reynolds number).
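The abstract does not give the paper's fitted St-Pr correlation, but such relations typically take a power-law form in Re and Pr. As an illustration only, the classical Dittus-Boelter correlation for fully turbulent pipe flow (not the authors' model) can be written as:

```python
def stanton_dittus_boelter(re, pr):
    """Illustrative classical correlation (Dittus-Boelter), not the paper's fit:
    Nu = 0.023 * Re**0.8 * Pr**0.4 (heating), and St = Nu / (Re * Pr)."""
    nu = 0.023 * re ** 0.8 * pr ** 0.4
    return nu / (re * pr)

# Example: fully turbulent water-like flow in a pipe (Re = 1e5, Pr = 7)
st = stanton_dittus_boelter(re=1e5, pr=7.0)
```

The paper's contribution is a correlation of this general shape with correction factors calibrated against the experimental data described above.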

Keywords: experimental measurements, numerical correlations, thermal pollution through pipelines, turbulent thermal flow

Procedia PDF Downloads 164
172 Infestation in Omani Date Palm Orchards by Dubas Bug Is Related to Tree Density

Authors: Lalit Kumar, Rashid Al Shidi

Abstract:

Phoenix dactylifera (date palm) is a major crop in many Middle Eastern countries, including Oman. The Dubas bug Ommatissus lybicus is the main pest that affects date palm crops, yet not all plantations are infested, and it is still uncertain why some plantations get infested while others do not. This research investigated whether tree density and the system of planting (random versus systematic) had any relationship with infestation and infestation levels. Remote sensing and Geographic Information Systems were used to determine the density of trees (number of trees per unit area), while infestation levels were determined by manually counting insects on 40 leaflets from two fronds on each tree, with a total of 20-60 trees in each village. Infestation was recorded as the average number of insects per leaflet. For tree density estimation, WorldView-3 scenes, with eight bands and 2 m spatial resolution, were used. The local maxima method, which depends on locating the pixel of highest brightness inside a certain search window, was used to identify and delineate individual trees in the image. This information was then used to determine whether a plantation was random or systematic. Ordinary least squares (OLS) regression was used to test the global correlation between tree density and infestation level, and Geographically Weighted Regression (GWR) was used to find the local spatial relationship. The accuracy of tree detection varied from 83-99% in agricultural lands with systematic planting patterns to 50-70% in natural forest areas. Results revealed that the density of trees in most of the villages was higher than the recommended planting number (120-125 trees/hectare). For infestation correlations, the GWR model showed a good positive significant relationship between infestation and tree density in the spring season, with R² = 0.60, and a medium positive significant relationship in the autumn season, with R² = 0.30. 
In contrast, the OLS model results showed a weaker positive significant relationship in the spring season (R² = 0.02, p < 0.05) and an insignificant relationship in the autumn season (R² = 0.01, p > 0.05). The results showed a positive correlation between infestation and tree density, which suggests that infestation severity increased as the density of date palm trees increased; the GWR result indicates that density alone accounted for about 60% of the variation in infestation. This information can be used by the relevant authorities to better control infestations as well as to manage their pesticide spraying programs.
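The global OLS step described above is a single regression of infestation on density with its R²; a minimal NumPy sketch on synthetic data follows (the density and infestation values are invented, not the Omani survey data). GWR differs by fitting one such regression per location with distance-based weights, which is why it can recover the stronger local relationships reported.

```python
import numpy as np

rng = np.random.default_rng(1)
density = rng.uniform(80, 200, 60)                    # trees/hectare (synthetic)
infestation = 0.05 * density + rng.normal(0, 4, 60)   # insects/leaflet (synthetic)

X = np.column_stack([np.ones_like(density), density])  # intercept + density
beta, *_ = np.linalg.lstsq(X, infestation, rcond=None)
pred = X @ beta
ss_res = np.sum((infestation - pred) ** 2)
ss_tot = np.sum((infestation - infestation.mean()) ** 2)
r2 = 1 - ss_res / ss_tot   # share of infestation variance explained by density
```

beta[1] is the global slope: the expected change in insects per leaflet for each additional tree per hectare.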

Keywords: dubas bug, date palm, tree density, infestation levels

Procedia PDF Downloads 193
171 Performance Evaluation of the CSAN Pronto Point-of-Care Whole Blood Analyzer for Regular Hematological Monitoring During Clozapine Treatment

Authors: Farzana Esmailkassam, Usakorn Kunanuvat, Zahraa Mohammed Ali

Abstract:

Objective: Key barriers in clozapine treatment of treatment-resistant schizophrenia (TRS) include frequent blood draws to monitor for neutropenia, the main drug side effect. WBC and ANC monitoring must occur throughout treatment, and accurate WBC and ANC counts are necessary for clinical decisions to halt, modify, or continue clozapine treatment. The CSAN Pronto point-of-care (POC) analyzer generates white blood cell (WBC) and absolute neutrophil (ANC) counts through image analysis of capillary blood. POC monitoring offers significant advantages over central laboratory testing. This study evaluated the performance of the CSAN Pronto against the Beckman DxH900 hematology laboratory analyzer. Methods: Forty venous samples (EDTA whole blood) with varying concentrations of WBC and ANC, as established on the DxH900 analyzer, were tested in duplicate on three CSAN Pronto analyzers. Additionally, venous and capillary samples were concomitantly collected from 20 volunteers and assessed on the CSAN Pronto and the DxH900 analyzer. The analytical performance was also evaluated, including precision, using liquid quality controls (QCs) as well as patient samples near the medical decision points, and linearity, using a mix of high and low patient samples to create five concentrations. Results: In the precision study for QCs and whole blood, WBC and ANC showed CVs inside the limits established according to manufacturer and laboratory acceptability standards. WBC and ANC were found to be linear across the measurement range with a correlation of 0.99. WBC and ANC from all analyzers correlated well with the DxH900 in venous samples across the tested sample ranges, with a correlation of > 0.95. The mean bias in ANC obtained on the CSAN Pronto versus the DxH900 was 0.07 × 10⁹ cells/L (95% LOA -0.25 to 0.49) for concentrations < 4.0 × 10⁹ cells/L, which include the decision-making cut-offs for continuing clozapine treatment. 
The mean bias in WBC obtained on the CSAN Pronto versus the DxH900 was 0.34 × 10⁹ cells/L (95% LOA -0.13 to 0.72) for concentrations < 5.0 × 10⁹ cells/L. The mean bias was higher (-11% for ANC, 5% for WBC) at higher concentrations. The correlations between capillary and venous samples showed more variability, with a mean bias of 0.20 × 10⁹ cells/L for ANC. Conclusions: The CSAN Pronto showed acceptable performance in WBC and ANC measurements from venous and capillary samples and was approved for clinical use. This testing will facilitate treatment decisions and improve clozapine uptake and compliance.
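The mean bias and 95% limits of agreement quoted above follow the standard Bland-Altman method-comparison calculation: the mean of the paired differences plus or minus 1.96 standard deviations. A minimal sketch on invented paired ANC results (not the study's data) follows.

```python
import numpy as np

# Hypothetical paired ANC results (10^9 cells/L) from POC and lab analyzers
poc = np.array([1.8, 2.1, 3.0, 3.6, 2.5, 1.2, 4.0, 2.9])
lab = np.array([1.7, 2.0, 3.1, 3.4, 2.4, 1.3, 3.8, 2.8])

diff = poc - lab
bias = diff.mean()                           # mean bias between methods
sd = diff.std(ddof=1)                        # SD of the paired differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
```

With the study's real pairs, this computation yields the reported 0.07 (LOA -0.25 to 0.49) for ANC below the clozapine decision cut-offs.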

Keywords: absolute neutrophil counts, clozapine, point of care, white blood cells

Procedia PDF Downloads 94
170 Changes in Skin Microbiome Diversity According to the Age of Xian Women

Authors: Hanbyul Kim, Hye-Jin Kin, Taehun Park, Woo Jun Sul, Susun An

Abstract:

Skin is the largest organ of the human body and provides a diverse habitat for various microorganisms. The ecology of the skin surface selects distinctive sets of microorganisms and is influenced by both endogenous intrinsic factors and exogenous environmental factors. The diversity of the bacterial community in the skin also depends on multiple host factors: gender, age, health status, and location. Among them, age-related changes in skin structure and function are attributable to combinations of endogenous intrinsic factors and exogenous environmental factors. Skin aging is characterized by a decrease in sweat, sebum, and immune functions, resulting in significant alterations in skin surface physiology, including pH, lipid composition, and sebum secretion. The present study provides a comprehensive view of the variation of the skin microbiota and its correlations with age by analyzing and comparing skin microbiome metagenomes using a next-generation sequencing method. Skin bacterial diversity and composition were characterized and compared between two age groups of Xian, Chinese women: younger (20-30 y) and older (60-70 y). A total of 73 healthy women met two conditions: (I) living in Xian, China; (II) maintaining healthy skin status during the period of this study. Based on the Ribosomal Database Project (RDP) database, the skin samples of the 73 participants were dominated by the ten most abundant genera, including Chryseobacterium, Propionibacterium, Enhydrobacter, and Staphylococcus. Although these genera were the most predominant overall, each genus showed a different proportion in each group. The most dominant genus, Chryseobacterium, was relatively more abundant in the young group than in the old group. Similarly, Propionibacterium and Enhydrobacter occupied a higher proportion of the skin bacterial composition of the young group. Staphylococcus, in contrast, was more abundant in the old group. 
The beta diversity, which represents the ratio between regional and local species diversity, differed significantly between the two age groups. Likewise, Principal Coordinate Analysis (PCoA), which places each sample in a two-dimensional framework according to phylogenetic distances computed from the samples' OTU (operational taxonomic unit) profiles, also separated the two groups. Thus, our data suggest that the composition and diversity of the skin microbiome in adult women are largely affected by chronological and physiological skin aging.
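The PCoA step mentioned above is classical multidimensional scaling applied to a between-sample distance matrix. The sketch below is illustrative only: it uses a generic Euclidean distance matrix over toy points, not the study's OTU-based phylogenetic distances.

```python
import numpy as np

def pcoa(D, k=2):
    """Classical MDS / Principal Coordinate Analysis.

    D : (n, n) matrix of pairwise distances between samples.
    Returns the first k principal coordinates for each sample.
    """
    n = D.shape[0]
    # Double-center the squared distance matrix: B = -0.5 * J D^2 J
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    # Eigendecompose; keep the k largest (non-negative) eigenvalues
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]
    vals, vecs = vals[order], vecs[:, order]
    return vecs * np.sqrt(np.maximum(vals, 0))

# Toy example: 4 hypothetical samples at the corners of a unit square
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
coords = pcoa(D, k=2)
# The embedding reproduces the input distances up to rotation/reflection.
```

For Euclidean input the recovered coordinates preserve the pairwise distances exactly; for non-Euclidean ecological distances, small negative eigenvalues are clipped to zero.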

Keywords: next generation sequencing, age, Xian, skin microbiome

Procedia PDF Downloads 155
169 Life Satisfaction of Non-Luxembourgish and Native Luxembourgish Postgraduate Students

Authors: Chrysoula Karathanasi, Senad Karavdic, Angela Odero, Michèle Baumann

Abstract:

It is not only economic determinants that impact life conditions; maintaining a good level of life satisfaction (LS) may also be an important current challenge. In Luxembourg, university students receive financial aid from the government and are then registered at the Centre for Documentation and Information on Higher Education (CEDIES). Luxembourg is built on migration, with almost half its population consisting of foreigners. On this basis, our research aims to analyze the associations between mental health factors (health satisfaction, psychological quality of life, worry), perceived financial situation, career attitudes (adaptability, optimism, knowledge, planning) and LS, for non-Luxembourgish and native postgraduate students. Between 2012 and 2013, postgraduates registered at CEDIES were contacted by post and asked to participate in an online survey in either English or French. The study population comprised 644 respondents. Our statistical analysis excluded those born abroad who had Luxembourgish citizenship and those born in Luxembourg who did not have citizenship. Two groups were formed: one of 147 non-Luxembourgish students and the other of 284 natives. A single item measured LS (1 = not at all satisfied to 10 = very satisfied). Bivariate tests, correlations, and multiple linear regression models were used, in which only significant relationships (p<0.05) were integrated. No differences in LS were found between the two groups (7.8/10 non-Luxembourgish; 8.0/10 natives), and both were higher than the European indicator of 7.2/10 (for 25-34 year-olds). Non-Luxembourgish students were older than natives (29.3 years vs. 26.3 years), perceived their financial situation as more difficult, and a higher percentage of their parents had an education level above a Bachelor's degree (father 59.2% vs. 44.6% for natives; mother 51.4% vs. 33.7% for natives).
In addition, the father's education was related to postgraduates' LS: the higher the education level, the greater its contribution to LS. For native students, by contrast, higher health satisfaction and career optimism scores were associated with a higher LS score. For both groups, LS was linked to mental-health-related factors, perception of their financial situation, career optimism, adaptability, and planning. The higher the psychological quality of life score, the greater the postgraduates' LS. Good health and positive attitudes towards the job market enhanced their LS indicator.
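The bivariate screening step described above (keeping only predictors significant at p < 0.05 before building the multiple regression) can be sketched as follows. The scores, scales, and association here are synthetic and hypothetical, for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 284  # e.g., the size of the native-student group
# Hypothetical scores: psychological quality of life (0-100) and LS (1-10)
qol = rng.normal(70, 10, n)
ls = 4.0 + 0.05 * qol + rng.normal(0, 1, n)  # synthetic positive association

# Bivariate test of the association between the candidate predictor and LS
res = stats.linregress(qol, ls)
# Retain the predictor for the multiple regression only if significant
keep = res.pvalue < 0.05
```

Each candidate predictor would be screened this way, and the retained set entered into the multiple linear regression model.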

Keywords: career attributes, father's education level, life satisfaction, mental health

Procedia PDF Downloads 371
168 Unlocking Health Insights: Studying Data for Better Care

Authors: Valentina Marutyan

Abstract:

Healthcare data mining is a rapidly developing field at the intersection of technology and medicine that has the potential to change our understanding of, and approach to, providing healthcare. It is the process of examining huge amounts of data to extract useful information that can be applied to improve patient care, treatment effectiveness, and overall healthcare delivery. The field looks for patterns, trends, and correlations in a variety of healthcare datasets, such as electronic health records (EHRs), medical imaging, patient demographics, and treatment histories, using advanced analytical approaches. Predictive analysis using historical patient data is a major area of interest in healthcare data mining: it enables doctors to intervene early to prevent problems or improve outcomes, and it assists in early disease detection and customized treatment planning. Doctors can tailor a patient's care by looking at their medical history, genetic profile, and current and previous therapies, so that treatments are more effective and have fewer negative consequences. Beyond helping patients, it improves hospital efficiency, for example by helping hospitals determine how many beds or doctors they require for the number of patients they expect. This project uses models such as logistic regression, random forests, and neural networks for predicting diseases and analyzing medical images. Patients were grouped by algorithms such as k-means, and connections between treatments and patient responses were identified by association rule mining. Time series techniques supported resource management by predicting patient admissions. These methods improved healthcare decision-making and personalized treatment.
Healthcare data mining must also deal with difficulties such as poor data quality, privacy challenges, managing large and complicated datasets, ensuring the reliability of models, managing biases, limited data sharing, and regulatory compliance. Ultimately, data mining in healthcare helps medical professionals and hospitals make better decisions, treat patients more effectively, and work more efficiently; it comes down to using data to improve treatment, make better choices, and simplify hospital operations for all patients.
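As a minimal sketch of the predictive-analysis step described above, the snippet below fits a logistic regression to synthetic tabular data. The feature names and data are hypothetical stand-ins for EHR fields, not the project's actual datasets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical features standing in for EHR fields
age = rng.normal(55, 12, n)       # years
sys_bp = rng.normal(130, 15, n)   # systolic blood pressure, mmHg
# Synthetic outcome: risk increases with age and blood pressure
logit = 0.05 * (age - 55) + 0.04 * (sys_bp - 130)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([age, sys_bp])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
accuracy = model.score(X_test, y_test)  # held-out predictive accuracy
```

The same pipeline extends to the other models named in the abstract (random forests, neural networks) by swapping the estimator.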

Keywords: data mining, healthcare, big data, large amounts of data

Procedia PDF Downloads 76
167 Facial Behavior Modifications Following the Diffusion of the Use of Protective Masks Due to COVID-19

Authors: Andreas Aceranti, Simonetta Vernocchi, Marco Colorato, Daniel Zaccariello

Abstract:

Our study explores the usefulness of implementing facial expression recognition capabilities and using the Facial Action Coding System (FACS) in contexts where the other person is wearing a mask. In the communication process, subjects use a plurality of distinct and autonomous signaling systems; among them, the system of mimic facial movements is worthy of attention. Basic emotion theorists have identified specific and universal patterns of facial expressions related to seven basic emotions (anger, disgust, contempt, fear, sadness, surprise, and happiness) that distinguish one emotion from another. However, due to the COVID-19 pandemic, we have come up against the problem of the lower half of the face being covered by masks and therefore not observable. Facial-emotional behavior is a good starting point for understanding: (1) affective state (such as emotions), (2) cognitive activity (perplexity, concentration, boredom), (3) temperament and personality traits (hostility, sociability, shyness), (4) psychopathology (such as diagnostic information relevant to depression, mania, schizophrenia, and less severe disorders), and (5) psychopathological processes that occur during social interactions between patient and analyst. There are numerous methods to measure facial movements resulting from the action of muscles: for example, the measurement of visible facial actions using coding systems (non-intrusive systems that require an observer who encodes and categorizes behaviors) and the measurement of the electrical discharges of contracting muscles (facial electromyography; EMG). However, the measuring system described by Ekman and Friesen (2002), the Facial Action Coding System (FACS), is the most comprehensive, complete, and versatile.
Our study, carried out on about 1,500 subjects over three years of work, allowed us to highlight how the movements of the hands and upper part of the face change depending on whether the subject wears a mask. We identified specific alterations in the subjects' hand-movement patterns and upper-face expressions while wearing masks compared to when not wearing them. We believe that identifying how body language changes when facial expressions are partially hidden can provide a better understanding of the link between facial and bodily non-verbal language.

Keywords: facial action coding system, COVID-19, masks, facial analysis

Procedia PDF Downloads 75
166 Athlete Coping: Personality Dimensions of Recovery from Injury

Authors: Randall E. Osborne, Seth A. Doty

Abstract:

As participation in organized sports increases, so does the risk of sustaining an athletic injury. These injuries result in missed time in practice and, inevitably, in competition. Recovery time plays a pivotal role in the overall rehabilitation of the athlete. With time and rehabilitation, an athlete's physical injury can be properly treated; however, there are few measures assessing psychological recovery from injury. Even when an athlete has been cleared to return to play, lingering doubt about the injury may remain. Overall, there is a vast difference between being physically cleared to play and being psychologically ready to return to play. Certain personality traits might serve as predictors of an individual's rate of psychological recovery from an injury. The purpose of this study is to explore the correlations between athletes' personality and their recovery from athletic injury. The study examines how locus of control has been used in previous research and how it can inform the current work, as well as the link between hardiness and coping strategies, two closely related concepts that can play a major role in the mental toughness tested here. Competitive trait anxiety is examined to capture perceived anxiety during athletic competition, and the Big 5 personality dimensions and social support are also examined in conjunction with recovery from athletic injury. Athletic injury is a devastating and common occurrence in any sport. Injured athletes often require resources and treatment to return to the field of play, and they become more involved with physical and mental treatment as the length of recovery increases.
It is reasonable to assume that personality traits would be predictive of athletes' recovery from injury. The current study investigated the potential relationship between personality traits and recovery time; more specifically, locus of control, hardiness, social support, competitive trait anxiety, and the Big 5 personality traits. Results indicated that athletes with a higher internal locus of control tended to report being physically ready, and feeling "ready", to return to play faster than those with an external locus of control. Additionally, Openness to Experience (among the Big 5 personality dimensions) was related to the speed of return to play.

Keywords: athlete, injury, personality, readiness to play, recovery

Procedia PDF Downloads 147
165 Dynamic Conformal Arc versus Intensity Modulated Radiotherapy for Image Guided Stereotactic Radiotherapy of Cranial Lesion

Authors: Chor Yi Ng, Christine Kong, Loretta Teo, Stephen Yau, FC Cheung, TL Poon, Francis Lee

Abstract:

Purpose: Dynamic conformal arc (DCA) and intensity modulated radiotherapy (IMRT) are two treatment techniques commonly used for stereotactic radiosurgery/radiotherapy of cranial lesions. IMRT plans usually give better dose conformity, while DCA plans have better dose fall-off. Rapid dose fall-off is preferred for radiotherapy of cranial lesions, but dose conformity is also important. For certain lesions, DCA plans have good conformity, while for others, the conformity of DCA plans is unacceptable and IMRT has to be used. The choice between the two may not be apparent until each plan is prepared and the dose indices compared. We describe a deviation index (DI), a measurement of the deviation of the target shape from a sphere, and test its utility for choosing between the two techniques. Method and Materials: From May 2015 to May 2017, our institute performed stereotactic radiotherapy for 105 patients, treating a total of 115 lesions (64 DCA plans and 51 IMRT plans). Patients were treated with the Varian Clinac iX with HDMLC. The Brainlab ExacTrac system was used for patient setup, and treatment planning was done with Brainlab iPlan RT Dose (version 4.5.4). DCA plans were found to give better dose fall-off in terms of R50% (R50%(DCA) = 4.75 vs. R50%(IMRT) = 5.242), while IMRT plans had better conformity in terms of treatment volume ratio (TVR) (TVR(DCA) = 1.273 vs. TVR(IMRT) = 1.222). The deviation index (DI) is proposed to better facilitate the choice between the two techniques. DI is the ratio of the volume of a 1 mm shell of the PTV to the volume of a 1 mm shell of a sphere of identical volume. DI will be close to 1 for a near-spherical PTV, while a large DI implies a more irregular PTV. To study the functionality of DI, 23 cases were chosen, with PTV volumes ranging from 1.149 cc to 29.83 cc and DI ranging from 1.059 to 3.202. For each case, we prepared a nine-field IMRT plan with one-pass optimization and a five-arc DCA plan.
The TVR and R50% of each case were then compared and correlated with the DI. Results: For the 23 cases, the TVRs and R50% values of the DCA and IMRT plans were examined. Conformity was better for the IMRT plans, with the majority of TVR(DCA)/TVR(IMRT) ratios > 1 (values ranging from 0.877 to 1.538), while dose fall-off was better for the DCA plans, with the majority of R50%(DCA)/R50%(IMRT) ratios < 1. Their correlations with DI were also studied. A strong positive correlation was found between the ratio of TVRs and DI (correlation coefficient = 0.839), while the correlation between the ratio of R50% values and DI was insignificant (correlation coefficient = -0.190). Conclusion: The results suggest that DI can be used as a guide for choosing the planning technique. For DI greater than a certain value, the conformity of DCA plans can be expected to become unacceptable, and IMRT will be the technique of choice.
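The DI defined above can be approximated on a voxelized PTV mask. The sketch below assumes a 1 mm grid and builds the 1 mm shell by an outward binary dilation; the exact shell construction used in the study is not stated, and the sphere and ellipsoid masks here are hypothetical targets, so the resulting numbers are illustrative only.

```python
import numpy as np
from scipy import ndimage

def shell_volume(mask, voxel_mm=1.0):
    """Volume (mm^3) of the ~1 mm outward shell of a binary mask.

    Uses a face-connected dilation, a discrete approximation of a
    1 mm Euclidean expansion.
    """
    struct = ndimage.generate_binary_structure(3, 1)
    iters = max(1, int(round(1.0 / voxel_mm)))
    dilated = ndimage.binary_dilation(mask, structure=struct, iterations=iters)
    return (dilated & ~mask).sum() * voxel_mm ** 3

def deviation_index(mask, voxel_mm=1.0):
    """DI = 1 mm shell volume of the PTV / 1 mm shell volume of an
    equal-volume sphere (computed analytically)."""
    v_ptv = mask.sum() * voxel_mm ** 3
    r_eq = (3.0 * v_ptv / (4.0 * np.pi)) ** (1.0 / 3.0)  # equivalent radius, mm
    v_sphere_shell = 4.0 / 3.0 * np.pi * ((r_eq + 1.0) ** 3 - r_eq ** 3)
    return shell_volume(mask, voxel_mm) / v_sphere_shell

# Hypothetical PTVs on a 1 mm grid: a sphere and an elongated ellipsoid
z, y, x = np.ogrid[-40:41, -40:41, -40:41]
sphere = (x**2 + y**2 + z**2) <= 15**2
ellipsoid = (x**2 / 30**2 + y**2 / 10**2 + z**2 / 10**2) <= 1.0
# The more irregular (elongated) target yields the larger DI.
```

In the continuum limit the sphere's DI approaches 1; digitization inflates both shell volumes slightly, but the ordering between a near-spherical and an irregular PTV is preserved.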

Keywords: cranial lesions, dynamic conformal arc, IMRT, image guided radiotherapy, stereotactic radiotherapy

Procedia PDF Downloads 241
164 Determination of Burnout Levels and Associated Factors of Teachers Working During the COVID-19 Pandemic Period

Authors: Kemal Kehan, Emine Aktas Bajalan

Abstract:

This study was carried out to determine the burnout levels, and related factors, of teachers working in primary schools affiliated with the Turkish Republic of Northern Cyprus (TRNC) Ministry of National Education during the COVID-19 pandemic. The research used a descriptive cross-sectional design. The population consists of 1,071 teachers working in 93 primary schools in 6 central districts affiliated with the TRNC Ministry of National Education in the 2021-2022 academic year. A power analysis indicated that 202 teachers should be reached for 95% confidence (1-α), 95% test power (1-β), and an effect size of d = 0.5. Within the inclusion criteria of the research, the final sample consisted of 300 teachers, recruited by simple random sampling. The data were collected using a sociodemographic data form of 34 questions covering the teachers' sociodemographic characteristics, together with the 22-item Maslach Burnout Scale (MBS). The data were analyzed with descriptive and correlational analyses in the SPSS 22 package program. In the study, 65% of the teachers were women, 68% were married, 84% had a bachelor's degree, 70.33% had children, and 67.67% had dependents. Regarding how the teachers evaluated the COVID-19 pandemic period, 90% said "I am worried about my family's health and the risk of infection", 80% "I feel that my profession does not get the value it deserves", 75.67% "My hopes for the future have started to wane", and 75.33% "I am worried about my own health". The teachers' mean MBS total score was 48.63 ± 8.01, indicating a moderate burnout level, and their mean scores on the sub-dimensions of the scale were also moderate.
Negative correlations were found between the teachers' professional satisfaction scores, both during and before the COVID-19 pandemic, and their scores on the MBS overall and its sub-dimensions. The scale and sub-dimension scores of teachers diagnosed with COVID-19 differed significantly from those of other teachers (p<0.05). In conclusion, it is suggested that social activities be increased and that professional development and promotion opportunities be offered, in order to ensure that teachers are satisfied in their workplaces and to reduce, or entirely prevent, burnout.
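The power analysis reported above can be illustrated with a standard normal-approximation sample-size formula for comparing two group means. The abstract does not state the exact design assumed (one- vs. two-tailed, number of groups), so this is a generic sketch rather than a reproduction of the study's calculation.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(alpha=0.05, power=0.95, d=0.5):
    """Sample size per group for a two-sided, two-group mean comparison,
    via the normal approximation: n = 2 * ((z_{1-a/2} + z_{1-b}) / d)^2."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided critical value
    z_beta = norm.ppf(power)           # power quantile
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

n = n_per_group()  # alpha = 0.05, power = 0.95, d = 0.5
```

Under these assumptions the formula gives 104 per group (208 total), the same order of magnitude as the 202 participants the study reports; the small difference presumably reflects the study's own design choices.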

Keywords: teachers, burnout, maslach burnout scale, pandemic, online education

Procedia PDF Downloads 65