Search results for: linear measurement
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5748

198 Advances and Challenges in Assessing Students’ Learning Competencies in 21st Century Higher Education

Authors: O. Zlatkin-Troitschanskaia, J. Fischer, C. Lautenbach, H. A. Pant

Abstract:

In 21st century higher education (HE), the diversity among students has increased in recent years due to internationalization and higher mobility. Offering and providing equal and fair opportunities based on students’ individual skills and abilities instead of their social or cultural background is one of the major aims of HE. In this context, valid, objective and transparent assessments of students’ preconditions and academic competencies in HE are required. However, as analyses of the current state of research and practice show, a substantial research gap on assessment practices in HE still exists, calling for the development of effective solutions. These demands lead to significant conceptual and methodological challenges. Funded by the German Federal Ministry of Education and Research, the research program 'Modeling and Measuring Competencies in Higher Education – Validation and Methodological Challenges' (KoKoHs) focusses on addressing these challenges in HE assessment practice by modeling and validating objective test instruments. Comprising 16 cross-university collaborative projects, the Germany-wide research program contributes to bridging the research gap in current assessment research and practice by concentrating on practical and policy-related challenges of assessment in HE. In this paper, we present a differentiated overview of existing HE assessments at the national and international level. Based on the state of research, we describe the theoretical and conceptual framework of the KoKoHs program as well as the results of the validation studies, including their key outcomes. More precisely, this includes an insight into more than 40 developed assessments covering a broad range of transparent and objective methods for validly measuring domain-specific and generic knowledge and skills in five major study areas (Economics, Social Science, Teacher Education, Medicine and Psychology). Computer-, video- and simulation-based instruments have been applied and validated with over 20,000 students at the beginning, middle and end of their (bachelor and master) studies at more than 300 HE institutions throughout Germany or during their practical training phase, traineeship or occupation. Focussing on the validity of the assessments, all test instruments have been analyzed comprehensively, using a broad range of methods and observing the validity criteria of the Standards for Educational and Psychological Testing developed by the American Educational Research Association, the American Psychological Association and the National Council on Measurement in Education. The results of the developed assessments presented in this paper provide a valuable basis for predicting students’ skills and abilities at the beginning and the end of their studies as well as their learning development and performance. This allows for a differentiated view of the diversity among students. Based on these research results, practical implications and recommendations are formulated. In particular, appropriate and effective learning opportunities can be created to support the learning development of students, promote their individual potential and reduce knowledge and skill gaps. Overall, the presented research on competency assessment is highly relevant to national and international HE practice.

Keywords: 21st century skills, academic competencies, innovative assessments, KoKoHs

Procedia PDF Downloads 115
197 Theoretical-Methodological Model to Study Vulnerability of Death in the Past from a Bioarchaeological Approach

Authors: Geraldine G. Granados Vazquez

Abstract:

Every human being is exposed to the risk of dying, and some are more susceptible than others depending on the cause. The cause can therefore be understood as the hazard of dying that a group or individual faces, making this irreversible damage the condition of vulnerability. Risk is a dynamic concept; it depends on environmental, social, economic and political conditions, so vulnerability can only be evaluated in terms of relative parameters. This research focuses specifically on building a model that evaluates the risk or propensity of death in past urban societies in connection with the everyday life of individuals, considering that death can be a consequence of two coexisting issues: hazard and the deterioration of the resistance to destruction. One of the most important discussions in bioarchaeology concerns health and life conditions in ancient groups, and researchers are looking for more flexible models to evaluate these topics. Accordingly, this research proposes a theoretical-methodological model that assesses the vulnerability of death in past urban groups. The model is intended to be useful for evaluating the risk of death, considering the sociohistorical context of these groups and their intrinsic biological features. The model proposes four areas to assess vulnerability. The first three areas use statistical methods or quantitative analysis, while the fourth, which corresponds to embodiment, is based on qualitative analysis. The four areas and their techniques are: a) Demographic dynamics. From the distribution of age at the time of death, mortality is analyzed using life tables. From here, four aspects may be inferred: population structure, fertility, mortality-survival, and productivity-migration. b) Frailty. Selective mortality and heterogeneity in frailty can be assessed through the relationship between individual characteristics and the age at death. Two indicators used in contemporary populations to evaluate stress are height and linear enamel hypoplasias. Height estimates may account for the individual’s nutrition and health history in specific groups, while enamel hypoplasias record the individual’s first years of life. c) Inequality. Space reflects the various sectors of society, also in ancient cities. In general terms, the spatial analysis uses measures of association to show the relationship between frailty variables and space. d) Embodiment. The story of every person leaves some evidence on the body, even in the bones. That leads us to think about the individual's dynamic relations in terms of time and space; consequently, the micro-analysis of persons assesses vulnerability from everyday life, where symbolic meaning also plays a major role. In sum, using Mesoamerican examples as case studies, this research demonstrates that not only the intrinsic characteristics related to the age and sex of individuals are conducive to vulnerability, but also the social and historical context that determines their state of frailty before death. An attenuating factor for past groups is that some basic aspects – such as the role they played in everyday life – escape our comprehension and are still under discussion.
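
As a rough illustration of the demographic component (area a), the sketch below builds an abridged life table from an age-at-death distribution. The age classes and counts are hypothetical, and the columns (dx, lx, qx, Lx, ex) follow the standard paleodemographic life-table layout rather than any specific dataset from the study.

```python
# Minimal sketch (hypothetical data): an abridged life table computed from
# an age-at-death distribution, as used in paleodemographic analyses.

# Hypothetical skeletal age classes (start age, width in years) and death counts.
age_classes = [(0, 5), (5, 10), (15, 5), (20, 10), (30, 10), (40, 10), (50, 15)]
deaths      = [  18,      9,       6,       14,       12,       10,        8  ]

n_total = sum(deaths)
survivors = n_total            # l(x): number still alive entering each class
rows = []
for (start, width), dx in zip(age_classes, deaths):
    lx = survivors                         # alive at the start of the interval
    qx = dx / lx if lx else 0.0            # probability of dying in the interval
    Lx = width * (lx - dx / 2)             # person-years lived in the interval
    rows.append((start, width, dx, lx, qx, Lx))
    survivors -= dx

# Life expectancy e(x): remaining person-years divided by survivors at x.
for i, (start, width, dx, lx, qx, Lx) in enumerate(rows):
    Tx = sum(r[5] for r in rows[i:])
    ex = Tx / lx if lx else 0.0
    print(f"age {start:>2}+ : d={dx:>3}  l={lx:>3}  q={qx:.2f}  e={ex:5.1f}")
```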

Keywords: bioarchaeology, frailty, Mesoamerica, vulnerability

Procedia PDF Downloads 199
196 The Touch Sensation: Ageing and Gender Influences

Authors: A. Abdouni, C. Thieulin, M. Djaghloul, R. Vargiolu, H. Zahouani

Abstract:

A decline in the main sensory modalities (vision, hearing, taste, and smell) is well reported to occur with advancing age, and a similar change is expected to occur in touch sensation and perception. In this study, we have focused on touch sensations, highlighting ageing and gender influences with in vivo systems. The touch process can be divided into two main phases. The first phase is the first contact between the finger and the object; during this contact, an adhesive force is created, which is the force needed to permit an initial movement of the finger. In the second phase, the mechanical properties of the finger and its surface topography play an important role in the obtained sensation. In order to understand the age and gender effects on the touch sense, we developed different ideas and systems for each phase. To better characterize the contact, the mechanical properties and the surface topography of the human finger, in vivo studies on the finger pulp of 40 subjects (20 of each gender) in four age groups of 26±3, 35±3, 45±2 and 58±6 years have been performed. To understand the first touch phase, a classical indentation system has been adapted to measure the finger contact properties. The normal force load, the indentation speed, the contact time, the penetration depth and the indenter geometry have been optimized. The penetration depth of a glass indenter is recorded as a function of the applied normal force. The main assessed parameter is the adhesive force F_ad. For the second phase, first, an innovative approach is proposed to characterize the dynamic mechanical properties of the finger. A contactless indentation test inspired by techniques used in ophthalmology has been used. The test principle is to apply an air blast to the finger and to measure the resulting deformation with a linear laser. The advantage of this test is the direct observation of the free return of the skin without any outside influence. The main obtained parameters are the wave propagation speed and the Young's modulus E. Second, negative silicone replicas of the subjects’ fingerprints have been analyzed by laser probe defocusing. A laser diode transmits a light beam onto the surface to be measured, and the reflected signal is returned to a set of four photodiodes. This technology allows three-dimensional images to be reconstructed. In order to study the age and gender effects on the roughness properties, a multi-scale characterization of roughness has been realized by applying the continuous wavelet transform. After decomposing the surface, the method quantifies the arithmetic mean of the surface topography at each scale (SMA). Significant differences in the main parameters are shown with ageing and gender. The comparison between the men and women groups reveals that the adhesive force is higher for women. The results of the mechanical properties show a Young’s modulus that is higher for women and also increases with age. The roughness analysis shows a significant difference as a function of age and gender.
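
The multi-scale roughness descriptor mentioned above can be sketched as follows: a continuous wavelet transform decomposes a measured profile, and the arithmetic mean of the coefficient magnitudes is taken at each scale (the SMA parameter). The profile below is synthetic, and the Ricker (Mexican-hat) wavelet and scale range are assumptions for illustration, not the authors' exact settings.

```python
import numpy as np

# Synthetic 1-D roughness profile (hypothetical stand-in for a fingerprint-replica scan).
x = np.linspace(0.0, 10.0, 2048)                      # position in mm
rng = np.random.default_rng(0)
profile = (0.5 * np.sin(2 * np.pi * x / 2.0)          # waviness
           + 0.1 * np.sin(2 * np.pi * x / 0.15)       # fine roughness
           + 0.02 * rng.standard_normal(x.size))      # measurement noise

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet with width parameter a, sampled on `points` samples."""
    t = np.arange(points) - (points - 1) / 2.0
    norm = 2 / (np.sqrt(3 * a) * np.pi ** 0.25)
    return norm * (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

# Continuous wavelet transform over a range of scales (wavelet choice is an assumption).
widths = np.arange(1, 128)
coeffs = np.array([np.convolve(profile, ricker(min(10 * w, x.size), w), mode='same')
                   for w in widths])                   # shape: (n_scales, n_points)

# SMA-style descriptor: arithmetic mean of |coefficients| at each scale.
sma = np.mean(np.abs(coeffs), axis=1)
for w, s in zip(widths[::16], sma[::16]):
    print(f"scale {w:3d} samples -> mean amplitude {s:.4f}")
```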

Keywords: ageing, finger, gender, touch

Procedia PDF Downloads 243
195 The Location of Park and Ride Facilities Using the Fuzzy Inference Model

Authors: Anna Lower, Michal Lower, Robert Masztalski, Agnieszka Szumilas

Abstract:

Contemporary cities are facing serious congestion and parking problems. In urban transport policy, the introduction of the park and ride (P&R) system is an increasingly popular way of limiting vehicular traffic. Determining the location of P&R facilities is a key aspect of the system. Criteria for assessing the quality of a selected location are usually formulated in general and descriptive terms. Research outsourced to specialists is expensive and time consuming, and most of the focus is on the examination of a few selected places. Practice has shown that choosing the location of these sites intuitively, without a detailed analysis of all the circumstances, often gives negative results: the resulting facilities are not used as expected. Location methods are also widely covered as a research topic in the scientific literature, but the mathematical models built often do not address the problem comprehensively, e.g. by assuming that the city is linear and developed along one important transport corridor. The paper presents a new method in which expert knowledge is applied to a fuzzy inference model. With such a system, even a less experienced person, e.g. an urban planner or official, can benefit from it. The analysis result is obtained in a very short time, so a large number of proposed locations can also be verified quickly. The proposed method is intended for testing car park locations in a city. The paper shows selected examples of locations of P&R facilities in cities planning to introduce P&R. The analysis of existing facilities is also shown in the paper, and the results are confronted with the opinions of system users, with particular emphasis on unpopular locations. The research is executed using the fuzzy inference model which was built and described in more detail in an earlier paper by the authors. The results of the analyses are compared to P&R facility location studies commissioned by the city and to opinions of existing facility users expressed on social networking sites. The research on existing facilities was conducted by means of the fuzzy model, and the results are consistent with actual user feedback. The proposed method proves to be good and does not require the involvement of a large team of experts or large financial contributions for complicated research. The method also provides an opportunity to examine alternative locations of P&R facilities. The performed studies confirm the method. It can be applied in urban planning of P&R facility locations in relation to the accompanying functions. Although the results of the method are approximate, they are not worse than the results of analyses by employed experts. The advantage of this method is ease of use, which simplifies professional expert analysis. The ability to analyze a large number of alternative locations gives a broader view of the problem. It is valuable that the arduous analysis by a team of people can be replaced by the model's calculation. According to the authors, the proposed method is also suitable for implementation on a GIS platform.
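
A minimal sketch of the kind of Mamdani-style fuzzy inference such a model relies on is shown below. The two inputs (distance to a major road, public transport accessibility), their membership functions and the two rules are invented for illustration and do not reproduce the authors' actual rule base.

```python
import numpy as np

# Triangular membership function.
def tri(x, a, b, c):
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def suitability(dist_to_road_km, transit_access):
    """Score a candidate P&R site from two inputs (both scales are assumptions):
    distance to a major road in km (0-5) and public transport accessibility (0-10)."""
    # Fuzzify the inputs.
    road_near  = tri(dist_to_road_km, 0.0, 0.0, 2.0)
    road_far   = tri(dist_to_road_km, 1.0, 5.0, 5.0)
    transit_lo = tri(transit_access, 0.0, 0.0, 5.0)
    transit_hi = tri(transit_access, 4.0, 10.0, 10.0)

    # Illustrative rule base: good sites sit near a road AND near good transit.
    rule_good = min(road_near, transit_hi)              # -> suitability "high"
    rule_poor = max(road_far, transit_lo)               # -> suitability "low"

    # Defuzzify with a centroid over a 0-100 suitability universe.
    u = np.linspace(0, 100, 101)
    mu = np.maximum(np.minimum(rule_good, tri(u, 50, 100, 100)),
                    np.minimum(rule_poor, tri(u, 0, 0, 50)))
    return float(np.sum(u * mu) / (np.sum(mu) + 1e-9))

print(round(suitability(0.5, 8.0), 1))   # near a road, good transit -> high score
print(round(suitability(4.0, 2.0), 1))   # far from a road, poor transit -> low score
```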

Keywords: fuzzy logic inference, park and ride system, P&R facilities, P&R location

Procedia PDF Downloads 310
194 A Column Generation Based Algorithm for Airline Cabin Crew Rostering Problem

Authors: Nan Xu

Abstract:

In airlines, the crew scheduling problem is usually decomposed into two stages: crew pairing and crew rostering. In the crew pairing stage, pairings are generated such that each flight is covered by exactly one pairing and the overall cost is minimized. In the crew rostering stage, the pairings generated in the crew pairing stage are combined with off days, training and other breaks to create individual work schedules. This paper focuses on the cabin crew rostering problem, which is challenging due to its extremely large size and the complex working rules involved. In our approach, the objective of rostering consists of two major components. The first is to minimize the number of unassigned pairings and the second is to ensure fairness to crew members. There are two measures of fairness to crew members: the number of overnight duties and the total fly-hours over a given period. Pairings should be assigned to each crew member so that their actual overnight duties and fly-hours are as close to the expected average as possible. Deviations from the expected average are penalized in the objective function. Since several small deviations are preferred to one large deviation, the penalization is quadratic. Our model of the airline crew rostering problem is based on column generation. The problem is decomposed into a master problem and subproblems. The master problem is modeled as a set partitioning problem in which exactly one roster is picked for each crew member such that the pairings are covered. The restricted linear master problem (RLMP) is considered. The subproblem tries to find columns with negative reduced costs and add them to the RLMP for the next iteration. When no column with negative reduced cost can be found or a stopping criterion is met, the procedure ends. The subproblem is to generate feasible rosters for each crew member. A separate acyclic weighted graph is constructed for each crew member, and the subproblem is modeled as a resource-constrained shortest path problem in this graph, solved with a labeling algorithm. Since the penalization is quadratic, a method to handle the resulting non-additive shortest path problem with a labeling algorithm is proposed and a corresponding domination condition is defined. The major contributions of our model are: 1) we propose a method to deal with the non-additive shortest path problem; 2) our algorithm allows some soft rules to be relaxed, which can improve the coverage rate; 3) multi-thread techniques are used to improve the efficiency of the algorithm when generating lines of work for crew members. In summary, a column generation based algorithm for the airline cabin crew rostering problem is proposed. The objective is to assign a personalized roster to each crew member that minimizes the number of unassigned pairings and ensures fairness to crew members. The algorithm proposed in this paper has been put into production at a major airline in China, and numerical experiments show that it performs well.
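
To make the subproblem concrete, the sketch below runs a toy label-extension pass over a small acyclic pairing graph. Each label carries the resource needed to evaluate the quadratic fairness penalty at the end (here, accumulated fly-hours), because with a non-additive objective a label can only be safely pruned when both its partial cost and its resources are no worse. The graph, resource values, penalty weight and the conservative dominance rule are invented for illustration and are not the paper's actual rule set.

```python
from collections import defaultdict

# Toy acyclic pairing graph: node 0 is the roster start, node 5 the roster end.
# Each arc is (head, additive_cost, fly_hours_of_the_pairing) -- all values invented.
arcs = {
    0: [(1, 2.0, 10.0), (2, 1.0, 25.0)],
    1: [(3, 1.5, 20.0), (4, 2.5, 5.0)],
    2: [(3, 0.5, 5.0),  (4, 1.0, 15.0)],
    3: [(5, 1.0, 10.0)],
    4: [(5, 0.5, 20.0)],
}
TARGET_HOURS = 40.0      # expected average fly-hours for this crew member
PENALTY_W = 0.01         # weight of the quadratic fairness term

# A label is (additive_cost_so_far, fly_hours_so_far, path).
labels = defaultdict(list)
labels[0] = [(0.0, 0.0, [0])]

for node in range(5):                      # nodes are already in topological order
    for cost, hours, path in labels[node]:
        for head, c, h in arcs.get(node, []):
            new = (cost + c, hours + h, path + [head])
            # Conservative dominance for a non-additive objective: prune only if an
            # existing label has the same accumulated hours and a cost no larger.
            dominated = any(oc <= new[0] and oh == new[1] for oc, oh, _ in labels[head])
            if not dominated:
                labels[head].append(new)

# The non-additive objective is evaluated at the sink: cost + w * (hours - target)^2.
best = min(labels[5], key=lambda l: l[0] + PENALTY_W * (l[1] - TARGET_HOURS) ** 2)
print("best roster:", best[2], "cost:", best[0], "fly-hours:", best[1])
```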

Keywords: aircrew rostering, aircrew scheduling, column generation, SPPRC

Procedia PDF Downloads 122
193 Affordable and Environmental Friendly Small Commuter Aircraft Improving European Mobility

Authors: Diego Giuseppe Romano, Gianvito Apuleo, Jiri Duda

Abstract:

Mobility is one of the most important societal needs for amusement, business activities and health. Thus, transport needs are continuously increasing, with a consequent increase in traffic congestion and pollution. Aeronautical efforts aim at smarter use of infrastructure and at introducing greener concepts. A possible solution to address the abovementioned topics is the development of a Small Air Transport (SAT) system, able to guarantee operability from today's underused airfields in an affordable and green way, while also helping to reduce travel time. In the framework of Horizon 2020, the EU (European Union) has funded the Clean Sky 2 SAT TA (Transverse Activity) initiative to address market innovations able to reduce SAT operational cost and environmental impact while ensuring good levels of operational safety. Nowadays, most of the key technologies to improve passenger comfort and to reduce community noise, DOC (Direct Operating Costs) and pilot workload for SAT have reached an intermediate level of maturity, TRL (Technology Readiness Level) 3/4. Thus, the key technologies must be developed, validated and integrated on dedicated ground and flying aircraft demonstrators to reach higher TRL levels (5/6). In particular, SAT TA focuses on the integration at aircraft level of the following technologies [1]: 1) low-cost composite wing box and engine nacelle using OoA (Out of Autoclave) technology, LRI (Liquid Resin Infusion) and advanced automation processes; 2) innovative high-lift devices, allowing aircraft operations from short airfields (< 800 m); 3) affordable small aircraft manufacturing of metallic fuselage using FSW (Friction Stir Welding) and LMD (Laser Metal Deposition); 4) affordable fly-by-wire architecture for small aircraft (CS23 certification rules); 5) more electric systems replacing pneumatic and hydraulic systems (high-voltage EPGDS -Electrical Power Generation and Distribution System-, hybrid de-ice system, landing gear and brakes); 6) advanced avionics for small aircraft, reducing pilot workload; 7) advanced cabin comfort with new interior materials and more comfortable seats; 8) a new generation of turboprop engine with reduced fuel consumption, emissions, noise and maintenance costs for 19-seat aircraft; 9) an alternative diesel engine for 9-seat commuter aircraft. To address the abovementioned market innovations, two different platforms have been designed: the Reference and the Green aircraft. The Reference aircraft is a virtual aircraft designed considering 2014 technologies with an existing engine assuring the requested take-off power; the Green aircraft is designed integrating the technologies addressed in Clean Sky 2. Preliminary integration of the proposed technologies shows an encouraging reduction of emissions and operational costs of small aircraft: about 20% CO2 reduction, about 24% NOx reduction, about 10 dB(A) noise reduction at the measurement point and about 25% DOC reduction. A detailed description of the performed studies, analyses and validations for each technology, as well as the expected benefits at aircraft level, is reported in the present paper.

Keywords: affordable, European, green, mobility, technologies development, travel time reduction

Procedia PDF Downloads 80
192 Influence of the Local External Pressure on Measured Parameters of Cutaneous Microcirculation

Authors: Irina Mizeva, Elena Potapova, Viktor Dremin, Mikhail Mezentsev, Valeri Shupletsov

Abstract:

The local tissue perfusion is regulated by the microvascular tone, which is under the control of a number of physiological mechanisms. Laser Doppler flowmetry (LDF) together with wavelet analysis is the most commonly used technique to study the regulatory mechanisms of cutaneous microcirculation. External factors such as temperature, local pressure of the probe on the skin, etc., influence the blood flow characteristics and are used as physiological tests to evaluate microvascular regulatory mechanisms. Local probe pressure influences the microcirculation parameters measured by optical methods: diffuse reflectance spectroscopy, fluorescence spectroscopy, and LDF. Therefore, further study of probe pressure effects can be useful to improve the reliability of optical measurements. During pressure tests, the variation of the mean perfusion measured by means of LDF is usually estimated. Additional information concerning the physiological mechanisms of the vascular tone regulation system in response to local pressure can be obtained using spectral analysis of LDF samples. The aim of the present work was to develop a protocol and a data processing algorithm appropriate for studying the physiological response to the local pressure test. Involving 6 subjects (20±2 years) and performing 5 measurements for every subject, we estimated the inter-subject and inter-group variability of the response of both the averaged and the oscillating parts of the LDF signal to external surface pressure. The final purpose of the work was to find specific features which can further be used in wider clinical studies. The cutaneous perfusion measurements were carried out by LAKK-02 (SPE LAZMA Ltd., Russia); the skin loading was provided by an originally designed device which allows one to distribute the pressure around the LDF probe. The probe was installed on the dorsal part of the distal phalanx of the index finger. We collected measurements continuously for one hour and varied the loading from 0 to 180 mmHg stepwise with a step duration of 10 minutes. Further, we post-processed the samples using the wavelet transform and traced the energy of oscillations in five frequency bands over time. Weak loading leads to pressure-induced vasodilation, so one should take into account that the perfusion measured under pressure conditions will be overestimated. On the other hand, we revealed a decrease in endothelial-associated fluctuations. Further loading (88 mmHg) induces amplification of pulsations in all frequency bands. We assume that such loading leads to a higher number of closed capillaries, a higher input of arterioles in the LDF signal and, as a consequence, more vivid oscillations, which are mainly formed in arterioles. External pressure higher than 144 mmHg leads to a decrease of the oscillating components; after removing the loading, a very rapid restoration of the tissue perfusion takes place. In this work, we have demonstrated that local skin loading influences the microcirculation parameters measured by optical techniques; this should be taken into account while developing portable electronic devices. The proposed protocol of local loading allows one to evaluate pressure-induced vasodilation as well as to trace the dynamics of blood flow oscillations. This study was supported by the Russian Science Foundation under project N 18-15-00201.
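
A minimal sketch of the post-processing step described above: a Morlet wavelet transform of an LDF-like signal, with the oscillation energy averaged inside the physiological frequency bands commonly used in microcirculation studies (endothelial, neurogenic, myogenic, respiratory, cardiac). The signal is synthetic, and the band limits and sampling rate are conventional literature values assumed here rather than taken from the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 10.0                                    # Hz, LDF sampling rate (assumed)
t = np.arange(0, 1200, 1 / fs)               # 20 minutes of synthetic signal
rng = np.random.default_rng(1)
# Synthetic LDF-like perfusion: slow vasomotion + cardiac pulsation + noise.
ldf = (1.0 * np.sin(2 * np.pi * 0.01 * t) + 0.5 * np.sin(2 * np.pi * 1.1 * t)
       + 0.3 * rng.standard_normal(t.size))

def morlet_power(x, f0, fs, w0=6.0):
    """Mean power of a Morlet wavelet component centred at frequency f0."""
    dt = 1.0 / fs
    s = w0 / (2 * np.pi * f0)                            # scale for centre frequency f0
    tw = np.arange(-4 * s, 4 * s, dt)
    wavelet = np.exp(2j * np.pi * f0 * tw) * np.exp(-tw ** 2 / (2 * s ** 2)) / np.sqrt(s)
    coef = fftconvolve(x, np.conj(wavelet), mode='same') * dt
    return float(np.mean(np.abs(coef) ** 2))

# Conventional microvascular frequency bands (Hz); the limits are literature values.
bands = {'endothelial': (0.0095, 0.021), 'neurogenic': (0.021, 0.052),
         'myogenic': (0.052, 0.145), 'respiratory': (0.145, 0.6), 'cardiac': (0.6, 2.0)}

for name, (lo, hi) in bands.items():
    freqs = np.geomspace(lo, hi, 5)                      # a few centre frequencies per band
    energy = np.mean([morlet_power(ldf, f, fs) for f in freqs])
    print(f"{name:12s} band energy: {energy:.4f}")
```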

Keywords: blood microcirculation, laser Doppler flowmetry, pressure-induced vasodilation, wavelet analyses

Procedia PDF Downloads 126
191 Stent Surface Functionalisation via Plasma Treatment to Promote Fast Endothelialisation

Authors: Irene Carmagnola, Valeria Chiono, Sandra Pacharra, Jochen Salber, Sean McMahon, Chris Lovell, Pooja Basnett, Barbara Lukasiewicz, Ipsita Roy, Xiang Zhang, Gianluca Ciardelli

Abstract:

Thrombosis and restenosis after stenting procedures can be prevented by promoting fast stent wall endothelialisation. It is well known that surface functionalisation with antifouling molecules combined with extracellular matrix proteins is a promising strategy to design biomimetic surfaces able to promote fast endothelialisation. In particular, REDV has gained much attention for its ability to enhance rapid endothelialisation due to its specific affinity with endothelial cells (ECs). In this work, a two-step plasma treatment was performed to polymerize a thin layer of acrylic acid, used subsequently to graft PEGylated-REDV and polyethylene glycol (PEG) at different molar ratios, with the aim of selectively promoting endothelial cell adhesion while avoiding platelet activation. PEGylated-REDV was provided by Biomatik and is formed by 6 PEG monomer repetitions (Chempep Inc.), with an NH2 terminal group. PEG polymers were purchased from Chempep Inc. with two different chain lengths: m-PEG6-NH2 (295.4 Da) with 6 monomer repetitions and m-PEG12-NH2 (559.7 Da) with 12 monomer repetitions. Plasma activation was obtained by operating at 50 W power, 5 min of treatment and an Ar flow rate of 20 sccm. Pure acrylic acid (99%, AAc) vapors were diluted in Ar (flow = 20 sccm) and polymerized by a pulsed plasma discharge, applying an RF discharge power of 200 W and a duty cycle of 10% (on time = 10 ms, off time = 90 ms) for 10 min. After plasma treatment, samples were dipped into a 1-(3-dimethylaminopropyl)-3-ethylcarbodiimide (EDC)/N-hydroxysuccinimide (NHS) solution (ratio 4:1, pH 5.5) for 1 h at 4°C and subsequently dipped in PEGylated-REDV and PEGylated-REDV:PEG solutions at different molar ratios (100 μg/mL in PBS) for 20 h at room temperature. Surface modification was characterized through physico-chemical analyses and in vitro cell tests. The PEGylated-REDV peptide and PEG were successfully bound to the carboxylic groups that are formed on the polymer surface after the plasma reaction. FTIR-ATR spectroscopy, X-ray Photoelectron Spectroscopy (XPS) and contact angle measurements gave a clear indication of the presence of the grafted molecules. The use of PEG as a spacer allowed for an increase in surface wettability, and the effect was more evident with increasing amounts of PEG. Endothelial cells adhered and spread well on the surfaces functionalized with the REDV sequence. In conclusion, a selective coating able to promote a new endothelial cell layer on polymeric stent surfaces was developed. In particular, a thin AAc film was polymerised on the polymeric surface in order to expose –COOH groups, and PEGylated-REDV and PEG were successfully grafted onto the polymeric substrates. The REDV peptide was shown to encourage cell adhesion, with a consequent, expected improvement of the hemocompatibility of these polymeric surfaces in vivo. Acknowledgements: This work was funded by the European Commission 7th Framework Programme under grant agreement number 604251-ReBioStent (Reinforced Bioresorbable Biomaterials for Therapeutic Drug Eluting Stents). The authors thank all the ReBioStent partners for their support in this work.

Keywords: endothelialisation, plasma treatment, stent, surface functionalisation

Procedia PDF Downloads 285
190 Predictive Analytics for Theory Building

Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim

Abstract:

Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) on a single person or unit. It applies empirical methods in statistics, operations research, and machine learning to predict the future, or otherwise unknown events or outcomes, for a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate the hypothesized metabolic syndrome predictors (theory testing). A proposed theoretical model is formed with causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can perform vital roles in explanatory studies, i.e., scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how to use our predictive analytics to support theory building (i.e., hypothesis generation). For this purpose, the study utilized a big data predictive analytics platform based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where items in a basket are fully connected. A cluster is a collection of fully connected items, where the specific group of items has co-occurred in several rows of a data set. Clusters can be ranked using importance metrics, such as node size (number of items), frequency, and surprise (observed frequency vs. expected), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge amounts of transactions can be represented and processed efficiently. For a demonstration, a total of 13,254 metabolic syndrome training observations were plugged into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors, for example, associated with sociodemographics, habits, and activities. Some are intentionally included to get predictive analytics insights on variable selection, such as cancer examination, house type, and vaccination. The platform automatically generates plausible hypotheses (rules) without statistical modeling. The rules are then validated with an external testing dataset including 4,090 observations. The results, as a kind of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation. On the other hand, a set of rules (many estimated equations from a statistical perspective) in this study may imply heterogeneity in a population (i.e., different subpopulations with unique features are aggregated). The next step of theory development, i.e., theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of a phenomenon of interest. If the hypotheses generated are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods utilizing a part of the observations, such as bootstrap resampling with an appropriate sample size.
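
The sketch below illustrates the co-occurrence-graph idea in miniature: items that appear together in the same observation become fully connected nodes, and candidate clusters (here simply item pairs) are ranked by frequency and by a surprise score (observed vs. expected co-occurrence under independence). The transactions and the ranking metrics are simplified stand-ins for the commercial platform used in the study.

```python
from itertools import combinations
from collections import Counter

# Hypothetical observations (each row: binary features present for one person).
rows = [
    {"high_waist", "high_bp", "low_hdl"},
    {"high_waist", "high_bp", "high_glucose"},
    {"high_waist", "low_hdl"},
    {"smoker", "high_bp"},
    {"high_waist", "high_bp", "low_hdl", "high_glucose"},
]
n = len(rows)

item_freq = Counter(i for r in rows for i in r)
pair_freq = Counter(frozenset(p) for r in rows for p in combinations(sorted(r), 2))

# Rank item pairs (2-node "clusters") by frequency and by surprise = observed / expected.
ranked = []
for pair, obs in pair_freq.items():
    a, b = tuple(pair)
    expected = item_freq[a] * item_freq[b] / n     # expected co-occurrences if independent
    ranked.append((obs, obs / expected, tuple(pair)))

for obs, surprise, pair in sorted(ranked, reverse=True):
    print(f"{pair}: observed={obs}, surprise={surprise:.2f}")
```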

Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building

Procedia PDF Downloads 248
189 Implementation of Deep Neural Networks for Pavement Condition Index Prediction

Authors: M. Sirhan, S. Bekhor, A. Sidess

Abstract:

In-service pavements deteriorate with time due to traffic wheel loads, environment, and climate conditions. Pavement deterioration leads to a reduction in serviceability and structural behavior. Consequently, proper maintenance and rehabilitation (M&R) are necessary actions to keep the in-service pavement network at the desired level of serviceability. Due to resource and financial constraints, the pavement management system (PMS) prioritizes the roads most in need of maintenance and rehabilitation action. It recommends a suitable action for each pavement based on the performance and surface condition of each road in the network. Pavement performance and condition are usually quantified and evaluated by different types of roughness-based and distress-based indices. Examples of such indices are the Pavement Serviceability Index (PSI), Pavement Serviceability Ratio (PSR), Mean Panel Rating (MPR), Pavement Condition Rating (PCR), Ride Number (RN), Profile Index (PI), International Roughness Index (IRI), and Pavement Condition Index (PCI). PCI is commonly used in PMS as an indicator of the extent of the distresses on the pavement surface. PCI values range between 0 and 100, where 0 and 100 represent a highly deteriorated pavement and a newly constructed pavement, respectively. The PCI value is a function of distress type, severity, and density (measured as a percentage of the total pavement area). PCI is usually calculated iteratively using the 'Paver' program developed by the US Army Corps of Engineers. The use of soft computing techniques, especially Artificial Neural Networks (ANN), has become increasingly popular in the modeling of engineering problems. ANN techniques have successfully modeled the performance of in-service pavements, due to their efficiency in predicting and solving non-linear relationships and in dealing with large amounts of uncertain data. Typical regression models, which require a pre-defined relationship, can be replaced by ANN, which was found to be an appropriate tool for predicting the different pavement performance indices versus different factors as well. Subsequently, the objective of the presented study is to develop and train an ANN model that predicts PCI values. The model’s input consists of the percentage areas of 11 different damage types: alligator cracking, swelling, rutting, block cracking, longitudinal/transverse cracking, edge cracking, shoving, raveling, potholes, patching, and lane drop-off, at three severity levels (low, medium, high) for each. The developed model was trained using 536,000 samples and tested on 134,000 samples. The samples were collected and prepared by The National Transport Infrastructure Company. The predicted results yielded satisfactory compliance with field measurements. The proposed model predicted PCI values with relatively low standard deviations, suggesting that it could be incorporated into the PMS for PCI determination. It is worth mentioning that the most influential variables for PCI prediction are damages related to alligator cracking, swelling, rutting, and potholes.
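
A minimal sketch of such a network is shown below using scikit-learn's MLPRegressor: 33 inputs (11 distress types at 3 severity levels, each as a percentage of pavement area) mapped to a single PCI output. The layer sizes, synthetic training data and target function are placeholders; the study's actual architecture and its 536,000/134,000 sample split are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_samples, n_features = 2000, 33            # 11 distress types x 3 severity levels

# Synthetic distress areas (% of total pavement area) -- placeholder data only.
X = rng.uniform(0, 15, size=(n_samples, n_features))

# Placeholder "ground truth": PCI drops as the severity-weighted distress extent grows.
severity_weights = np.tile([0.2, 0.5, 1.0], 11)         # low / medium / high severity
y = np.clip(100 - X @ severity_weights / 3.0 + rng.normal(0, 2, n_samples), 0, 100)

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
model.fit(X[:1600], y[:1600])                            # simple train/test split

pred = model.predict(X[1600:])
rmse = float(np.sqrt(np.mean((pred - y[1600:]) ** 2)))
print(f"held-out RMSE on synthetic data: {rmse:.2f} PCI points")
```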

Keywords: artificial neural networks, computer programming, pavement condition index, pavement management, performance prediction

Procedia PDF Downloads 115
188 A Nonlinear Feature Selection Method for Hyperspectral Image Classification

Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo

Abstract:

For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon, due to the difficulty of collecting training samples. Hence, many researchers have developed feature selection methods, such as F-score and HSIC (Hilbert-Schmidt Independence Criterion), to improve hyperspectral image classification. However, most of them only consider the class separability in the original space, i.e., a linear class separability. In this study, we propose a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability is formed by a generalized RBF kernel with different bandwidths for different features, and it considers both the within-class separability and the between-class separability. A genetic algorithm is applied to tune these bandwidths such that the smallest within-class separability and the largest between-class separability are obtained simultaneously. This indicates that the corresponding feature space is more suitable for classification, and the corresponding nonlinear classification boundary can separate the classes very well. These optimal bandwidths also show the importance of the bands for hyperspectral image classification. The reciprocals of these bandwidths can be viewed as weights of the bands: the smaller the bandwidth, the larger the weight of the band, and the more important it is for classification. Hence, the descending order of the reciprocals of the bandwidths gives an order for selecting appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of the samples were randomly selected to form the training dataset, and all non-background samples were used to form the testing dataset. The support vector machine was applied to classify these testing samples based on the selected feature subsets. In the experiments on the Indian Pine Site data set with 220 bands, the highest accuracies obtained by applying the proposed method, F-score, and HSIC are 0.8795, 0.8795, and 0.87404, respectively. However, the proposed method selects 158 features, whereas F-score and HSIC select 168 features and 217 features, respectively. Moreover, the classification accuracy increases dramatically when only the first few features are used. The classification accuracies with respect to feature subsets of 10 features, 20 features, 50 features, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the selected features (110 features) of the proposed method, the corresponding classification accuracy (0.84168) is close to the highest classification accuracy, 0.8795. For the other two hyperspectral image data sets, the PAVIA data set and the Salinas A data set, similar results are obtained. These results illustrate that our proposed method can efficiently find feature subsets that improve hyperspectral image classification. One can apply the proposed method to determine a suitable feature subset first, according to specific purposes; then researchers need only use the corresponding sensors to obtain the hyperspectral image and classify the samples. This can not only improve classification performance but also reduce the cost of obtaining hyperspectral images.
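
The core ingredient, a generalized RBF kernel with one bandwidth per band, can be sketched as below: bands with small bandwidths (large reciprocals) contribute more to the kernel distance and are ranked as more important. The random data, the fixed candidate bandwidths and the simple class-separability score are illustrative stand-ins for the paper's genetic-algorithm search.

```python
import numpy as np

rng = np.random.default_rng(0)

def generalized_rbf(A, B, bandwidths):
    """K(x, z) = exp(-sum_d (x_d - z_d)^2 / (2 * sigma_d^2)) with a bandwidth per feature."""
    diff = A[:, None, :] - B[None, :, :]
    return np.exp(-np.sum(diff ** 2 / (2 * bandwidths ** 2), axis=2))

# Two toy classes in 5 "bands"; only bands 0 and 1 actually separate the classes.
n, d = 60, 5
class_a = rng.normal(0.0, 1.0, (n, d)); class_a[:, :2] -= 2.0
class_b = rng.normal(0.0, 1.0, (n, d)); class_b[:, :2] += 2.0

def separability(bw):
    """Larger is better: mean within-class similarity minus mean between-class similarity."""
    within = 0.5 * (generalized_rbf(class_a, class_a, bw).mean()
                    + generalized_rbf(class_b, class_b, bw).mean())
    between = generalized_rbf(class_a, class_b, bw).mean()
    return within - between

# In the paper a genetic algorithm tunes the bandwidths; here we just compare two guesses.
bw_uniform = np.full(d, 2.0)
bw_tuned = np.array([2.0, 2.0, 50.0, 50.0, 50.0])   # large sigma downweights noisy bands
print("separability, uniform bandwidths:", round(separability(bw_uniform), 3))
print("separability, tuned bandwidths:  ", round(separability(bw_tuned), 3))

# Band ranking: descending order of 1/sigma_d (smaller bandwidth -> more important band).
ranking = np.argsort(-1.0 / bw_tuned)
print("band importance order (most to least):", ranking.tolist())
```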

Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine

Procedia PDF Downloads 244
187 TRAC: A Software Based New Track Circuit for Traffic Regulation

Authors: Jérôme de Reffye, Marc Antoni

Abstract:

Following the development of the ERTMS system, we think it is interesting to develop another software-based track circuit system which would fit secondary railway lines, with an easy-to-implement solution and low sensitivity to rail-wheel impedance variations. We called this track circuit 'Track Railway by Automatic Circuits' (TRAC). To be internationally implemented, this system must not have any mechanical component and must be compatible with existing track circuit systems. For example, the system is independent from the French 'Joints Isolants Collés' that isolate track sections from one another, and it is equally independent from the components used in Germany called counting axles, in French 'compteur d’essieux.' This track circuit is fully interoperable. Such universality is obtained by replacing the mechanical train detection system with space-time filtering of the train position. The various track sections are defined by the frequency of a continuous signal, and the set of frequencies related to the track sections is a set of orthogonal functions in a Hilbert space. Thus the failure probability of track section separation is precisely calculated on the basis of the signal-to-noise ratio. The SNR is a function of the level of traction current conducted by the rails. This is the reason why we developed a very powerful algorithm to reject noise and jamming and to obtain an SNR compatible with the precision required for the track circuit and with the SIL 4 level. The SIL 4 level is thus reachable by an adjustment of the set of orthogonal functions. Our major contributions to railway signalling engineering are: i) train space localization is precisely defined by a calibration system; the operation bypasses the GSM-R radio system of the ERTMS system, and the track circuit is naturally protected against radio-type jammers. After the calibration operation, the track circuit is autonomous. ii) a mathematical topology adapted to train space localization, following the train through linear time filtering of the received signal. Track sections are numerically defined and can be modified with a software update. The system was numerically simulated, and the results were beyond our expectations: we achieved a precision of one meter. Rail-ground and rail-wheel impedance sensitivity analyses gave excellent results. The results are now complete and ready to be published. This work was initiated as a research project of the French Railways, developed by the Pi-Ramses Company under SNCF contract, and required five years to obtain the results. This track circuit already reaches Level 3 of the ERTMS system, and it will be much cheaper to implement and to operate. The traffic regulation is based on variable-length track sections. As the traffic grows, the maximum speed is reduced and the track section lengths decrease. This is possible if the elementary track section is correctly defined for the minimum speed and if every track section is able to emit at variable frequencies.
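
To illustrate the signal idea, the sketch below assigns each track section a sinusoid from an orthogonal set (harmonics of a common base frequency over the observation window), adds noise standing in for traction-current interference, and recovers the occupied section by correlating the received signal against each reference. The frequencies, noise level and detection rule are illustrative assumptions, not the system's actual parameters.

```python
import numpy as np

fs = 5000.0                          # Hz, sampling rate (assumed)
T = 0.2                              # s, observation window
t = np.arange(0, T, 1 / fs)

# One carrier frequency per track section; harmonics of 1/T are mutually orthogonal
# over the window, which keeps the sections separable in the Hilbert-space sense.
base = 1.0 / T
section_freqs = {f"section_{k}": 100 * base * k for k in range(1, 6)}   # 500..2500 Hz

rng = np.random.default_rng(42)
transmitted = "section_3"
received = np.sin(2 * np.pi * section_freqs[transmitted] * t)
received += 1.5 * rng.standard_normal(t.size)        # traction-current noise (assumed level)

# Matched correlation against each section's reference signal.
scores = {}
for name, f in section_freqs.items():
    ref_i = np.sin(2 * np.pi * f * t)
    ref_q = np.cos(2 * np.pi * f * t)                 # quadrature handles unknown phase
    scores[name] = np.hypot(ref_i @ received, ref_q @ received) / t.size

detected = max(scores, key=scores.get)
snr_db = 10 * np.log10(0.5 / 1.5 ** 2)               # signal power / assumed noise power
print(f"detected: {detected}  (true: {transmitted}),  per-sample SNR ~ {snr_db:.1f} dB")
```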

Keywords: track section, track circuits, space-time crossing, adaptive track section, automatic railway signalling

Procedia PDF Downloads 311
186 Enhancing Strategic Counter-Terrorism: Understanding How Familial Leadership Influences the Resilience of Terrorist and Insurgent Organizations in Asia

Authors: Andrew D. Henshaw

Abstract:

The research examines the influence of familial and kinship based leadership on the resilience of politically violent organizations. Organizations of this type frequently fight in the same conflicts though are called 'terrorist' or 'insurgent' depending on political foci of the time, and thus different approaches are used to combat them. The research considers them correlated phenomena with significant overlap and identifies strengths and vulnerabilities in resilience processes. The research employs paired case studies to examine resilience in organizations under significant external pressure, and achieves this by measuring three variables. 1: Organizational robustness in terms of leadership and governance. 2. Bounce-back response efficiency to external pressures and adaptation to endogenous and exogenous shock. 3. Perpetuity of operational and attack capability, and political legitimacy. The research makes three hypotheses. First, familial/kinship leadership groups have a significant effect on organizational resilience in terms of informal operations. Second, non-familial/kinship organizations suffer in terms of heightened security transaction costs and social economics surrounding recruitment, retention, and replacement. Third, resilience in non-familial organizations likely stems from critical external supports like state sponsorship or powerful patrons, rather than organic resilience dynamics. The case studies pair familial organizations with non-familial organizations. Set 1: The Haqqani Network (HQN) - Pair: Lashkar-e-Toiba (LeT). Set 2: Jemaah Islamiyah (JI) - Pair: The Abu Sayyaf Group (ASG). Case studies were selected based on three requirements, being: contrasting governance types, exposure to significant external pressures and, geographical similarity. The case study sets were examined over 24 months following periods of significantly heightened operational activities. This enabled empirical measurement of the variables as substantial external pressures came into force. The rationale for the research is obvious. Nearly all organizations have some nexus of familial interconnectedness. Examining familial leadership networks does not provide further understanding of how terrorism and insurgency originate, however, the central focus of the research does address how they persist. The sparse attention to this in existing literature presents an unexplored yet important area of security studies. Furthermore, social capital in familial systems is largely automatic and organic, given at birth or through kinship. It reduces security vetting cost for recruits, fighters and supporters which lowers liabilities and entry costs, while raising organizational efficiency and exit costs. Better understanding of these process is needed to exploit strengths into weaknesses. Outcomes and implications of the research have critical relevance to future operational policy development. Increased clarity of internal trust dynamics, social capital and power flows are essential to fracturing and manipulating kinship nexus. This is highly valuable to external pressure mechanisms such as counter-terrorism, counterinsurgency, and strategic intelligence methods to penetrate, manipulate, degrade or destroy the resilience of politically violent organizations.

Keywords: Counterinsurgency (COIN), counter-terrorism, familial influence, insurgency, intelligence, kinship, resilience, terrorism

Procedia PDF Downloads 291
185 Predicting Career Adaptability and Optimism among University Students in Turkey: The Role of Personal Growth Initiative and Socio-Demographic Variables

Authors: Yagmur Soylu, Emir Ozeren, Erol Esen, Digdem M. Siyez, Ozlem Belkis, Ezgi Burc, Gülce Demirgurz

Abstract:

The aim of the study is to determine the predictive power of personal growth initiative and socio-demographic variables (such as sex, grade, and working condition) on the career adaptability and optimism of bachelor students at Dokuz Eylul University in Turkey. According to career construction theory, career adaptability is viewed as a psychosocial construct which refers to an individual’s resources for dealing with current and expected tasks, transitions and traumas in their occupational roles. Career optimism is defined as the expectation of positive outcomes in one's future career development, or as the tendency to emphasize the positive aspects of career events and to feel comfortable about the career planning process. Personal growth initiative (PGI) is defined as being proactive about one’s personal development; additionally, personal growth is defined as active and intentional engagement in the process of personal growth. A study conducted on college students revealed that individuals with a high self-development orientation make more effort to discover the requirements of the profession and of workplaces than individuals with a low level of personal development orientation. University life is a period in which social relations and the importance of academic activities increase, students make efforts to progress along their career paths, and the environment offers students opportunities for self-realization. For these reasons, personal growth initiative is potentially an important variable that plays a key role for an individual during the transition phase from university to working life. Based on the review of the literature, it is expected that an individual's personal growth initiative, sex, grade, and working condition will significantly predict career adaptability. In the relevant literature, relatively few studies are available on the career adaptability and optimism of university students, and most of the existing studies have been carried out with limited respondents. In this study, the authors aim to conduct comprehensive research with a large representative sample of bachelor students at Dokuz Eylul University, Izmir, Turkey. To date, the personal growth initiative and career development constructs have been predominantly discussed in Western contexts, where individualistic tendencies are likely to be seen. Thus, the examination of the same relationship within the context of Turkey, where collectivistic cultural characteristics are more observable, is expected to offer valuable insights and provide an important contribution to the literature. The participants in this study comprised 1500 undergraduate students from thirteen faculties of Dokuz Eylul University. Stratified and random sampling methods were adopted for the selection of the participants. The Personal Growth Initiative Scale-II and the Career Futures Inventory were used as the major measurement tools. In the data analysis stage, several statistical analyses, including regression analysis, one-way ANOVA and t-tests, will be conducted to reveal the relationships among the constructs under investigation.
At the end of this project, we will be able to determine the levels of career adaptability and optimism of university students at varying degrees, creating fertile ground for intervention techniques that contribute to the emergence of a healthier and more productive young generation in a psycho-social sense.

Keywords: career optimism, career adaptability, personal growth initiative, university students

Procedia PDF Downloads 389
184 Comparative Appraisal of Polymeric Matrices Synthesis and Characterization Based on Maleic versus Itaconic Anhydride and 3,9-Divinyl-2,4,8,10-Tetraoxaspiro[5.5]-Undecane

Authors: Iordana Neamtu, Aurica P. Chiriac, Loredana E. Nita, Mihai Asandulesa, Elena Butnaru, Nita Tudorachi, Alina Diaconu

Abstract:

In the last decade, the attention of many researchers has focused on the synthesis of innovative 'intelligent' copolymer structures with great potential for different uses. This considerable scientific interest is stimulated by the possibility of significant improvements in the physical, mechanical, thermal and other important specific properties of these materials. Functionalization of a polymer during synthesis, by designing a suitable composition with the desired properties and applications, is recognized as a valuable tool. In this work, a comparative study is presented of the properties of the new copolymers poly(maleic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane) and poly(itaconic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane), obtained by radical polymerization in dioxane using 2,2′-azobis(2-methylpropionitrile) as free-radical initiator. The comonomers are able to generate special effects such as network formation, biodegradability and biocompatibility, gel formation capacity, binding properties, amphiphilicity, good oxidative and thermal stability, good film-forming ability, and temperature and pH sensitivity. Maleic anhydride (MA) and its isostructural analog itaconic anhydride (ITA), as polyfunctional monomers, are widely used in the synthesis of reactive macromolecules with linear, hyperbranched and self-assembled structures to prepare high-performance engineering, bioengineering and nanoengineering materials. The incorporation of spiroacetal groups into polymer structures improves the solubility and the adhesive properties, induces good oxidative and thermal stability, and yields good fibers or films with good flexibility and tensile strength. Also, the spiroacetal rings induce interactions at the ether oxygen, such as hydrogen bonds or coordinate bonds with other functional groups, determining bulkiness and stiffness. The synthesized copolymers are analyzed by DSC, oscillatory and rotational rheological measurements and dielectric spectroscopy, with the aim of characterizing the heating behavior and the solution viscosity as a function of shear rate and temperature, and of investigating the relaxation processes and the motion of the functional groups present in the side chain around the main chain or around the bonds of the side chain. Acknowledgments: This work was financially supported by a grant of the Romanian National Authority for Scientific Research, CNCS-UEFISCDI, project number PN-II-132/2014, 'Magnetic biomimetic supports as alternative strategy for bone tissue engineering and repair' (MAGBIOTISS).

Keywords: poly(maleic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane), poly(itaconic anhydride-co-3,9-divinyl-2,4,8,10-tetraoxaspiro[5.5]undecane), DSC, oscillatory and rotational rheological analysis, dielectric spectroscopy

Procedia PDF Downloads 204
183 Forecasting Thermal Energy Demand in District Heating and Cooling Systems Using Long Short-Term Memory Neural Networks

Authors: Kostas Kouvaris, Anastasia Eleftheriou, Georgios A. Sarantitis, Apostolos Chondronasios

Abstract:

To achieve the objective of almost zero carbon energy solutions by 2050, the EU needs to accelerate the development of integrated, highly efficient and environmentally friendly solutions. In this direction, district heating and cooling (DHC) emerges as a viable and more efficient alternative to conventional, decentralized heating and cooling systems, enabling a combination of more efficient renewable and competitive energy supplies. In this paper, we develop a forecasting tool for near real-time local weather and thermal energy demand predictions for an entire DHC network. In this fashion, we are able to extend the functionality and to improve the energy efficiency of the DHC network by predicting and adjusting the heat load that is distributed from the heat generation plant to the connected buildings by the heat pipe network. Two case-studies are considered; one for Vransko, Slovenia and one for Montpellier, France. The data consists of i) local weather data, such as humidity, temperature, and precipitation, ii) weather forecast data, such as the outdoor temperature and iii) DHC operational parameters, such as the mass flow rate, supply and return temperature. The external temperature is found to be the most important energy-related variable for space conditioning, and thus it is used as an external parameter for the energy demand models. For the development of the forecasting tool, we use state-of-the-art deep neural networks and more specifically, recurrent networks with long-short-term memory cells, which are able to capture complex non-linear relations among temporal variables. Firstly, we develop models to forecast outdoor temperatures for the next 24 hours using local weather data for each case-study. Subsequently, we develop models to forecast thermal demand for the same period, taking under consideration past energy demand values as well as the predicted temperature values from the weather forecasting models. The contributions to the scientific and industrial community are three-fold, and the empirical results are highly encouraging. First, we are able to predict future thermal demand levels for the two locations under consideration with minimal errors. Second, we examine the impact of the outdoor temperature on the predictive ability of the models and how the accuracy of the energy demand forecasts decreases with the forecast horizon. Third, we extend the relevant literature with a new dataset of thermal demand and examine the performance and applicability of machine learning techniques to solve real-world problems. Overall, the solution proposed in this paper is in accordance with EU targets, providing an automated smart energy management system, decreasing human errors and reducing excessive energy production.
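
As a rough sketch of the forecasting setup, the snippet below trains a single-layer LSTM on sliding 24-hour windows of outdoor temperature and past demand to predict the next hour's thermal load. The synthetic series, window length, network size and training settings are placeholders rather than the models actually fitted for Vransko and Montpellier.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)

# Synthetic hourly series standing in for DHC data: outdoor temperature and thermal demand.
hours = np.arange(24 * 365)
temperature = (10 + 10 * np.sin(2 * np.pi * hours / (24 * 365))
               + 4 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size))
demand = np.clip(60 - 2.5 * temperature, 0, None) + rng.normal(0, 3, hours.size)

# Sliding windows: 24 hours of (temperature, demand) -> demand at the next hour.
WINDOW = 24
features = np.stack([temperature, demand], axis=1)
X = np.stack([features[i:i + WINDOW] for i in range(len(hours) - WINDOW)])
y = demand[WINDOW:]

split = int(0.8 * len(X))
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, 2)),
    tf.keras.layers.LSTM(32),                  # LSTM cells capture temporal dependencies
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")
model.fit(X[:split], y[:split], epochs=5, batch_size=64, verbose=0)

mae = model.evaluate(X[split:], y[split:], verbose=0)
print(f"held-out MAE on synthetic demand: {mae:.2f} (arbitrary heat-load units)")
```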

Keywords: machine learning, LSTMs, district heating and cooling system, thermal demand

Procedia PDF Downloads 116
182 Associations Between Pornography Use Motivations and Sexual Satisfaction in Gender Diverse and Cisgender Individuals in the 43-Country International Sex Survey

Authors: Aurélie Michaud, Émilie Gaudet, Mónika Koós, Léna Nagy, Zsolt Demetrovics, Shane W. Kraus, Marc N. Potenza, Beáta Bőthe

Abstract:

Pornography use is prevalent among adults worldwide. Prior studies have assessed the associations between pornography use frequency and sexual satisfaction, in cisgender and heterosexual individuals, with mixed results. However, measuring pornography use solely by pornography use frequency is problematic, as it can lead to disregarding important contextual factors that may be related to pornography use’s potential effects. Pornography use motivations (PUMs) represent key predictors of sexual behaviors. Yet, their associations with different indicators of sexual wellbeing have yet to be extensively studied. This cross-cultural study examined the links between the eight PUMs most often reported in the general population (i.e. sexual pleasure, sexual curiosity, emotional distraction or suppression, fantasy, stress reduction, boredom avoidance, lack of sexual satisfaction, and self-exploration) and sexual satisfaction in gender diverse and cisgender individuals. Given the lack of scientific data on associations between individuals’ PUMs and sexual satisfaction, these links were examined in an exploratory manner. A total of 43 countries from five continents were included in the International Sex Survey (ISS). A secure online platform was used to collect self-report, anonymous data from 82,243 participants (39.6% men, 57% women, 3.4% gender diverse individuals; M = 32.4 years, SD = 12.5). Gender-based differences in levels of sexual pleasure, sexual curiosity, emotional distraction, fantasy, stress reduction, boredom avoidance, lack of sexual satisfaction, and self-exploration PUMs were examined using one-way ANOVAs. Then, for each gender group, the associations between each PUM and sexual satisfaction were examined using multiple linear regression, controlling for frequency of masturbation. One-way ANOVAs indicated significant differences between men, women, and gender diverse individuals on all PUMs. For sexual pleasure, sexual curiosity, fantasy, boredom avoidance, lack of sexual satisfaction, emotional distraction, and stress reduction PUMs, men showed the highest scores, followed by gender-diverse individuals, and women. However, for self-exploration, gender-diverse individuals had higher average scores than men. For all PUMs, women’s average scores were the lowest. After controlling for frequency of masturbation, for all genders, sexual pleasure, sexual curiosity and boredom avoidance were significant positive predictors of sexual satisfaction, while lack of sexual satisfaction PUM was a significant negative predictor. Fantasy, stress reduction and self-exploration PUMs were positive significant predictors of sexual satisfaction, and fantasy was a negative significant predictor, but only for women. Findings highlight important gender differences in regards to the main motivations underlying pornography use and their relations to sexual satisfaction. While men and gender diverse individuals show similar motivation profiles, woman report a particularly unique experience, with fantasy, stress reduction and self-exploration being associated to their sexual satisfaction. This work outlines the importance of considering the role of pornography use motivations when studying the links between pornography viewing and sexual well-being, and may provide basis for gender-based considerations when working with individuals seeking help for their pornography use or sexual satisfaction.

Keywords: pornography, sexual satisfaction, cross-cultural, gender diversity

Procedia PDF Downloads 81
181 Development of a Human Skin Explant Model for Drug Metabolism and Toxicity Studies

Authors: K. K. Balavenkatraman, B. Bertschi, K. Bigot, A. Grevot, A. Doelemeyer, S. D. Chibout, A. Wolf, F. Pognan, N. Manevski, O. Kretz, P. Swart, K. Litherland, J. Ashton-Chess, B. Ling, R. Wettstein, D. J. Schaefer

Abstract:

Skin toxicity is poorly detected during preclinical studies, and drug-induced side effects in humans, such as rashes, hyperplasia, or more serious events like bullous pemphigoid or toxic epidermal necrolysis, represent an important hurdle for clinical development. In vitro keratinocyte-based epidermal skin models are suitable for the detection of chemical-induced irritancy but do not recapitulate the biological complexity of full skin and fail to detect potentially serious side effects. Normal healthy skin explants may represent a valuable complementary tool, having the advantage of retaining the full skin architecture and the resident immune cell diversity. This study investigated several conditions for the maintenance of good morphological structure after several days of culture and for the retention of phase II metabolism for 24 hours in skin explants in vitro. Human skin samples were collected with informed consent from patients undergoing plastic surgery and immediately transferred to and processed in our laboratory by removing the underlying dermal fat. Punch biopsies of 4 mm diameter were cultured at an air-liquid interface using transwell filters. Different culture conditions, such as the effects of calcium, temperature, and cultivation media, were tested over a period of 14 days, and explants were examined histologically after hematoxylin and eosin staining. Our results demonstrated that the use of Williams E medium at 32°C maintained the physiological integrity of the skin for approximately one week. Upon prolonged incubation, the upper layers of the epidermis became thickened and some dead cells appeared. Interestingly, these effects were prevented by the addition of epidermal growth factor receptor (EGFR) inhibitors such as afatinib or erlotinib. Phase II metabolic activities of the skin, namely glucuronidation (4-methylumbelliferone), sulfation (minoxidil), N-acetylation (p-toluidine), catechol methylation (2,3-dihydroxynaphthalene), and glutathione conjugation (chlorodinitrobenzene), were analyzed using LC-MS. Our results demonstrated that the human skin explants possess metabolic activity for a period of at least 24 hours for all the substrates tested. A time course for glucuronidation of 4-methylumbelliferone was performed, and a linear correlation was obtained over a period of 24 hours. Longer-term culture studies will indicate the possible evolution of such metabolic activities. In summary, these results demonstrate that human skin explants maintain a normal structure for several days in vitro and are metabolically active for at least the first 24 hours. Hence, with further characterisation, this model may be suitable for the study of drug-induced toxicity.

Keywords: human skin explant, phase II metabolism, epidermal growth factor receptor, toxicity

Procedia PDF Downloads 264
180 Creative Resolutions to Intercultural Conflicts: The Joint Effects of International Experience and Cultural Intelligence

Authors: Thomas Rockstuhl, Soon Ang, Kok Yee Ng, Linn Van Dyne

Abstract:

Intercultural interactions are often challenging and fraught with conflicts. To shed light on how to interact effectively across cultures, academics and practitioners alike have advanced a plethora of intercultural competence models. However, the majority of this work has emphasized distal outcomes, such as job performance and cultural adjustment, rather than proximal outcomes, such as how individuals resolve inevitable intercultural conflicts. As a consequence, the processes by which individuals negotiate challenging intercultural conflicts are not well understood. The current study advances theorizing on intercultural conflict resolution by exploring antecedents of how people resolve intercultural conflicts. To this end, we examine creativity (the generation of novel and useful ideas) in the context of resolving cultural conflicts in intercultural interactions. Based on the dual-identity theory of creativity, we propose that individuals with greater international experience will display greater creativity and that this relationship is accentuated by individuals’ cultural intelligence. Two studies test these hypotheses. The first study comprises 84 senior university students drawn from an international organizational behavior course. The second study replicates the findings of the first study in a sample of 89 executives from eleven countries. Participants in both studies provided protocols of their strategies for resolving two intercultural conflicts, as depicted in two multimedia vignettes of challenging intercultural work-related interactions. Two research assistants, trained in intercultural management but blind to the study hypotheses, coded all strategies for their novelty and usefulness, following scoring procedures for creativity tasks. Participants also completed online surveys of demographic background information, international experience, and cultural intelligence. Hierarchical linear modeling showed that, surprisingly, while international experience is positively associated with usefulness, it is unrelated to novelty. Further, a person’s cultural intelligence strengthens the positive effect of international experience on usefulness and mitigates the effect of international experience on novelty. Our findings offer an important theoretical extension of the dual-identity theory of creativity by identifying cultural intelligence as an individual-difference moderator that qualifies the relationship between international experience and creative conflict resolution. In terms of novelty, individuals higher in cultural intelligence seem less susceptible to the rigidity effects of international experience; perhaps they are more capable of assessing which aspects of culture are relevant and of applying relevant experiences when they brainstorm novel ideas. In terms of usefulness, individuals high in cultural intelligence are better able to leverage their international experience to assess the viability of their ideas, because their richer and more organized cultural knowledge structure allows them to assess possible options more efficiently and accurately. In sum, our findings suggest that cultural intelligence is an important and promising intercultural competence that fosters creative resolutions to intercultural conflicts. We hope that our findings stimulate future research on creativity and conflict resolution in intercultural contexts.
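As a rough sketch of the moderation logic described above, a regression with an interaction term captures how cultural intelligence qualifies the effect of international experience. The Python code below uses hypothetical variable names; the study itself used hierarchical linear modeling:

import pandas as pd
import statsmodels.formula.api as smf

def fit_moderation(df: pd.DataFrame, outcome: str):
    """Test whether cultural intelligence (cq) moderates the effect of
    international experience (intl_exp) on a creativity outcome
    ('novelty' or 'usefulness'); the intl_exp:cq coefficient carries
    the moderation effect."""
    return smf.ols(f"{outcome} ~ intl_exp * cq", data=df).fit()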

Keywords: cultural Intelligence, intercultural conflict, intercultural creativity, international experience

Procedia PDF Downloads 132
179 Description of Decision Inconsistency in Intertemporal Choices and Representation of Impatience as a Reflection of Irrationality: Consequences in the Field of Personalized Behavioral Finance

Authors: Roberta Martino, Viviana Ventre

Abstract:

Empirical evidence has, over time, confirmed that the behavior of individuals is inconsistent with the descriptions provided by the Discounted Utility Model, an essential reference for calculating the utility of intertemporal prospects. The model assumes that individuals calculate the utility of an intertemporal prospect by adding up the values of all outcomes, each obtained by multiplying the cardinal utility of the outcome by the discount function estimated at the time the outcome is received. The shape of the discount function is crucial for the preferences of the decision maker because it represents the perception of the future, and its trend produces temporally consistent or temporally inconsistent preferences. In particular, because different formulations of the discount function lead to different conclusions in predicting choice, the descriptive ability of models with a hyperbolic trend is greater than that of linear or exponential models. Choices that are suboptimal from any temporal point of view are the consequence of this mechanism, whose psychological factors are encapsulated in the trend of the discount rate. In addition, analyzing the decision-making process from a psychological perspective, there is an equivalence between the selection of dominated prospects and a degree of impatience that decreases over time. The first part of the paper describes and investigates the anomalies of the Discounted Utility Model by relating the cognitive distortions of the decision maker to the emotional factors that are generated during the evaluation and selection of alternatives. Specifically, by studying the degree to which impatience decreases, it is possible to quantify how the psychological and emotional mechanisms of the decision maker result in a lack of decision persistence. This description also presents inconsistency as the consequence of an inconsistent attitude towards time-delayed choices. The second part of the paper presents an experimental phase in which we show the relationship between inconsistency and impatience in different contexts. Analysis of the degree to which impatience decreases confirms the influence of the decision maker's emotional impulses for each anomaly of the utility model discussed in the first part of the paper. This work provides an application in the field of personalized behavioral finance. Indeed, the numerous behavioral diversities, evident even in the degrees of decrease in impatience observed in the experimental phase, support the idea that optimal strategies may not satisfy individuals in the same way. With the aim of homogenizing categories of investors and providing a personalized approach to advice, the results proven in the experimental phase are used, in a complementary way with information from the field of behavioral finance, to implement the Analytic Hierarchy Process model in intertemporal choices, useful for strategic personalization. In the construction of the Analytic Hierarchy Process, the degree of decrease in impatience is understood as reflecting irrationality in decision-making and is therefore used for the construction of weights between anomalies and behavioral traits.
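For reference, the standard formulations underlying this discussion can be written as follows (assumed here in their textbook form; the paper may use different parameterizations):

\[
U(x_0,\dots,x_T) \;=\; \sum_{t=0}^{T} u(x_t)\, D(t),
\qquad
D_{\mathrm{exp}}(t) = \delta^{t},
\qquad
D_{\mathrm{hyp}}(t) = \frac{1}{1 + k t},
\]

where exponential discounting D_exp yields time-consistent preferences, while hyperbolic discounting D_hyp, whose implied discount rate declines with delay, reproduces the decreasing impatience and preference reversals discussed above.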

Keywords: analytic hierarchy process, behavioral finance, financial anomalies, impatience, time inconsistency

Procedia PDF Downloads 44
178 The Effect of Manure Loaded Biochar on Soil Microbial Communities

Authors: T. Weber, D. MacKenzie

Abstract:

This paper describes an advanced simulation environment for electronic systems (microcontrollers, operational amplifiers, and FPGAs). The simulation was used for the behaviour of non-linear dynamic systems with the required observer structure, working with parallel real-time simulation based on a state-space representation. The proposed model was also used for electrodynamic effects, including ionising effects and eddy-current distribution. With the proposed method, it is possible to calculate the spatial distribution of the electromagnetic fields in such systems in real time; the spatial temperature distribution may also be used for further purposes. With this system, uncertainties and disturbances may be determined. This provides a more precise estimation of the states of the required system and, additionally, an estimation of the ionising disturbances that arise due to radiation effects in space systems. The results have also shown that a system can be developed specifically for the real-time calculation (estimation) of radiation effects alone. Electronic systems can be damaged by charged-particle flux in a space or radiation environment. A Total Ionising Dose (TID) of 1 Gy and Single-Event Transient (SET)-free operation up to 50 MeV·cm²/mg may assure certain functions. Single-Event Latch-up (SEL) results from the placement of several transistors in the shared substrate of an integrated circuit: ionising radiation can activate an additional parasitic thyristor, and this short circuit between semiconductor elements can destroy the device if no protective measures are taken. Single-Event Burnout (SEB), on the other hand, increases the current between the drain and source of a MOSFET and destroys the component in a short time. A Single-Event Gate Rupture (SEGR) can also destroy the gate dielectric of a semiconductor. In order to be able to react to these processes, the presence of ionising radiation and the accumulated dose must be calculated within a short time. For this purpose, sensors may be used for a realistic evaluation of the diffusion and ionising effects in the test system. A Peltier element is used to evaluate dynamic temperature increases (dT/dt), from which a measure of the ionisation processes, and thus of the radiation, is detected. In addition, a piezo element may be used to record highly dynamic vibrations and oscillations caused by impacts of charged-particle flux. All available sensors are also used to calibrate the spatial distributions: from the measured values and the known locations of the sensors, the entire distribution in space can be calculated retroactively or more accurately. With this information, the type of ionisation and its direct effect on the system can be determined, and preventive processes can be activated, up to and including shutdown. The results show possibilities for performing higher-quality and faster simulations independent of the space system and radiation environment. The paper additionally gives an overview of the diffusion effects and their mechanisms.
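As a minimal illustration of the parallel observer-based state-space simulation mentioned above, the Python sketch below runs a Luenberger observer alongside a plant model; the system matrices and observer gain are assumed toy values, not the paper's actual model:

import numpy as np

A = np.array([[0.0, 1.0], [-2.0, -0.5]])  # assumed system matrix
B = np.array([[0.0], [1.0]])              # assumed input matrix
C = np.array([[1.0, 0.0]])                # assumed output matrix
L = np.array([[1.2], [0.8]])              # assumed observer gain

def observer_step(x_hat, u, y, dt=1e-3):
    """One explicit-Euler update of the state estimate x_hat, driven by
    the input u and corrected by the measurement residual y - C x_hat."""
    dx = A @ x_hat + B @ u + L @ (y - C @ x_hat)
    return x_hat + dt * dx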

Keywords: cattle, biochar, manure, microbial activity

Procedia PDF Downloads 80
177 Academic Achievement in Argentinean College Students: Major Findings in Psychological Assessment

Authors: F. Uriel, M. M. Fernandez Liporace

Abstract:

In the last decade, academic achievement in higher education has become a topic on the agenda in Argentina, given the high rates of adjustment problems, academic failure, and dropout, and the low graduation rates in the context of massive classes and traditional teaching methods. Psychological variables, such as perceived social support, academic motivation, and learning styles and strategies, have much to offer, since their measurement by tests allows a proper diagnosis of their influence on academic achievement. Framed within a major research program, several studies analysed multiple samples totalling 5,135 students attending Argentinean public universities. The first goal was the identification of statistically significant differences in the psychological variables (perceived social support, learning styles, learning strategies, and academic motivation) by age, gender, and degree of academic advance (freshmen versus sophomores). Thus, an inferential group-differences study for each psychological dependent variable was developed by means of Student's t-tests, given the features of the data distribution. The second goal, aimed at examining associations between the four psychological variables on the one hand and academic achievement on the other, was addressed by correlational studies, calculating Pearson's coefficients and employing grades as the quantitative indicator of academic achievement. The positive and significant results obtained led to the formulation of different predictive models of academic achievement, which had to be tested in terms of fit and predictive power. These models took the four psychological variables mentioned above as predictors, using regression equations, examining predictors individually, in groups of two, and together, analysing indirect effects as well, and adding the degree of academic advance and gender, which had shown their importance within the first goal's findings. The most relevant results were the following. First, gender showed no influence on any dependent variable. Second, only good achievers perceived high social support from teachers, and male students were prone to perceive less social support. Third, freshmen exhibited a pragmatic learning style, preferring unstructured environments, the use of examples, and simultaneous-visual processing in learning, whereas sophomores manifested an assimilative learning style, choosing sequential and analytic processing modes. Fourth, despite these preferences, freshmen have to deal with abstract contents and sophomores with practical learning situations, due to the study programs in force. Fifth, no differences in academic motivation were found between freshmen and sophomores; however, the latter employ a higher number of more efficient learning strategies. Sixth, freshmen low achievers lack intrinsic motivation. Seventh, model testing showed that social support, learning styles, and academic motivation influence learning strategies, which affect academic achievement in freshmen, particularly males; only learning styles influence achievement in sophomores of both genders, with direct effects. These findings lead to the conclusion that educational psychologists, education specialists, teachers, and universities must plan urgent and major changes, embodied in renewed and better study programs, syllabi, and classes, as well as tutoring and training systems. Such developments should be targeted at supporting and empowering students along their academic pathways, and therefore at upgrading learning quality, especially in the case of freshmen, male freshmen, and low achievers.
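As an illustration of the indirect-effect analysis mentioned above, the product-of-coefficients logic can be sketched in Python with synthetic data; the variable names are placeholders, not the study's actual measures:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
motivation = rng.normal(size=200)                      # predictor
strategies = 0.5 * motivation + rng.normal(size=200)   # mediator (path a)
grades = 0.4 * strategies + rng.normal(size=200)       # outcome (path b)

a = sm.OLS(strategies, sm.add_constant(motivation)).fit().params[1]
b = sm.OLS(grades, sm.add_constant(
        np.column_stack([strategies, motivation]))).fit().params[1]
print("indirect effect (a*b):", a * b)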

Keywords: academic achievement, academic motivation, coping, learning strategies, learning styles, perceived social support

Procedia PDF Downloads 100
176 Kinematic of Thrusts and Tectonic Vergence in the Paleogene Orogen of Eastern Iran, Sechangi Area

Authors: Shahriyar Keshtgar, Mahmoud Reza Heyhat, Sasan Bagheri, Ebrahim Gholami, Seyed Naser Raiisosadat

Abstract:

The eastern Iranian ranges form a Z-shaped sigmoidal outcrop with a generally N-S-trending strike on satellite images. They have long been known as the Sistan suture zone and have recently been identified as the product of an orogenic event introduced under either the Paleogene or the Sistan orogen name. The flysch sedimentary basin of eastern Iran was filled by a huge volume of fine-grained Eocene turbiditic sediments, smaller amounts of pelagic deposits, and Cretaceous ophiolitic slices, which are entirely remnants of older accretionary prisms that appeared in a fold-thrust belt developed above a subduction zone beneath the Lut/Afghan block, a portion of the Cimmerian superterrane. In these ranges, there are Triassic sedimentary and carbonate sequences (equivalent to the Nayband and Shotori Formations), along with scattered outcrops of Permian limestones (equivalent to the Jamal limestone) and greenschist-facies metamorphic rocks, probably belonging to the basement of the Lut block, which are in tectonic contact with younger rocks. Moreover, the younger Eocene detrital-volcanic rocks were also thrusted onto the Cretaceous or younger turbiditic deposits. The first-generation folds (parallel folds) and thrusts with slaty cleavage appeared parallel to the NE edge of the Lut block. Structural analysis shows that the dominant vergence of the thrusts is toward the southeast, so that the Permo-Triassic units of Lut have been thrusted onto the younger rocks, including older (probably Jurassic) granites. Additional structural studies show that the regional transport direction in this deformation event is from northwest to southeast, that is, from the outside to the inside of the orogen in the Sechangi area. Thrusts of the second deformation event were either formed directly during this event or are older thrusts that were reactivated and folded, so that two or more sets of slickenlines can often be recognized on the thrust planes. These younger thrusts are oriented nearly perpendicular to the edge of the Lut block and parallel to the axial surfaces of the northwest-trending second-generation large-scale folds (radial folds). Some of these younger thrusts follow the out-of-the-syncline thrust system. Both the axial planes of these folds and the associated penetrative shear cleavage, extending towards the northwest, dip to both the northeast and the southwest, parallel to the younger thrusts. Large-scale buckling under a layer-parallel stress field created this deformation event. Such consecutive, mutually perpendicular deformation events cannot be explained by the simple linear orogen models presented for eastern Iran so far and are more consistent with the oroclinal buckling model.

Keywords: thrust, tectonic vergence, oroclinal buckling, Sechangi, eastern Iranian ranges

Procedia PDF Downloads 55
175 Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization

Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon

Abstract:

The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, novel front-end electronics allowing for sampling in the voltage domain at four thresholds were developed. To take full advantage of these fast signals, a novel scheme for recovery of the signal waveform, based on ideas from Tikhonov regularization (TR) and compressive sensing methods, is presented. The prior distribution of the sparse representation is evaluated based on a linear transformation of a training set of signal waveforms, using Principal Component Analysis (PCA) decomposition. Besides the advantage of including additional information from the training signals, a further benefit of the TR approach is that the signal recovery problem has an optimal solution that can be determined explicitly. Moreover, from Bayesian theory, the properties of the regularized solution, especially its covariance matrix, may easily be derived. This step is crucial for introducing and proving the formula for calculating the signal recovery error. It has been proven that the average recovery error is approximately inversely proportional to the number of samples at voltage levels. The method is tested using signals registered by means of a single detection module of the J-PET detector, built from a 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed. It is shown that the PCA basis offers a high level of information compression and accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered signal waveforms, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit-position reconstruction. The experiment shows that the spatial resolution evaluated based on information from four voltage levels, without recovery of the signal waveform, is equal to 1.05 cm. After applying the information from the four voltage levels to recover the signal waveform, the spatial resolution improves to 0.94 cm. Moreover, this result is only slightly worse than the one evaluated using the original raw signal, for which the spatial resolution is 0.93 cm. This is very important information since limiting the number of threshold levels in the electronic devices to four leads to a significant reduction in the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where prior knowledge about the signals of interest may be utilized.
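The closed-form Tikhonov solution referred to above can be sketched in a few lines of Python; the dimensions (eight samples, a handful of PCA components) follow the abstract, but the matrices and regularization weight below are synthetic placeholders:

import numpy as np

def tikhonov_recover(A, b, lam):
    """Closed-form minimizer of ||A x - b||^2 + lam * ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 5))                 # maps 5 PCA coefficients to 8 samples
x_true = rng.normal(size=5)                 # 'true' sparse representation
b = A @ x_true + 0.01 * rng.normal(size=8)  # noisy threshold samples
x_hat = tikhonov_recover(A, b, lam=0.1)     # recovered coefficients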

Keywords: plastic scintillators, positron emission tomography, statistical analysis, Tikhonov regularization

Procedia PDF Downloads 421
174 Influence of Surface Fault Rupture on Dynamic Behavior of Cantilever Retaining Wall: A Numerical Study

Authors: Partha Sarathi Nayek, Abhiparna Dasgupta, Maheshreddy Gade

Abstract:

Earth-retaining structures play a vital role in stabilizing unstable road cuts and slopes in mountainous regions. Retaining structures located in seismically active regions like the Himalayas may experience moderate to severe earthquakes. An earthquake produces two kinds of ground motion: permanent quasi-static displacement (fault rupture) on the fault rupture plane, and transient vibration traveling over long distances. There has been extensive research to understand the dynamic behavior of retaining structures subjected to transient ground motions. However, understanding of the effect of the fault rupture phenomenon on retaining structures is limited. The presence of shallow crustal active faults and natural slopes in the Himalayan region further highlights the need to study the response of retaining structures subjected to fault rupture. In this paper, an attempt has been made to understand the dynamic response of a cantilever retaining wall subjected to surface fault rupture. For this purpose, a 2D finite element model consisting of a retaining wall, backfill, and foundation has been developed using Abaqus 6.14 software. The backfill and foundation materials are modeled as per the Mohr-Coulomb failure criterion, and the wall is modeled as linear elastic. In the present study, the interaction between backfill and wall is modeled as ‘surface-surface contact.’ The entire simulation process is divided into three steps: the initial step, the gravity-load step, and the fault-rupture step. The interaction property between wall and soil and fixed boundary conditions on all boundary elements are applied in the initial step. In the next step, the gravity load is applied, and the boundary elements are allowed to move in the vertical direction to incorporate the settlement of soil due to the gravity load. In the final step, surface fault rupture is applied to the wall-backfill system. For this purpose, the foundation is divided into two blocks, namely the hanging-wall block and the footwall block. A finite fault rupture displacement is applied to the hanging-wall part, while the footwall bottom boundary is kept fixed. Initially, a numerical analysis is performed considering a reverse fault mechanism with a dip angle of 45°. The simulated results are presented in terms of contour maps of the permanent displacements of the wall-backfill system. These maps highlight that surface fault rupture can induce permanent displacement in both horizontal and vertical directions, which can significantly influence the dynamic behavior of the wall-backfill system. Further, the influence of the fault mechanism, dip angle, and surface fault rupture position is also investigated in this work.

Keywords: surface fault rupture, retaining wall, dynamic response, finite element analysis

Procedia PDF Downloads 86
173 Electromagnetic Modeling of a MESFET Transistor Using the Moments Method Combined with Generalised Equivalent Circuit Method

Authors: Takoua Soltani, Imen Soltani, Taoufik Aguili

Abstract:

The demands of communication and radar systems give rise to new developments in the domain of active integrated antennas (AIA) and arrays. The main advantages of AIA arrays are simplicity of fabrication, low manufacturing cost, and the combination of free-space power combining with beam scanning without a phase shifter. Modeling an active integrated antenna requires coupling the electromagnetic model with the transport model, a coupling that becomes significant at high frequencies. Global modeling of active circuits is important for simulating EM coupling, the interaction between active devices and EM waves, and the effects of EM radiation on active and passive components. The present work focuses on the modeling of the active element, a MESFET transistor immersed in a rectangular waveguide. The proposed EM analysis is based on the Method of Moments combined with the Generalised Equivalent Circuit method (MOM-GEC). The Method of Moments is among the most common and powerful numerical techniques for solving electromagnetic problems; within this class of techniques, MoM is the dominant approach for solving the Maxwell and transport integral equations of an active integrated antenna. In this context, the equivalent circuit is introduced to develop an integral-method formulation based on the transposition of field problems into a generalised equivalent circuit that is simpler to treat. The Generalised Equivalent Circuit method (MGEC) was suggested in order to represent integral equations by circuits that describe the unknown electromagnetic boundary conditions. The equivalent circuit presents a true electric image of the studied structure, describing the discontinuity and its environment. The aim of our method is to investigate antenna parameters such as the input impedance, the current density distribution, and the electric field distribution. In this work, we propose a global EM model of a GaAs MESFET transistor using an integral method. We begin by describing the modeling structure, which allows us to define an equivalent EM scheme translating the electromagnetic equations considered. Secondly, the projection of these equations onto common-type test functions leads to a linear matrix equation whose unknowns represent the amplitudes of the current density. Solving this equation provides the input impedance, the distribution of the current density, and the electric field distribution. From the electromagnetic calculations, we were able to show the convergence of the input impedance for different numbers of test functions as a function of the number of guide modes. This paper presents a pilot study mapping out the variation of the current evaluated by the MOM-GEC method. The essential improvement of our method is the reduction of computing time and memory requirements needed to provide a sufficient global model of the MESFET transistor.
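Generically, the MoM step described above reduces to assembling and solving a dense linear system [Z][I] = [V]; the Python sketch below illustrates this with a stand-in kernel, not the actual waveguide Green's function of the MOM-GEC formulation:

import numpy as np

N = 32                                             # number of test functions
x = np.linspace(0.0, 1.0, N)
Z = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))  # assumed kernel matrix
V = np.ones(N)                                     # assumed excitation vector
I = np.linalg.solve(Z, V)                          # amplitudes of the current density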

Keywords: active integrated antenna, current density, input impedance, MESFET transistor, MOM-GEC method

Procedia PDF Downloads 176
172 Viscoelastic Behavior of Human Bone Tissue under Nanoindentation Tests

Authors: Anna Makuch, Grzegorz Kokot, Konstanty Skalski, Jakub Banczorowski

Abstract:

Cancellous bone is a porous composite with a hierarchical structure and anisotropic properties. Biological tissue is considered a viscoelastic material, but many studies based on the nanoindentation method have focused on its elasticity and microhardness. However, the response of many organic materials depends not only on the load magnitude but also on its duration and time course. The Depth Sensing Indentation (DSI) technique has been used to examine creep in polymers, metals, and composites. In indentation tests on biological samples, the mechanical properties are most frequently determined for animal tissues (ox, monkey, pig, rat, mouse, bovine); reports on studies of bone viscoelastic properties at the microstructural level are rare. Various rheological models have been used to describe the viscoelastic behaviour of bone identified in the indentation process (e.g., the Burgers model, a linear model, the two-dashpot Kelvin model, the Maxwell-Voigt model). The goal of the study was to determine the influence of the creep effect on the mechanical properties of human cancellous bone in indentation tests. A further aim was the assessment of the material properties of bone structures with regard to the energy aspects of the penetrator load-depth curve obtained in the loading/unloading cycle. It was examined how different holding times affect the results within trabecular bone. As a result, the indentation creep (CIT), hardness (HM, HIT, HV), and elasticity are obtained. Human trabecular bone samples (n = 21; mean age 63 ± 15 yrs) from femoral heads replaced during hip alloplasty were removed from alcohol and drained 1 h before the experiment. The indentation process was conducted using a CSM Microhardness Tester equipped with a Vickers indenter. Each sample was indented 35 times (7 times for each of 5 different hold times: t1 = 0.1 s, t2 = 1 s, t3 = 10 s, t4 = 100 s, and t5 = 1000 s). The indenter was advanced at a rate of 10 mN/s to 500 mN. The Oliver-Pharr method was used in the calculation process. The increase of hold time is associated with a decrease of the hardness parameters (HIT(t1) = 418 ± 34 MPa, HIT(t2) = 390 ± 50 MPa, HIT(t3) = 313 ± 54 MPa, HIT(t4) = 305 ± 54 MPa, HIT(t5) = 276 ± 90 MPa) and elasticity (EIT(t1) = 7.7 ± 1.2 GPa, EIT(t2) = 8.0 ± 1.5 GPa, EIT(t3) = 7.0 ± 0.9 GPa, EIT(t4) = 7.2 ± 0.9 GPa, EIT(t5) = 6.2 ± 1.8 GPa), as well as with an increase of the elastic (Welastic(t1) = 4.11×10⁻⁷ ± 4.2×10⁻⁸ N·m, Welastic(t2) = 4.12×10⁻⁷ ± 6.4×10⁻⁸ N·m, Welastic(t3) = 4.71×10⁻⁷ ± 6.0×10⁻⁹ N·m, Welastic(t4) = 4.33×10⁻⁷ ± 5.5×10⁻⁹ N·m, Welastic(t5) = 5.11×10⁻⁷ ± 7.4×10⁻⁸ N·m) and inelastic (Winelastic(t1) = 1.05×10⁻⁶ ± 1.2×10⁻⁷ N·m, Winelastic(t2) = 1.07×10⁻⁶ ± 7.6×10⁻⁸ N·m, Winelastic(t3) = 1.26×10⁻⁶ ± 1.9×10⁻⁷ N·m, Winelastic(t4) = 1.56×10⁻⁶ ± 1.9×10⁻⁷ N·m, Winelastic(t5) = 1.67×10⁻⁶ ± 2.6×10⁻⁷ N·m) reaction of the material. The indentation creep increased logarithmically (R² = 0.901) with increasing hold time: CIT(t1) = 0.08 ± 0.01%, CIT(t2) = 0.7 ± 0.1%, CIT(t3) = 3.7 ± 0.3%, CIT(t4) = 12.2 ± 1.5%, CIT(t5) = 13.5 ± 3.8%. A pronounced impact of the creep effect on the mechanical properties of human cancellous bone was observed in the experimental studies. While the elastic-inelastic description, and thus the Oliver-Pharr method of data analysis, may apply in a few limited cases, most biological tissues do not exhibit elastic-inelastic indentation responses. Viscoelastic properties of tissues may play a significant role in remodelling. This aspect is still under analysis and numerical simulation.
Acknowledgements: The presented results are part of a research project funded by the National Science Centre (NCN), Poland, no. 2014/15/B/ST7/03244.
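For context, the relations assumed in such an analysis (the standard Oliver-Pharr and ISO 14577 definitions, which the abstract cites but does not reproduce) are:

\[
S = \left.\frac{\mathrm{d}P}{\mathrm{d}h}\right|_{h=h_{\max}},\qquad
E_r = \frac{\sqrt{\pi}}{2\beta}\,\frac{S}{\sqrt{A_c}},\qquad
H_{\mathrm{IT}} = \frac{P_{\max}}{A_c},\qquad
C_{\mathrm{IT}} = \frac{h_2 - h_1}{h_1}\times 100\%,
\]

where S is the contact stiffness at maximum load, A_c the projected contact area, β an indenter geometry factor, and h_1, h_2 the indentation depths at the beginning and end of the constant-load hold period.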

Keywords: bone, creep, indentation, mechanical properties

Procedia PDF Downloads 147
171 Ruta graveolens Fingerprints Obtained with Reversed-Phase Gradient Thin-Layer Chromatography with Controlled Solvent Velocity

Authors: Adrian Szczyrba, Aneta Halka-Grysinska, Tomasz Baj, Tadeusz H. Dzido

Abstract:

Since prehistory, plants have constituted an essential source of biologically active substances in folk medicine. One example of such a medicinal plant is Ruta graveolens L. For a long time, the Ruta g. herb has been famous for its spasmolytic, diuretic, and anti-inflammatory therapeutic effects. The wide spectrum of secondary metabolites produced by Ruta g. includes flavonoids (e.g., rutin, quercetin), coumarins (e.g., bergapten, umbelliferone), phenolic acids (e.g., rosmarinic acid, chlorogenic acid), and limonoids. Unfortunately, the presence of the produced substances is highly dependent on environmental factors like temperature, humidity, or soil acidity; therefore, standardization is necessary. There have been many attempts to characterize various phytochemical groups (e.g., coumarins) of Ruta graveolens using normal-phase thin-layer chromatography (TLC). However, due to the so-called general elution problem, some components usually remained unseparated near the start or finish line. Ruta graveolens is therefore a very good model plant. Methanol and petroleum ether extracts from its aerial parts were used to demonstrate the capabilities of a new device for gradient thin-layer chromatogram development. The development of gradient thin-layer chromatograms in reversed-phase systems in conventional horizontal chambers can be disrupted by problems associated with an excessive flux of the mobile phase onto the surface of the adsorbent layer. This phenomenon is most likely caused by significant differences between the surface tensions of the subsequent fractions of the mobile phase. An excessive flux of the mobile phase onto the surface of the adsorbent layer distorts its flow, producing unreliable and unrepeatable results and causing blurring and deformation of the substance zones. In the prototype device, the mobile phase solution is delivered onto the surface of the adsorbent layer at a controlled velocity, by a moving pipette driven by a 3D-positioning machine. The delivery rate of the solvent to the adsorbent layer is equal to or lower than that of conventional development; therefore, chromatograms can be developed at the optimal linear mobile phase velocity. Furthermore, under such conditions there is no excess of eluent solution on the surface of the adsorbent layer, so a higher performance of the chromatographic system can be obtained. Directly feeding the adsorbent layer with eluent also enables convenient continuous gradient elution, practically without the so-called gradient delay. In this study, unique fingerprints of methanol and petroleum ether extracts of Ruta graveolens aerial parts were obtained with stepwise-gradient reversed-phase thin-layer chromatography. Fingerprints obtained under different chromatographic conditions will be compared, and the advantages and disadvantages of the proposed approach to chromatogram development with controlled solvent velocity will be discussed.

Keywords: fingerprints, gradient thin-layer chromatography, reversed-phase TLC, Ruta graveolens

Procedia PDF Downloads 264
170 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit

Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic

Abstract:

Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, have established their wide use in today's low-background γ-spectrometry. One of the advantages of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required; thus, with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra where they would otherwise superimpose within a single-energy peak and, as such, could potentially compromise the analysis and produce wrongly assessed results. Naturally, this feature is of great importance when identifying radionuclides and their activity concentrations, where high precision is a necessity. In measurements of this nature, in order to reproduce good and trustworthy results, one has to have initially performed an adequate full-energy peak (FEP) efficiency calibration of the equipment used. However, the experimental determination of the response, i.e., the efficiency curves for a given detector-sample configuration and geometry, is not always easy and requires a certain set of reference calibration sources in order to account for and cover the broader energy ranges of interest. With the goal of overcoming these difficulties, many researchers have turned towards the application of software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), which has proven time and time again to be a very powerful tool. In the process of creating a reliable model, one has to have well-established and described specifications of the detector. Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially with older equipment. Deterioration of these parameters consequently decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if not properly taken into account. In this study, the optimisation of two HPGe detectors through the implementation of the Geant4 toolkit developed by CERN is described, with the goal of further improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead-layer thicknesses, inner crystal void dimensions, etc.). The detectors on which the optimisation procedures were carried out were a standard traditional coaxial extended-range detector (XtRa HPGe, CANBERRA) and a broad-energy-range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimentally obtained data from measurements of a set of point-like radioactive sources. The acquired results for both detectors displayed good agreement with the experimental data, falling within an average statistical uncertainty of ∼4.6% for the XtRa and ∼1.8% for the BEGe detector within the energy ranges of 59.4-1836.1 keV and 59.4-1212.9 keV, respectively.
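For reference, the full-energy peak efficiency being calibrated here is conventionally defined as follows (standard definition, assumed rather than quoted from the paper):

\[
\varepsilon_{\mathrm{FEP}}(E) \;=\; \frac{N_{\mathrm{peak}}(E)}{N_{\mathrm{emitted}}(E)}
\;=\; \frac{N_{\mathrm{peak}}(E)}{A\, t\, p_{\gamma}(E)},
\]

where N_peak is the net peak count, A the source activity, t the live time, and p_γ the gamma emission probability; in a Geant4 model, the same quantity is estimated by counting simulated events that deposit the full energy E in the crystal.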

Keywords: HPGe detector, γ spectrometry, efficiency, Geant4 simulation, Monte Carlo method

Procedia PDF Downloads 93
169 Accuracy of Fitbit Charge 4 for Measuring Heart Rate in Parkinson’s Patients During Intense Exercise

Authors: Giulia Colonna, Jocelyn Hoye, Bart de Laat, Gelsina Stanley, Jose Key, Alaaddin Ibrahimy, Sule Tinaz, Evan D. Morris

Abstract:

Parkinson’s disease (PD) is the second most common neurodegenerative disease and affects approximately 1% of the world’s population. Increasing evidence suggests that aerobic physical exercise can be beneficial in mitigating both motor and non-motor symptoms of the disease. In a recent pilot study of the role of exercise in PD, we sought to confirm exercise intensity by monitoring heart rate (HR). For this purpose, we asked participants to wear a chest-strap heart rate monitor (Polar Electro Oy, Kempele). The device sometimes proved uncomfortable. Looking forward to larger clinical trials, it would be convenient to employ a more comfortable and user-friendly device. The Fitbit Charge 4 (Fitbit Inc.), a wrist-worn heart rate monitor, is a potentially comfortable, user-friendly alternative. The Polar H10 has been used in large trials, and for our purposes we treated it as the gold standard for beat-to-beat period (R-R interval) assessment. Previous literature has shown that the Fitbit Charge 4 has accuracy comparable to the Polar H10 in healthy subjects. It has yet to be determined whether the Fitbit is as accurate as the Polar H10 in subjects with PD, or in clinical populations generally. Goal: To compare the Fitbit Charge 4 to the Polar H10 for monitoring HR in PD subjects engaging in an intensive exercise program. Methods: A total of 596 exercise sessions from 11 subjects (6 males) were recorded simultaneously by both devices. Subjects with early-stage PD (Hoehn & Yahr ≤ 2) were enrolled in a 6-month exercise training program designed for PD patients. Subjects participated in three one-hour exercise sessions per week and wore both the Fitbit and the Polar H10 during each session. Sessions included rest, warm-up, intensive exercise, and cool-down periods. We calculated the bias in the Fitbit HR during rest (5 min) and intensive exercise (20 min) by comparing the mean HR during each period to the respective mean measured by the Polar (HR_Fitbit − HR_Polar). We also measured the sensitivity and specificity of the Fitbit for detecting HRs that exceed the threshold for intensive exercise, defined as 70% of an individual’s theoretical maximum HR. Different types of correlation between the two devices were investigated. Results: The mean bias was 1.68 bpm at rest and 6.29 bpm during high-intensity exercise, with an overestimation by the Fitbit in both conditions. The mean bias of the Fitbit across both rest and intensive exercise periods was 3.98 bpm. The sensitivity of the device in identifying high-intensity exercise sessions was 97.14%. The correlation between the two devices was non-linear, suggesting a tendency of the Fitbit to saturate at high HR values. Conclusion: The performance of the Fitbit Charge 4 is comparable to that of the Polar H10 for assessing exercise intensity in a cohort of PD subjects. The device should be considered a reasonable replacement for the more cumbersome chest-strap technology in future similar studies of clinical populations.
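The agreement metrics reported above can be computed as in the following Python sketch; the data are synthetic, and the 70%-of-maximum threshold uses the common 220-minus-age estimate, which is an assumption rather than the study's stated formula:

import numpy as np

rng = np.random.default_rng(2)
hr_polar = rng.normal(120, 15, size=596)           # per-session mean HR (gold standard)
hr_fitbit = hr_polar + rng.normal(4, 3, size=596)  # Fitbit readings with bias

threshold = 0.70 * (220 - 65)                      # assumed intensity threshold (bpm)
bias = np.mean(hr_fitbit - hr_polar)               # mean HR bias (bpm)
intense = hr_polar > threshold                     # ground truth from Polar
sensitivity = np.mean(hr_fitbit[intense] > threshold)
print(f"bias = {bias:.2f} bpm, sensitivity = {sensitivity:.2%}")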

Keywords: Fitbit, heart rate measurements, Parkinson’s disease, wrist-wearable devices

Procedia PDF Downloads 73