Search results for: sensitivity study.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13200


12720 Intercultural Mediation Training and the Training Process of Common Sense Leaders by the Leadership of Universities Communication and Artistic Campaigns

Authors: Bilgehan Gültekin, Tuba Gültekin

Abstract:

It is quite essential to form dialogue mechanisms and dialogue channels to solve intercultural communication issues. Therefore, every country should develop an intercultural education project which aims to resolve international communication issues. For proper mediation training, the first step is to reach an agreement on the actors to run the project. The strongest mediation mechanisms in the world should be analyzed and initiated within the educational policies. A communication-based mediation model should be developed for international mediation training. Mediators can use their convincing communication skills as a part of this model. First, the fundamental stages of the mediation training should be specified within the scope of the model. Another important topic at this point is common sense and peace leaders acting as ombudsmen in this process. Especially for solving some social issues and conflicts, common sense leaders acting as ombudsmen would lead to effective communication. In mediation training that is run by universities and non-governmental organizations, another phase is to focus on conducting the meetings. In intercultural mediation training, one of the most critical topics is to manage the meeting traffic and to perform shuttle diplomacy. Meeting traffic is where the mediator organizes meetings with the parties with initiative powers, in order to contribute to the solution of the issue, and schedules these meetings. In this paper, titled “Intercultural Mediation Training and the Training Process of Common Sense Leaders by the Leadership of Universities Communication and Artistic Campaigns”, communication models and strategies on this topic are constructed, and intercultural art activities and perspectives are presented.

Keywords: Intercultural communication, mediation education, common sense leaders, artistic sensitivity

12719 Quality of Life Assessment across the Cancer Continuum: Understanding the Role of an Exercise Rehabilitation Programme

Authors: Bernat-Carles Serdà Ferrer, Arantza Del Valle Gómez

Abstract:

The Quality of Life (QoL) paradigm is multidimensional, dynamic and modular, and its definition differs across the cancer continuum. The challenge in the interpretation of QoL data in clinical research is that QoL is influenced by psychological phenomena such as adaptation to illness. This research aims to obtain a valid and sensitive assessment of QoL change over the disease continuum, and to evaluate a rehabilitation programme aimed at reversing the observed decrease in QoL when patients return to daily living activities. The sample comprised 66 men. Patients were first assessed to establish a baseline (P1-diagnosis). This was followed by a post-test (P2-discharge) and a then-test measurement (P3-retrospective evaluation), and after returning home patients were randomized into experimental and control groups. The experimental group attended a rehabilitation programme over 24 weeks (P4). Results show that from baseline to post-test, QoL decreased significantly. The recalibration then-test confirmed a low QoL in all periods evaluated. Significant differences between the experimental and control groups demonstrate the positive effect of the Exercise Rehabilitation Programme (ERP) on QoL. Understanding the real dynamics of QoL over time would help to adapt rehabilitation programmes by improving sensitivity and efficacy, and would provide professionals with a more accurate perception of the impact of treatment and side effects on patients’ QoL. Our results underline the importance of changing the approach adopted by health professionals towards one of watchful waiting on patients’ QoL until their complete recovery in daily life.

Keywords: Prostate cancer, quality of life, rehabilitation programme, response shift.

12718 Long-term Monitor of Seawater by using TiO2:Ru Sensing Electrode for Hard Clam Cultivation

Authors: Jung-Chuan Chou, Cheng-Wei Chen

Abstract:

The hard clam (Meretrix lusoria) cultivation industry has developed vigorously in Taiwan in recent years, and seawater quality determines the cultivation environment. Variations in pH immediately affect the survival rate of Meretrix lusoria. In order to monitor seawater quality, a solid-state sensing electrode of ruthenium-doped titanium dioxide (TiO2:Ru) was developed to measure hydrogen ion concentration in different cultivation solutions. Because the TiO2:Ru sensing electrode has high chemical stability and superior sensing characteristics, it is applied as a pH sensor. Response voltages of the TiO2:Ru sensing electrode are read out by an instrumentation amplifier in different sample solutions. The mean sensitivity and linearity of the TiO2:Ru sensing electrode are 55.20 mV/pH and 0.999 from pH 1 to pH 13, respectively. We expect that the TiO2:Ru sensing electrode can be applied to real environment measurements; therefore, we collected two sample solutions from different Meretrix lusoria cultivation ponds in Yunlin, Taiwan. The two sample solutions were both measured for 200 seconds after calibration with standard pH buffer solutions (pH 7, pH 8 and pH 9). The mean response voltages of sample 1 and sample 2 are -178.758 mV (standard deviation = 0.427 mV) and -180.206 mV (standard deviation = 0.399 mV), respectively. The response voltages of the two sample solutions correspond to pH values between pH 8 and pH 9, which lie in the weakly alkaline range suitable for Meretrix lusoria growth. For long-term monitoring, the drifts of the cultivation solutions (sample 1 and sample 2) are 1.16 mV/hour and 1.03 mV/hour, respectively.
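
To make the calibration arithmetic concrete, the short sketch below converts a measured electrode voltage into an estimated pH using the reported 55.20 mV/pH mean sensitivity; the pH 7 reference voltage and the sign convention are assumptions for illustration only, not values from the study.

```python
SENSITIVITY_MV_PER_PH = 55.20   # mV/pH, mean sensitivity reported in the abstract
V_REF_AT_PH7_MV = -100.0        # assumed (hypothetical) calibration voltage at pH 7

def voltage_to_ph(v_mv: float) -> float:
    # Linear calibration around the pH 7 buffer point; the sign convention
    # (voltage becoming more negative as pH rises) is an assumption here.
    return 7.0 + (V_REF_AT_PH7_MV - v_mv) / SENSITIVITY_MV_PER_PH

for name, v_mv in [("sample 1", -178.758), ("sample 2", -180.206)]:
    print(f"{name}: estimated pH = {voltage_to_ph(v_mv):.2f}")
```

With these assumed calibration values, both samples land between pH 8 and pH 9, consistent with the weakly alkaline range stated in the abstract.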

Keywords: Co-sputtering system, Hard clam (Meretrix lusoria), Ruthenium-doped titanium dioxide, Solid-state sensing electrode.

12717 Performance of the Aptima® HIV-1 Quant Dx Assay on the Panther System

Authors: Siobhan O’Shea, Sangeetha Vijaysri Nair, Hee Cheol Kim, Charles Thomas Nugent, Cheuk Yan William Tong, Sam Douthwaite, Andrew Worlock

Abstract:

The Aptima® HIV-1 Quant Dx Assay is a fully automated assay on the Panther system. It is based on Transcription-Mediated Amplification and real-time detection technologies. This assay is intended for monitoring HIV-1 viral load in plasma specimens and for the detection of HIV-1 in plasma and serum specimens. Nine hundred and seventy-nine specimens selected at random from routine testing at St Thomas’ Hospital, London were anonymised and used to compare the performance of the Aptima HIV-1 Quant Dx assay and the Roche COBAS® AmpliPrep/COBAS® TaqMan® HIV-1 Test, v2.0. Two hundred and thirty-four specimens gave quantitative HIV-1 viral load results in both assays. The quantitative results reported by the Aptima assay were comparable to those reported by the Roche COBAS AmpliPrep/COBAS TaqMan HIV-1 Test, v2.0, with a linear regression slope of 1.04 and an intercept of -0.097. The Aptima assay detected HIV-1 in more samples than the COBAS assay. This was not due to a lack of specificity of the Aptima assay, because this assay gave 99.83% specificity on testing plasma specimens from 600 HIV-1 negative individuals. To understand the reason for this higher detection rate, low-level panels made from the HIV-1 3rd international standard (NIBSC10/152) and clinical samples of various subtypes were tested side by side in both assays. The Aptima assay was more sensitive than the COBAS assay. The good sensitivity, specificity and agreement with other commercial assays make the HIV-1 Quant Dx Assay appropriate for both viral load monitoring and detection of HIV-1 infections.
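
As a rough illustration of this kind of method comparison, the sketch below fits an ordinary least-squares line to paired log10 viral-load results; the data are synthetic and only mimic the reported slope and intercept, they are not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical paired results on the log10 copies/mL scale (234 specimens, as in the study)
log_vl_cobas = rng.uniform(1.5, 7.0, 234)
log_vl_aptima = 1.04 * log_vl_cobas - 0.097 + rng.normal(0.0, 0.15, 234)

slope, intercept = np.polyfit(log_vl_cobas, log_vl_aptima, 1)   # OLS regression
mean_bias = np.mean(log_vl_aptima - log_vl_cobas)               # Bland-Altman-style mean difference
print(f"slope = {slope:.2f}, intercept = {intercept:.3f}, mean bias = {mean_bias:.3f} log10")
```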

Keywords: HIV viral load, Aptima, Roche, Panther system.

12716 Simulation and Optimization of Mechanisms made of Micro-molded Components

Authors: Albert Albers, Pablo Enrique Leslabay

Abstract:

The Institute of Product Development is dealing with the development, design and dimensioning of micro components and systems as a member of the Collaborative Research Centre 499 “Design, Production and Quality Assurance of Molded micro components made of Metallic and Ceramic Materials". Because of technological restrictions in the miniaturization of conventional manufacturing techniques, shape and material deviations cannot be scaled down in the same proportion as the micro parts, rendering components with relatively wide tolerance fields. Systems that include such components should be designed with this particularity in mind, often requiring large clearances. In the end, the output of such systems is variable and prone to dynamic instability. To save production time and resources, every study of these effects should happen early in the product development process and be based on computer simulation to avoid costly prototypes. A suitable method is proposed here and applied, as an example, to a micro technology demonstrator developed by the CRC 499. It consists of a one-stage planetary gear train in a sun-planet-ring configuration, with input through the sun gear and output through the carrier. The simulation procedure relies on ordinary Multi Body Simulation methods and subsequently adds other techniques to further investigate details of the system's behavior and to predict its response. The selection of the relevant parameters and output functions followed the engineering standards for regular-sized gear trains. The first step is to quantify the variability and to reveal the most critical points of the system, performed through a whole-mechanism Sensitivity Analysis. Due to the lack of previous knowledge about the system's behavior, different DOE methods involving small and large numbers of experiments were selected to perform the SA. In this particular case the parameter space can be divided into two well-defined groups, one of them containing the gears' profile information and the other the components' spatial location. This has been exploited to explore the different DOE techniques more promptly. A reduced set of parameters is derived for further investigation and to feed the final optimization process, either as optimization parameters or as an external perturbation collective. The 10 most relevant perturbation factors and 4 to 6 prospective variable parameters are considered in a new, simplified model. All of the parameters are affected by the mentioned production variability. The objective functions of interest are based on variability measures of scalar outputs, so the problem becomes an optimization under robustness and reliability constraints. The study shows an initial step on the development path of a method to design and optimize complex micro mechanisms composed of widely toleranced elements, accounting for the robustness and reliability of the systems' output.

Keywords: Micro molded components, Optimization, Robustness and Reliability, Simulation

12715 Aircraft Selection Using Multiple Criteria Decision Making Analysis Method with Different Data Normalization Techniques

Authors: C. Ardil

Abstract:

This paper presents an original application of multiple criteria decision making analysis theory to the evaluation of the aircraft selection problem. The selection of an optimal, efficient and reliable fleet, network and operations planning policy is one of the most important factors in the aircraft selection problem. Given that decision making in aircraft selection involves the consideration of a number of conflicting criteria and possible solutions, such a selection can be considered as a multiple criteria decision making analysis problem. This study presents a new integrated approach to decision making by considering the multiple criteria utility theory and the maximal regret minimization theory methods as well as aircraft technical, economic, and environmental aspects. The multiple criteria decision making analysis method uses different normalization techniques to allow criteria to be aggregated with the qualitative and quantitative data of the decision problem. Therefore, selecting a suitable normalization technique for the model is also a challenge in providing data aggregation for the aircraft selection problem. To compare the impact of different normalization techniques on the decision problem, the vector, linear (sum), linear (max), and linear (max-min) data normalization techniques were identified to evaluate the aircraft selection problem. As a logical implication of the proposed approach, it enhances the decision making process through enabling the decision maker to: (i) use higher level knowledge regarding the selection of criteria weights and the proposed technique, (ii) estimate the ranking of an alternative under different data normalization techniques and integrated criteria weights after a posteriori analysis of the final rankings of alternatives. A set of commercial passenger aircraft were considered in order to illustrate the proposed approach. The obtained results of the proposed approach were compared using Spearman's rho tests. An analysis of the final rank stability with respect to changes in criteria weights was also performed so as to assess the sensitivity of the alternative rankings obtained by the application of different data normalization techniques and the proposed approach.
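
A minimal sketch of the four normalization schemes named above is given below, applied column-wise to an assumed decision matrix of benefit-type criteria and aggregated with a simple weighted sum; the alternatives, criterion values and weights are hypothetical, and cost-type criteria (which would require inversion) are not shown.

```python
import numpy as np

# Hypothetical aircraft alternatives (rows) vs. benefit-type criteria (columns)
X = np.array([[450.0, 78.0, 12.5],
              [520.0, 72.0, 14.0],
              [480.0, 81.0, 11.8]])
weights = np.array([0.5, 0.3, 0.2])                       # assumed criteria weights

vector_norm   = X / np.linalg.norm(X, axis=0)             # vector normalization
linear_sum    = X / X.sum(axis=0)                         # linear (sum)
linear_max    = X / X.max(axis=0)                         # linear (max)
linear_maxmin = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))  # linear (max-min)

for name, N in [("vector", vector_norm), ("linear (sum)", linear_sum),
                ("linear (max)", linear_max), ("linear (max-min)", linear_maxmin)]:
    scores = (N * weights).sum(axis=1)                    # simple weighted-sum aggregation
    print(f"{name:16s} ranking: {np.argsort(-scores) + 1}")
```

The point of the example is only that the ranking can change with the normalization technique, which is precisely the sensitivity the paper investigates.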

Keywords: Normalization Techniques, Aircraft Selection, Multiple Criteria Decision Making, Multiple Criteria Decision Making Analysis, MCDMA

12714 Battery Energy Storage System Economic Benefits Assessment on a Network Frequency Control

Authors: Kréhi Serge Agbli, Samuel Portebos, Michaël Salomon

Abstract:

A methodology is presented for evaluating the economic benefit of the provision of a primary frequency control unit using a Battery Energy Storage System (BESS). In this methodology, two control types (basic and hysteresis) are implemented, and the corresponding minimum energy storage system power that maintains the frequency drop within a given threshold under a given contingency is identified and compared using DigSilent’s PowerFactory software. Following this step, the corresponding energy storage capacity (in MWh) is calculated. As PowerFactory is dedicated to dynamic simulation for transient analysis, a first-order model of the IEEE 9-bus grid used for the analysis under PowerFactory is characterized and implemented in MATLAB-Simulink. Primary frequency control is simulated using the two control types over one month of grid frequency deviation data on this Simulink model. This simulation yields the energy throughput of both the basic and the hysteresis BESS. It emerges that the 15-minute operation band of the battery capacity allocated to frequency control is sufficient under the considered disturbances. A sensitivity analysis on the width of the control deadband is then performed for the two control types. The deadband width variation leads to an identical sizing, with the hysteresis control showing better frequency control at the cost of a higher delivered throughput compared to the basic control. An economic analysis comparing the cost of the sized BESS to the potential revenues is then performed.
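
The sketch below illustrates, under assumed parameters, the difference between a basic deadband droop response and a hysteresis variant, and how their delivered throughput can be compared over a frequency-deviation trace. It is not the paper's PowerFactory/Simulink model; the deadband, droop, hysteresis width and synthetic trace are all assumptions.

```python
import numpy as np

DEADBAND = 0.02        # Hz, deadband half-width (assumed)
HYST = 0.01            # Hz, extra hysteresis width before releasing the control (assumed)
P_MAX = 1.0            # p.u., BESS power rating
DROOP = P_MAX / 0.2    # p.u./Hz: full power at a 0.2 Hz deviation (assumed)

def basic_control(df):
    """Droop response outside the deadband, zero inside."""
    if abs(df) <= DEADBAND:
        return 0.0
    return float(np.clip(-DROOP * (df - np.sign(df) * DEADBAND), -P_MAX, P_MAX))

def hysteresis_control(df, active):
    """Engage above DEADBAND; keep responding until |df| falls below DEADBAND - HYST."""
    if not active and abs(df) > DEADBAND:
        active = True
    elif active and abs(df) < DEADBAND - HYST:
        active = False
    p = float(np.clip(-DROOP * df, -P_MAX, P_MAX)) if active else 0.0
    return p, active

# Throughput comparison on a synthetic 1 s-sampled frequency-deviation trace (one hour)
rng = np.random.default_rng(1)
df_trace = np.cumsum(rng.normal(0.0, 0.002, 3600))   # hypothetical random-walk deviation, Hz
e_basic = e_hyst = 0.0
active = False
for df in df_trace:
    e_basic += abs(basic_control(df)) / 3600.0       # p.u.·h
    p, active = hysteresis_control(df, active)
    e_hyst += abs(p) / 3600.0
print(f"throughput: basic = {e_basic:.3f} p.u.·h, hysteresis = {e_hyst:.3f} p.u.·h")
```

With these assumptions the hysteresis variant delivers more throughput for the same sizing, mirroring the trade-off reported in the abstract.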

Keywords: Battery Energy Storage System, electrical network frequency stability, frequency control unit, PowerFactory.

12713 Influence of Single and Multiple Skin-Core Debonding on Free Vibration Characteristics of Innovative GFRP Sandwich Panels

Authors: Indunil Jayatilake, Warna Karunasena, Weena Lokuge

Abstract:

An Australian manufacturer has fabricated an innovative GFRP sandwich panel made from E-glass fiber skin and a modified phenolic core for structural applications. Debonding, which refers to separation of the skin from the core material in composite sandwiches, is one of the most common types of damage in composites. The presence of debonding is of great concern because it not only severely affects the stiffness but also modifies the dynamic behaviour of the structure. Generally, the majority of research carried out has been concerned with the delamination of laminated structures, whereas skin-core debonding has received relatively minor attention. Furthermore, research on composite slabs with multiple skin-core debonds is very limited. To address this gap, a comprehensive study investigating the dynamic behaviour of composite panels with single and multiple debonding is presented. The study uses finite-element modelling and analyses to investigate the influence of debonding on the free vibration behaviour of single and multilayer composite sandwich panels. A broad parametric investigation has been carried out by varying debonding locations, debonding sizes and support conditions of the panels, considering both single and multiple debonding. Numerical models were developed with the Strand7 finite element package by selecting suitable elements to represent the actual behaviour of the panels. Three-dimensional finite element models were employed to simulate the physically real situation as closely as possible, with the use of an experimentally and numerically validated finite element model. Comparative results and conclusions based on the analyses are presented. For similar extents and locations of debonding, the effect of debonding on natural frequencies appears greatly dependent on the end conditions of the panel, with a greater decrease in natural frequency when the panels are more restrained. Some modes are more sensitive to debonding, and this sensitivity seems to be related to their vibration mode shapes. The fundamental mode is generally the least sensitive to debonding with respect to the variation in free vibration characteristics. The results indicate the effectiveness of the developed three-dimensional finite element models in assessing debonding damage in composite sandwich panels.
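
For readers unfamiliar with the underlying computation, the sketch below extracts natural frequencies from the undamped generalized eigenproblem K·x = ω²·M·x, the quantity tracked in the debonding study, using toy matrices; the "debond" is mimicked by a crude local stiffness reduction and is not the panels' FE model.

```python
import numpy as np
from scipy.linalg import eigh

K = np.array([[ 4.0e6, -2.0e6,  0.0   ],
              [-2.0e6,  4.0e6, -2.0e6 ],
              [ 0.0,   -2.0e6,  2.0e6 ]])   # hypothetical stiffness matrix (N/m)
M = np.diag([10.0, 10.0, 10.0])             # hypothetical mass matrix (kg)

eigvals, _ = eigh(K, M)                     # generalized symmetric eigenproblem
freqs_hz = np.sqrt(eigvals) / (2.0 * np.pi)
print("intact natural frequencies [Hz]:", np.round(freqs_hz, 2))

K_debond = K.copy()
K_debond[2, 2] *= 0.8                       # crude local stiffness loss to mimic a debond
freqs_debond = np.sqrt(eigh(K_debond, M)[0]) / (2.0 * np.pi)
print("'debonded' natural frequencies [Hz]:", np.round(freqs_debond, 2))
```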

Keywords: Debonding, free vibration behaviour, GFRP sandwich panels, three dimensional finite element modelling.

12712 The Microstructural and Mechanical Characterization of Organo-Clay-Modified Bitumen, Calcareous Aggregate, and Organo-Clay Blends

Authors: A. Gürses, T. B. Barın, Ç. Doğar

Abstract:

Bitumen, a viscous organic mixture with various chemical compositions, has been widely used as the binder of aggregate in road pavements due to its good viscoelastic properties. Bitumen is a liquid at high temperatures and becomes brittle at low temperatures, and this temperature sensitivity can cause rutting and cracking of the pavement and limit its application. Therefore, the properties of existing asphalt materials need to be enhanced. Pavements with polymer-modified bitumen exhibit greater resistance to rutting and thermal cracking, decreased fatigue damage, as well as reduced stripping and temperature susceptibility; however, they are expensive and their applications have disadvantages. Bituminous mixtures are composed of very irregular aggregates bound together with hydrocarbon-based asphalt, with a low volume fraction of voids dispersed within the matrix. Montmorillonite (MMT) is a layered silicate of low cost and high abundance, which consists of layers of tetrahedral silicate and octahedral hydroxide sheets. Recently, layered silicates have been widely used for the modification of polymers, as well as in many different fields. However, there are currently few studies on the preparation of MMT-modified asphalt. In this study, organo-clay-modified bitumen, and calcareous aggregate and organo-clay blends, were prepared by a hot blending method with OMMT, which was synthesized from MMT using a cationic surfactant (cetyltrimethylammonium bromide, CTAB) and a long-chain hydrocarbon. When the exchangeable cations in the interlayer region of pristine MMT are exchanged with hydrocarbon-attached surfactant ions, the MMT becomes organophilic and more compatible with bitumen. The effects of the superhydrophobic OMMT on the microstructural and mechanical properties (Marshall stability and volumetric parameters) of the prepared blends were investigated. The stability and volumetric parameters of the prepared blends were measured using the Marshall test. Also, in order to investigate the morphological and microstructural properties of the organo-clay-modified bitumen and the calcareous aggregate and organo-clay blends, their SEM and HRTEM images were taken. It was observed that the stability and volumetric parameters of the prepared mixtures improved significantly compared to conventional hot mixes and even the stone matrix mixture. A microstructural analysis based on SEM images indicates that the organo-clay platelets dispersed in the bitumen have a dominant role in the increased effectiveness of bitumen-aggregate interactions.

Keywords: Hot mix asphalt, stone matrix asphalt, organo clay, Marshall Test, calcareous aggregate, modified bitumen.

12711 Analysis of Driver Point of Regard Determinations with Eye-Gesture Templates Using Receiver Operating Characteristic

Authors: Siti Nor Hafizah binti Mohd Zaid, Mohamed Abdel-Maguid, Abdel-Hamid Soliman

Abstract:

An Advanced Driver Assistance System (ADAS) is a computer system on board a vehicle which is used to reduce the risk of vehicular accidents by monitoring factors relating to the driver, vehicle and environment and taking some action when a risk is identified. Much work has been done on assessing vehicle and environmental state, but there is still comparatively little published work that tackles the problem of driver state. Visual attention is one such driver state. In fact, some researchers claim that lack of attention is the main cause of accidents, as factors such as fatigue, alcohol or drug use, distraction and speeding all impair the driver's capacity to pay attention to the vehicle and road conditions [1]. This seems to imply that the main cause of accidents is inappropriate driver behaviour in cases where the driver is not giving full attention while driving. The work presented in this paper proposes an ADAS system which uses an image-based template matching algorithm to detect if a driver is failing to observe particular windscreen cells. This is achieved by dividing the windscreen into 24 uniform cells (4 rows of 6 columns) and matching video images of the driver's left eye with eye-gesture templates drawn from images of the driver looking at the centre of each windscreen cell. The main contribution of this paper is to assess the accuracy of this approach using Receiver Operating Characteristic analysis. The results of our evaluation give a sensitivity value of 84.3% and a specificity value of 85.0% for the eye-gesture template approach, indicating that it may be useful for driver point of regard determinations.
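
The sensitivity and specificity figures follow directly from the confusion counts of the classifier. The sketch below shows the standard definitions with made-up counts chosen only to reproduce similar percentages; it is not the study's actual data.

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    sens = tp / (tp + fn)   # true positive rate: observed cells correctly matched
    spec = tn / (tn + fp)   # true negative rate: non-matching cells correctly rejected
    return sens, spec

# Hypothetical counts that yield roughly the reported 84.3% / 85.0% operating point
sens, spec = sensitivity_specificity(tp=843, fn=157, tn=850, fp=150)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```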

Keywords: Advanced Driver Assistance Systems, Eye-Tracking, Hazard Detection.

12710 Lamb Wave Wireless Communication in Healthy Plates Using Coherent Demodulation

Authors: Rudy Bahouth, Farouk Benmeddour, Emmanuel Moulin, Jamal Assaad

Abstract:

Guided ultrasonic waves are used in Non-Destructive Testing and Structural Health Monitoring for inspection and damage detection. Recently, wireless data transmission using ultrasonic waves in solid metallic channels has gained popularity in some industrial applications such as nuclear, aerospace and smart vehicles. The idea is to find a good substitute for electromagnetic waves, since they are highly attenuated near metallic components due to Faraday shielding. The proposed solution is to use ultrasonic guided waves such as Lamb waves as an information carrier due to their capability of propagating over long distances. In addition, valuable information about the health of the structure could be extracted simultaneously. In this work, the reliable frequency bandwidth for communication is first extracted experimentally from dispersion curves. Then, an experimental platform for wireless communication using Lamb waves is described and built. After this, the coherent demodulation algorithm used in telecommunications is tested for Amplitude Shift Keying, On-Off Keying and Binary Phase Shift Keying modulation techniques. Signal processing parameters such as threshold choice, number of cycles per bit and bit rate are optimized. Experimental results are compared based on the average bit error percentage. Results have shown high sensitivity to threshold selection for the Amplitude Shift Keying and On-Off Keying techniques, resulting in a bit rate decrease. The Binary Phase Shift Keying technique shows the highest stability and data rate among all tested modulation techniques.
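
A minimal sketch of coherent BPSK demodulation is shown below on a synthetic noisy signal: each bit interval is correlated with a synchronized carrier replica and sliced on the sign of the correlation. The sample rate, carrier frequency, cycles per bit and noise level are assumptions, not the Lamb-wave experiment's values.

```python
import numpy as np

FS = 100_000          # Hz, sample rate (assumed)
FC = 5_000            # Hz, carrier frequency (assumed; a real Lamb-wave carrier would sit
                      # inside the dispersion-derived band mentioned in the abstract)
CYCLES_PER_BIT = 10   # carrier cycles per bit (assumed)
SPB = int(FS * CYCLES_PER_BIT / FC)   # samples per bit

rng = np.random.default_rng(2)
bits = rng.integers(0, 2, 64)
t = np.arange(len(bits) * SPB) / FS
carrier = np.cos(2 * np.pi * FC * t)
tx = np.repeat(2 * bits - 1, SPB) * carrier          # BPSK: carrier phase 0 or pi
rx = tx + rng.normal(0.0, 0.5, tx.size)              # additive-noise channel

# Coherent demodulation: multiply by the carrier replica, integrate per bit, slice on sign
corr = (rx * carrier).reshape(len(bits), SPB).sum(axis=1)
rx_bits = (corr > 0).astype(int)
ber = np.mean(rx_bits != bits) * 100
print(f"bit error percentage: {ber:.1f}%")
```

Because the decision relies on the sign of the correlation rather than an amplitude threshold, this scheme is insensitive to the threshold choice that penalizes ASK and OOK in the experiments.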

Keywords: Lamb Wave Communication, wireless communication, coherent demodulation, bit error percentage.

12709 Frequency Response of Complex Systems with Localized Nonlinearities

Authors: E. Menga, S. Hernandez

Abstract:

Finite Element Models (FEMs) are widely used in order to study and predict the dynamic properties of structures, and usually the prediction can be obtained with much more accuracy in the case of a single component than in the case of assemblies. Especially for structural dynamics studies in the low and middle frequency range, most complex FEMs can be seen as assemblies made of linear components joined together at interfaces. From a modelling and computational point of view, these types of joints can be seen as localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other hand, most FE programs are able to run nonlinear analysis in the time domain. They treat the whole structure as nonlinear even if there is only one nonlinear degree of freedom (DOF) out of thousands of linear ones, making the analysis unnecessarily expensive from a computational point of view. In this work, a methodology for obtaining the nonlinear frequency response of structures whose nonlinearities can be considered as localized sources is presented. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications, and allows obtaining the Nonlinear Frequency Response Functions (NLFRFs) through an ‘updating’ process of the Linear Frequency Response Functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and examining the implications of the nonlinear one. The response of the system is formulated in both the time and frequency domains. First, the modal database is extracted and the linear response is calculated. Secondly, the nonlinear response is obtained through the NL SDMM, by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems. The first one is a two-DOF spring-mass-damper system, and the second example takes into account a full aircraft FE model. In spite of the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure, which allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE model can be considered as acting linearly and the nonlinear behavior is restricted to a few degrees of freedom. The procedure is very attractive from a computational point of view because the FEM needs to be run just once, which allows faster nonlinear sensitivity analysis and easier implementation of optimization procedures for the calibration of nonlinear models.
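
As a very simplified illustration of "updating" a linear FRF for a localized nonlinearity (an equivalent-linearization sketch, not the authors' NL SDMM implementation), the code below iterates an amplitude-dependent stiffness modification for a two-DOF system with an assumed grounded cubic spring; all numerical values are assumptions.

```python
import numpy as np

M = np.diag([1.0, 1.0])
K = np.array([[2000.0, -1000.0],
              [-1000.0,  1000.0]])
C = 0.002 * K                       # light proportional damping (assumed)
k3 = 1.0e6                          # cubic stiffness coefficient at DOF 2 (assumed)
F = np.array([1.0, 0.0])            # unit harmonic force at DOF 1

freqs_hz = np.linspace(1.0, 15.0, 300)
X = np.zeros((freqs_hz.size, 2), dtype=complex)

for i, f in enumerate(freqs_hz):
    w = 2.0 * np.pi * f
    x = np.linalg.solve(K - w**2 * M + 1j * w * C, F)      # underlying linear FRF
    for _ in range(50):                                    # 'updating' loop
        a = abs(x[1])                                      # amplitude at the nonlinear DOF
        dK = np.zeros((2, 2))
        dK[1, 1] = 0.75 * k3 * a**2                        # describing-function stiffness
        x_new = np.linalg.solve(K + dK - w**2 * M + 1j * w * C, F)
        if np.allclose(x_new, x, rtol=1e-6, atol=1e-12):
            x = x_new
            break
        x = 0.5 * x + 0.5 * x_new                          # relaxation for stability
    X[i] = x

print("peak |X2| over the band:", np.abs(X[:, 1]).max())
```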

Keywords: Frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber.

12708 Considerations for Effectively Using Probability of Failure as a Means of Slope Design Appraisal for Homogeneous and Heterogeneous Rock Masses

Authors: Neil Bar, Andrew Heweston

Abstract:

Probability of failure (PF) often appears alongside factor of safety (FS) in design acceptance criteria for rock slope, underground excavation and open pit mine designs. However, the design acceptance criteria generally provide no guidance relating to how PF should be calculated for homogeneous and heterogeneous rock masses, or what qualifies as a ‘reasonable’ PF assessment for a given slope design. Observational and kinematic methods were widely used in the 1990s until advances in computing permitted the routine use of numerical modelling. In the 2000s and early 2010s, PF in numerical models was generally calculated using the point estimate method. More recently, some limit equilibrium analysis software packages offer statistical parameter inputs along with Monte-Carlo or Latin-Hypercube sampling methods to automatically calculate PF. Factors including rock type and density, weathering and alteration, intact rock strength, rock mass quality and shear strength, the location and orientation of geologic structure, shear strength of geologic structure and groundwater pore pressure influence the stability of rock slopes. Significant engineering and geological judgment, interpretation and data interpolation are usually applied in determining these factors and amalgamating them into a geotechnical model which can then be analysed. Most factors are estimated ‘approximately’ or with allowances for some variability rather than ‘exactly’. When it comes to numerical modelling, some of these factors are then treated deterministically (i.e. as exact values), while others have probabilistic inputs based on the user’s discretion and understanding of the problem being analysed. This paper discusses the importance of understanding the key aspects of slope design for homogeneous and heterogeneous rock masses and how they can be translated into reasonable PF assessments where the data permit. A case study from a large open pit gold mine in a complex geological setting in Western Australia is presented to illustrate how PF can be calculated using different methods and can yield markedly different results. Ultimately, sound engineering judgement and logic are often required to decipher the true meaning and significance (if any) of some PF results.
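
A minimal Monte-Carlo sketch of the PF concept is shown below: sample the uncertain strength parameters, evaluate a factor of safety, and report the fraction of samples with FS below 1. The limit-equilibrium expression and all distributions are toy assumptions, not the case-study model.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000
cohesion = rng.normal(50.0, 10.0, N)         # kPa, assumed mean/std
phi = np.radians(rng.normal(35.0, 3.0, N))   # friction angle, assumed
driving = rng.normal(150.0, 20.0, N)         # kPa, assumed driving shear stress
normal_stress = 200.0                        # kPa, treated deterministically here

fs = (cohesion + normal_stress * np.tan(phi)) / driving   # toy FS expression
pf = np.mean(fs < 1.0)                                    # probability of failure
print(f"mean FS = {fs.mean():.2f}, PF = {pf:.2%}")
```

Swapping the sampling scheme (e.g. Latin-Hypercube instead of simple Monte-Carlo) or treating more inputs deterministically changes the PF estimate, which is the sensitivity the paper highlights.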

Keywords: Probability of failure, point estimate method, Monte-Carlo simulations, sensitivity analysis, slope stability.

12707 Composite Coatings of Piezoelectric Quartz Sensors Based on Viscous Sorbents and Casein Micelles

Authors: Anastasiia Shuba, Tatiana Kuchmenko, Umarkhanov Ruslan, Bogdanova Ekaterina

Abstract:

The development of new sensitive coatings for sensors is one of the key directions in the development of sensor technologies. Recently, there has been a trend towards the creation of multicomponent coatings for sensors, which make it possible to increase the sensitivity and specificity and improve the performance properties of sensors. When analyzing samples with a complex matrix of biological origin, the inclusion of micelles of bioactive substances (amino and nucleic acids, peptides, proteins) in the composition of the sensor coating can also increase the useful analytical information. The purpose of this work is to evaluate the analytical characteristics of composite coatings of piezoelectric quartz sensors based on medium-molecular viscous sorbents with incorporated micellar casein concentrate during the sorption of vapors of volatile organic compounds. The sorption properties of the coatings were studied by piezoelectric quartz microbalance. Macromolecular compounds (dicyclohexyl-18-crown-6, Triton X-100, lanolin, micellar casein concentrate) were used as sorbents. Highly volatile organic compounds of various classes (alcohols, acids, aldehydes, esters) and water were selected as test substances. It has been established that composite coatings of sensors with the inclusion of micellar casein are more stable and more selective towards vapors of highly volatile compounds than towards water vapor. The method and technique of forming a composite coating using molecular viscous sorbents do not affect the kinetic features of VOC sorption. When casein micelles are used, the features of sorption kinetics depend on the matrix of the coating.

Keywords: Composite coating, piezoelectric quartz microbalance, sensor, volatile organic compounds.

12706 CAGE Questionnaire as a Screening Tool for Hazardous Drinking in an Acute Admissions Ward: Frequency of Application and Comparison with AUDIT-C Questionnaire

Authors: Ammar Ayad Issa Al-Rifaie, Zuhreya Muazu, Maysam Ali Abdulwahid, Dermot Gleeson

Abstract:

The aim of this audit was to examine the efficiency of alcohol history documentation and screening for hazardous drinkers at the Medical Admission Unit (MAU) of Northern General Hospital (NGH), Sheffield, to identify any potential for enhancing clinical practice. Data were collected from medical clerking sheets, the ICE system and directly from 82 patients by three junior medical doctors, using both the CAGE questionnaire and the AUDIT-C tool for patients newly admitted to the MAU in NGH, in the period between January and March 2015. Alcohol consumption was documented in around two-thirds of the patient sample, and this was documented fairly accurately by health care professionals. Some used subjective words such as 'social drinking' in the alcohol units section of the history. The CAGE questionnaire was applied to only four patients, and none of the patients had documented advice, education or referral to an alcohol liaison team. The AUDIT-C tool identified 30.4% of patients admitted to the NGH MAU as hazardous drinkers, while CAGE identified 10.9%. The amount of alcohol the patient consumes correlated positively with the AUDIT-C score (Pearson correlation 0.83). A re-audit is planned after integrating the AUDIT-C tool as labels in the notes and presenting a brief teaching session to junior doctors. Alcohol misuse screening is not adequately undertaken, and no appropriate action is being offered to hazardous drinkers. The CAGE questionnaire is poorly applied to patients and, even when satisfactorily and adequately used, has low sensitivity for detecting hazardous drinkers in comparison with the AUDIT-C tool. A re-audit of alcohol screening practice after introducing the AUDIT-C tool in clerking sheets (as labels) is required to compare the findings and conclude the audit cycle.

Keywords: Alcohol screening, AUDIT-C, CAGE, Hazardous drinking.

12705 Synthesis of Highly Sensitive Molecular Imprinted Sensor for Selective Determination of Doxycycline in Honey Samples

Authors: Nadia El Alami El Hassani, Soukaina Motia, Benachir Bouchikhi, Nezha El Bari

Abstract:

Doxycycline (DXy) is a tetracycline antibiotic, most frequently prescribed to treat bacterial infections in veterinary medicine. However, its broad antimicrobial activity and low cost lead to intensive use, which can seriously affect human health. Therefore, its spread in food products has to be monitored. The scope of this work was to synthesize a sensitive and very selective molecularly imprinted polymer (MIP) for DXy detection in honey samples. Firstly, the synthesis of this biosensor was performed by casting a layer of carboxylated polyvinyl chloride (PVC-COOH) on the working surface of a gold screen-printed electrode (Au-SPE) in order to bind the analyte covalently under mild conditions. Secondly, DXy as a template molecule was bound to the activated carboxylic groups, and the MIP was formed with a biocompatible polymer by means of a polyacrylamide matrix. Then, DXy was detected by differential pulse voltammetry (DPV) measurements. A non-imprinted polymer (NIP), prepared under the same conditions but without the use of the template molecule, was also studied. The elaborated biosensor exhibits high sensitivity and a linear relationship between the measured current and the logarithm of the DXy concentration from 0.1 pg.mL−1 to 1000 pg.mL−1. This technique was successfully applied to determine DXy residues in honey samples, with a limit of detection (LOD) of 0.1 pg.mL−1 and excellent selectivity when compared to the results for oxytetracycline (OXy) as an analogous interfering compound. The proposed method is cheap, sensitive, selective and simple, and was applied successfully to detect DXy in honey with recoveries of 87% and 95%. Considering these advantages, this system provides a further perspective for food quality control in industrial fields.
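
As an illustration of how such a calibration is typically summarized, the sketch below fits the peak current against the logarithm of concentration over the reported range; the current values are hypothetical, and only the concentration range is taken from the abstract.

```python
import numpy as np

log_conc = np.log10(np.array([0.1, 1.0, 10.0, 100.0, 1000.0]))   # pg/mL, reported range
peak_current_uA = np.array([0.42, 0.80, 1.21, 1.58, 2.01])       # hypothetical DPV responses

slope, intercept = np.polyfit(log_conc, peak_current_uA, 1)       # linear calibration fit
r = np.corrcoef(log_conc, peak_current_uA)[0, 1]                  # linearity check
print(f"slope = {slope:.3f} uA/decade, intercept = {intercept:.3f} uA, r = {r:.4f}")
# The reported LOD (0.1 pg/mL) corresponds to the lowest point of this calibration range.
```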

Keywords: Electrochemical sensor, molecular imprinted polymer, doxycycline, food control.

12704 High Securing Cover-File of Hidden Data Using Statistical Technique and AES Encryption Algorithm

Authors: A. A. Zaidan, Anas Majeed, B. B. Zaidan

Abstract:

Nowadays, the rapid development of multimedia and the internet allows for wide distribution of digital media data. It has become much easier to edit, modify and duplicate digital information. Besides that, digital documents are also easy to copy and distribute, and therefore they face many threats. This is a major security and privacy issue: with the large flood of information and the development of digital formats, it has become necessary to find appropriate protection because of the significance, accuracy and sensitivity of the information. Nowadays, protection systems can be classified more specifically into information hiding, information encryption, and combinations of hiding and encryption to increase information security. The strength of information hiding science is due to the non-existence of standard algorithms to be used in hiding secret messages. There is also randomness in hiding methods, such as combining several media (covers) with different methods to pass a secret message. In addition, there are no formal methods to be followed to discover the hidden data. For this reason, the task of this research becomes difficult. In this paper, a new system of information hiding is presented. The proposed system aims to hide information (a data file) in any executable file (EXE) and to detect the hidden file; an implementation of a steganography system which embeds information in an executable file is presented, and (EXE) files have been investigated. The system tries to find a solution to the size of the cover file and to making it undetectable by anti-virus software. The system includes two main functions; the first is the hiding of the information in a Portable Executable file (EXE), through the execution of four processes (specify the cover file, specify the information file, encrypt the information, and hide the information), and the second function is the extraction of the hidden information through three processes (specify the stego file, extract the information, and decrypt the information). The system has achieved the main goals, such as making the size of the cover file independent of the size of the hidden information, while the resulting file does not cause any conflict with anti-virus software.

Keywords: Cryptography, Steganography, Portable Executable File.

12703 Nanomaterial Based Electrochemical Sensors for Endocrine Disrupting Compounds

Authors: Gaurav Bhanjana, Ganga Ram Chaudhary, Sandeep Kumar, Neeraj Dilbaghi

Abstract:

The main sources of endocrine disrupting compounds in the ecosystem are hormones, pesticides, phthalates, flame retardants, dioxins, personal-care products, coplanar polychlorinated biphenyls (PCBs), bisphenol A, and parabens. These endocrine disrupting compounds are responsible for learning disabilities, brain development problems, deformations of the body, cancer, reproductive abnormalities in females and decreased sperm count in human males. Although the discharge of these chemical compounds into the environment cannot be stopped, their amounts can be reduced through proper evaluation and detection techniques. The available techniques for the determination of these endocrine disrupting compounds mainly include high performance liquid chromatography (HPLC), mass spectrometry (MS) and gas chromatography-mass spectrometry (GC–MS). These techniques are accurate and reliable but have certain limitations, such as the need for skilled personnel, long analysis times, interference and the requirement for pretreatment steps. Moreover, these techniques are laboratory-bound, and large sample amounts are required for analysis. In view of the above facts, new methods for the detection of endocrine disrupting compounds should be devised that promise high specificity, ultra-sensitivity, cost-effectiveness, efficiency and ease of operation. Nowadays, electrochemical sensors/biosensors modified with nanomaterials are gaining high attention among researchers. The bioelement present in such a system makes the developed sensors selective towards the analyte of interest. Nanomaterials provide a large surface area, high electron communication capability, enhanced catalytic activity and possibilities for chemical modification. In most cases, nanomaterials also serve as an electron mediator or electrocatalyst for some analytes.

Keywords: Sensors, endocrine disruptors, nanoparticles, electrochemical, microscopy.

12702 Evaluation of the Heating Capability and in vitro Hemolysis of Nanosized MgxMn1-xFe2O4 (x = 0.3 and 0.4) Ferrites Prepared by Sol-gel Method

Authors: Laura Elena De León Prado, Dora Alicia Cortés Hernández, Javier Sánchez

Abstract:

Among the different cancer treatments that are currently used, hyperthermia has promising potential due to the multiple benefits that are obtained by this technique. In general terms, hyperthermia is a method that takes advantage of the sensitivity of cancer cells to heat in order to damage or destroy them. Among the different ways of supplying heat to cancer cells to achieve their destruction or damage, the use of magnetic nanoparticles has attracted attention due to the capability of these particles to generate heat under the influence of an external magnetic field. In addition, these nanoparticles have a high surface area and sizes similar to or even smaller than biological entities, which allows them to approach and interact with a specific region of interest. The most used magnetic nanoparticles for hyperthermia treatment are those based on iron oxides, mainly magnetite and maghemite, due to their biocompatibility, good magnetic properties and chemical stability. However, in order to fulfill more efficiently the requirements demanded by magnetic hyperthermia treatment, ferrites that incorporate different metallic ions, such as Mg, Mn, Co, Ca, Ni, Cu, Li, Gd, etc., in their structure have been investigated. This paper reports the synthesis of nanosized MgxMn1-xFe2O4 (x = 0.3 and 0.4) ferrites by the sol-gel method and their evaluation in terms of heating capability and in vitro hemolysis, to determine the potential use of these nanoparticles as thermoseeds for the treatment of cancer by magnetic hyperthermia. It was possible to obtain ferrites with nanometric sizes, a single crystalline phase with an inverse spinel structure and behavior close to that of superparamagnetic materials. Additionally, at concentrations of 10 mg of magnetic material per mL of water, it was possible to reach a temperature of approximately 45°C, which is within the range of temperatures used for hyperthermia treatment. The results of the in vitro hemolysis assay showed that, at the concentrations tested, these nanoparticles are non-hemolytic, as their percentage of hemolysis is close to zero. Therefore, these materials can be used as thermoseeds for the treatment of cancer by magnetic hyperthermia.

Keywords: Ferrites, heating capability, hemolysis, nanoparticles, sol-gel.

12701 From Primer Generation to Chromosome Identification: A Primer Generation Genotyping Method for Bacterial Identification and Typing

Authors: Wisam H. Benamer, Ehab A. Elfallah, Mohamed A. Elshaari, Farag A. Elshaari

Abstract:

A challenge for laboratories is to provide bacterial identification and antibiotic sensitivity results within a short time. Hence, advancement in the required technology is desirable to improve timing, accuracy and quality. Even with the current advances in the methods used for both phenotypic and genotypic identification of bacteria, there is still a need to develop method(s) that enhance the outcome of bacteriology laboratories in accuracy and time. The hypothesis introduced here is based on the assumption that the chromosome of any bacterium contains unique sequences that can be used for its identification and typing. The outcome of a pilot study designed to test this hypothesis is reported in this manuscript. Methods: The complete chromosome sequences of several bacterial species were downloaded to use as search targets for unique sequences. Visual Basic and SQL Server (2014) were used to generate a complete set of 18-base-long primers, a process that started with the reverse translation of six randomly chosen amino acids to limit the number of generated primers. In addition, the software used to scan the downloaded chromosomes for similarities with the generated primers was designed, and the resulting hits were classified according to the number of similar chromosomal sequences, i.e., unique or otherwise. Results: All primers that had identical/similar sequences in the selected genome sequence(s) were classified according to the number of hits in the chromosome search. Those that were identical to a single site on a single bacterial chromosome were referred to as unique. On the other hand, most generated primer sequences were identical to multiple sites on a single chromosome or on multiple chromosomes. Following scanning, the generated primers were classified based on their ability to differentiate between medically important bacteria, and the initial results look promising. Conclusion: A simple strategy that starts by generating primers was introduced; the primers were used to screen bacterial genomes for matches. Primer(s) that were uniquely identical to a specific DNA sequence on a specific bacterial chromosome were selected. The identified unique sequences can be used in different molecular diagnostic techniques, possibly to identify bacteria. In addition, a single primer that can identify multiple sites in a single chromosome can be exploited for region or genome identification. Although draft genome sequences of isolates enable high-throughput primer design using alignment strategies, which enhances diagnostic performance in comparison to traditional molecular assays, in this method the generated primers can be used to identify an organism before the draft sequence is completed. In addition, the generated primers can be used to build a primer bank for easy access when identifying bacteria.
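
A toy sketch of the primer-generation and scanning idea is given below (written in Python rather than the Visual Basic/SQL Server implementation described): reverse-translate a 6-amino-acid peptide into 18-base candidates using a deliberately tiny codon table, then count occurrences in a toy chromosome string to flag "unique" primers. The peptide, codon subset and sequence are all made up for illustration.

```python
from itertools import product

# Tiny subset of the codon table, used only for illustration (not exhaustive).
REVERSE_CODONS = {
    "M": ["ATG"], "W": ["TGG"],
    "F": ["TTT", "TTC"], "K": ["AAA", "AAG"],
    "D": ["GAT", "GAC"], "E": ["GAA", "GAG"],
}

def generate_primers(peptide: str):
    """All 18-base primers obtainable by reverse translation of a 6-amino-acid peptide."""
    codon_options = [REVERSE_CODONS[aa] for aa in peptide]
    return ["".join(codons) for codons in product(*codon_options)]

def classify(primers, chromosome: str):
    """Count how many times each candidate primer occurs in the chromosome sequence."""
    return {p: chromosome.count(p) for p in primers}

chromosome = "ATGTTTAAAGATGAAATGTGG" * 3 + "ATGTTCAAGGACGAGATGTGG"   # toy sequence
hits = classify(generate_primers("MFKDEM"), chromosome)
unique = [p for p, n in hits.items() if n == 1]
print(f"{len(hits)} candidate primers, {len(unique)} unique in this toy chromosome")
```

A production pipeline would scan complete downloaded chromosomes and classify hits across multiple species, but the unique/non-unique bookkeeping is the same.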

Keywords: Bacteria chromosome, bacterial identification, sequence, primer generation.

12700 Influence of Thermo-fluid-dynamic Parameters on Fluidics in an Expanding Thermal Plasma Deposition Chamber

Authors: G. Zuppardi, F. Romano

Abstract:

The technology of thin film deposition is of interest in many engineering fields, from electronic manufacturing to corrosion-protective coating. A typical deposition process, like that developed at the University of Eindhoven, considers the deposition of a thin, amorphous film of C:H or of Si:H on the substrate, using the Expanding Thermal arc Plasma technique. In this paper a computing procedure is proposed to simulate the flow field in a deposition chamber similar to that at the University of Eindhoven, and a sensitivity analysis is carried out in terms of precursor mass flow rate, electrical power supplied to the torch, and fluid-dynamic characteristics of the plasma jet, using different nozzles. For this purpose a deposition chamber similar in shape, dimensions and operating parameters to the above-mentioned chamber is considered. Furthermore, a method is proposed for a very preliminary evaluation of the film thickness distribution on the substrate. The computing procedure relies on two codes working in tandem; the output of the first code is the input to the second one. The first code simulates the flow field in the torch, where argon is ionized according to the Saha equation, and in the nozzle. The second code simulates the flow field in the chamber. Due to the high rarefaction level, this is a (commercial) Direct Simulation Monte Carlo code. The gas is a mixture of 21 chemical species, and 24 chemical reactions involving argon plasma and acetylene are implemented in both codes. The effects of the above-mentioned operating parameters are evaluated and discussed by means of 2-D maps and profiles of some important thermo-fluid-dynamic parameters, such as Mach number, velocity and temperature. The intensity, position and extension of the shock wave are evaluated, and the influence of the above-mentioned test conditions on the film thickness and uniformity of distribution is also evaluated.
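
For reference, the standard textbook form of the Saha ionization equation used for an argon ionization balance is given below (not quoted from the paper); n_e, n_i and n_0 are the electron, ion and neutral number densities, g_i/g_0 the degeneracy ratio, m_e the electron mass and E_i the ionization energy.

```latex
\frac{n_e\, n_i}{n_0} \;=\; \frac{2 g_i}{g_0}\left(\frac{2\pi m_e k_B T}{h^2}\right)^{3/2}
\exp\!\left(-\frac{E_i}{k_B T}\right)
```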

Keywords: Deposition chamber, Direct Simulation Monte Carlo method (DSMC), Plasma chemistry, Rarefied gas dynamics.

12699 Methods for Material and Process Monitoring by Characterization of (Second and Third Order) Elastic Properties with Lamb Waves

Authors: R. Meier, M. Pander

Abstract:

In accordance with the industry 4.0 concept, manufacturing process steps as well as the materials themselves are going to be more and more digitalized within the next years. The “digital twin”, representing the simulated and measured dataset of the (semi-finished) product, can be used to control and optimize the individual processing steps and helps to reduce costs and expenditure of time in product development, manufacturing, and recycling. In the present work, two material characterization methods based on Lamb waves were evaluated and compared. For demonstration purposes, both methods were applied to a standard industrial product - copper ribbons, often used in photovoltaic modules as well as in high-current microelectronic devices. By numerically fitting the Rayleigh-Lamb dispersion model to measured phase velocities, the second-order elastic constants (Young’s modulus, Poisson’s ratio) were determined. Furthermore, the effective third-order elastic constants were evaluated by applying elastic, “non-destructive”, mechanical stress to the samples. In this way, small microstructural variations due to mechanical preconditioning could be detected for the first time. Both methods were compared with respect to precision and inline application capabilities. The microstructure of the samples was systematically varied by mechanical loading and annealing. Changes in the elastic ultrasound transport properties were correlated with results from microstructural analysis and mechanical testing. In summary, monitoring the elastic material properties of plate-like structures using Lamb waves is valuable for inline and non-destructive material characterization and manufacturing process control. Second-order elastic constant analysis is robust over wide environmental and sample conditions, whereas the effective third-order elastic constants greatly increase the sensitivity to small microstructural changes. Both Lamb wave based characterization methods fit perfectly into the industry 4.0 concept.
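
For context, the Rayleigh-Lamb dispersion relations referred to above are commonly written as below (standard textbook form, not quoted from the paper), where h is the plate half-thickness, k the wavenumber, ω the angular frequency, and c_L, c_T the longitudinal and transverse bulk velocities that in turn determine Young's modulus and Poisson's ratio.

```latex
% Symmetric (S) and antisymmetric (A) Lamb modes in a plate of half-thickness h:
\frac{\tan(qh)}{\tan(ph)} = -\frac{4k^2 p q}{(q^2 - k^2)^2} \quad \text{(S modes)}, \qquad
\frac{\tan(qh)}{\tan(ph)} = -\frac{(q^2 - k^2)^2}{4k^2 p q} \quad \text{(A modes)},
\qquad p^2 = \frac{\omega^2}{c_L^2} - k^2, \quad q^2 = \frac{\omega^2}{c_T^2} - k^2 .
```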

Keywords: Lamb waves, industry 4.0, process control, elasticity, acoustoelasticity.

12698 Libretto Thematology in Rossini's Operas and Its Formation by the Composer

Authors: Areti Tziboula, Anna-Maria Rentzeperi-Tsonou

Abstract:

The present study examines the way Gioachino Rossini’s librettos are selected and formed, demonstrating the evolutionary trajectory of the composer during his operatic career. Rossini, a dominant figure in early 19th-century Italian opera, is demanding in his choice of librettos and has a preference for subjects inspired by European literature, of his time or earlier. He begins his operatic career with farse and opere buffe, but he mainly continues with opere serie, to end it with a grand opera that conforms to the spirit of romanticism as manifested in the Paris of his time. His farse, opere buffe and comic operas in general are representative of the trends of the time: in some the irrational and exaggeration prevail, in others the upheavals, others are semi-serious and emotional with a happy ending, and others are comedies with more realistic characters, but usually the styles are mixed and complement each other. The stories that refer to his modern era unfold by mocking human characters, beliefs, attitudes and their expression in everyday habits, satirizing current affairs, presenting innovative elements in dramatic intervention and dealing with a variety of social and national issues. Count Ory, his final comic work, is a complex, witty urban comic opera entwined with romantic sensitivity. The themes he chooses for his opere serie are characterized by tragic passion, take place in the era of the Trojan War, the Roman Empire, the Middle Ages, and the Age of the Crusades, and are set in Italy, England, Poland, Greece, Switzerland, Israel and Egypt. In his early works he sketches the characters remotely, objectively and with static, reflexive emotional expression and a happy ending. Then he continues with operas for the San Carlo Theater, which are characterized by experimentation and innovation, to end his Italian operatic career with the ostensibly backward but in fact tragic Semiramis, followed in Paris by William Tell, his ultimate dramatic achievement. There are indirect references to burning issues of his era, but the censorship of the time does not allow direct reference to topics that would upset the status quo. In addition, Rossini lives in a period of peace after the Napoleonic Wars, and by temperament he resists openly engaging in political strife. Furthermore, the need for survival necessitates the search for more profitable contracts. In conclusion, Rossini, as a liberal personality, shapes his librettos without interruptions or setbacks, with ideas that come out after much thought and with a strong sense of purpose. He moves from the moral and aesthetic clarity of the classical tradition of his early works to a more elaborate and morally ambiguous romantic style, in a moderate and hesitant way.

Keywords: Gioachino Rossini, libretto, nineteenth century music, opera.

12697 Detection of Temporal Change of Fishery and Island Activities by DNB and SAR on the South China Sea

Authors: I. Asanuma, T. Yamaguchi, J. Park, K. J. Mackin

Abstract:

Fishing lights on the sea surface can be detected by the Day and Night Band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (Suomi-NPP). The DNB covers the spectral range of 500 to 900 nm and provides high sensitivity. The DNB has difficulty separating fishing lights from lunar light reflected by clouds, which affects observations for half of the month. Fishing lights and other surface lights are distinguished from lunar light reflected by clouds by a method using the DNB and the infrared band, where the detection limits are defined as a function of the brightness temperature, with a difference from the maximum temperature for each level of DNB radiance and with the contrast of the DNB radiance against the background radiance. Fishing boats and structures on islands can be detected by Synthetic Aperture Radar (SAR) on polar-orbiting satellites using the microwaves reflected by surface targets. The SAR faces a trade-off between spatial resolution and coverage when detecting small targets like fishing boats. The distribution of fishing boats and island activities was detected by the scan-SAR narrow mode of Radarsat-2, which covers 300 km by 300 km with various combinations of polarizations. The fishing boats were detected as single pixels of highly scattering targets with the scan-SAR narrow mode, whose spatial resolution is 30 m. As the look-angle-dependent scattering signals exhibit significant differences, the standard deviations of the scattered signals for each look angle were taken into account as a threshold to separate the signals of fishing boats and island structures from background noise. It was difficult to validate the targets detected by the DNB against the SAR data because of the 6-hour time lag between the DNB observations at midnight and the SAR observations in the morning or evening. The temporal changes of island activities were detected as a change in the mean DNB intensity over a circular area corresponding to a certain scale of activities. The increase in mean DNB intensity corresponded to the beginning of dredging, and the change in intensity indicated the end of reclamation and the subsequent construction of facilities.
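
A minimal sketch of the per-look-angle thresholding idea is given below on synthetic clutter (not Radarsat-2 data): pixels exceeding the background mean plus an assumed multiple of the standard deviation, computed separately for each look angle, are flagged as targets.

```python
import numpy as np

rng = np.random.default_rng(4)
n_angles, n_pixels = 8, 5000
background = rng.gamma(shape=2.0, scale=1.0, size=(n_angles, n_pixels))  # speckle-like clutter
background[:, 100] += 15.0   # hypothetical bright target (e.g., a fishing boat) in every row

mean = background.mean(axis=1, keepdims=True)
std = background.std(axis=1, keepdims=True)
n_sigma = 5.0                                    # assumed threshold factor
detections = background > mean + n_sigma * std   # per-look-angle threshold

print("detections per look angle:", detections.sum(axis=1))
```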

Keywords: Day night band, fishery, SAR, South China Sea.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1097
12696 MIMO Radar-Based System for Structural Health Monitoring and Geophysical Applications

Authors: Davide D’Aria, Paolo Falcone, Luigi Maggi, Aldo Cero, Giovanni Amoroso

Abstract:

The paper presents a methodology for real-time structural health monitoring and geophysical applications. The key elements of the system are a high-performance MIMO radar sensor, an optical camera and a dedicated set of software algorithms encompassing interferometry, tomography and photogrammetry. The MIMO radar sensor proposed in this work provides extremely high sensitivity to displacements, making the system able to react to tiny deformations (as small as tens of microns) on time scales that span from milliseconds to hours. The MIMO architecture makes the system capable of providing a set of two-dimensional images of the observed scene, each mapped onto the azimuth-range plane with notable resolution in both dimensions and an outstanding repetition rate. The back-scattered energy, which is distributed in 3D space, is projected onto a 2D plane in which each pixel has as coordinates the line-of-sight distance and the cross-range azimuthal angle. At the same time, the high-performance processing unit allows the observed scene to be sensed with remarkably short refresh periods (as short as milliseconds), thus opening the way for combined static and dynamic structural health monitoring. Thanks to the smart TX/RX antenna array layout, the MIMO data can be processed through a tomographic approach to reconstruct the three-dimensional map of the observed scene. This 3D point cloud is then accurately mapped onto a 2D digital optical image through photogrammetric techniques, allowing easy and straightforward interpretation of the measurements. Once the three-dimensional image is reconstructed, a 'repeat-pass' interferometric approach is exploited to provide the user of the system with high-frequency three-dimensional motion/vibration estimates for each point of the reconstructed image. At this stage, the methodology leverages consolidated atmospheric correction algorithms to provide reliable displacement and vibration measurements.
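
As a rough illustration of the repeat-pass interferometric step, the sketch below converts the per-pixel phase difference between two complex acquisitions of the same scene into a line-of-sight displacement. The assumed carrier frequency and the simple phase wrapping are placeholders, not the parameters of the system described here.

```python
# Minimal sketch: line-of-sight displacement from repeat-pass interferometric phase.
import numpy as np

C = 3.0e8              # speed of light, m/s
F_CARRIER = 17.2e9     # assumed radar carrier frequency, Hz (placeholder)
WAVELENGTH = C / F_CARRIER

def los_displacement(scene_t0, scene_t1):
    """Per-pixel line-of-sight displacement (metres) between two complex
    radar images of the same scene, taken at times t0 and t1."""
    # Interferometric phase: angle of the conjugate product, wrapped to (-pi, pi].
    dphi = np.angle(scene_t1 * np.conj(scene_t0))
    # Two-way path: a phase change of 2*pi corresponds to lambda/2 of motion.
    return -WAVELENGTH / (4.0 * np.pi) * dphi
```

This only resolves motion within half a wavelength between acquisitions; the fast refresh rate described above is what keeps successive phase differences small enough to remain unambiguous.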

Keywords: Interferometry, MIMO RADAR, SAR, tomography.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 915
12695 The Effectiveness of Lesson Study via Learning Communities in Increasing Instructional Self-Efficacy of Beginning Special Educators

Authors: David D. Hampton

Abstract:

Lesson study is used as an instructional technique to promote both student and faculty learning. However, little is known about the usefulness of learning communities in supporting the effects of lesson study on the self-efficacy and development of tenure-track faculty. This study investigated the impact of participating in a lesson study learning community on 34 new faculty members at a mid-size Midwestern university, specifically the effect of new faculty implementing lesson study evaluations on their reported self-efficacy. Results indicate that participation in a lesson study learning community significantly increased faculty members’ lesson study self-efficacy as well as grant and manuscript production over one academic year. Suggestions for future lesson study within faculty learning communities are discussed.

Keywords: Lesson study, learning community, lesson study self-efficacy, new faculty.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 400
12694 Serological IgG Testing to Diagnose Alimentary Induced Diseases and Monitoring Efficacy of an Individual Defined Diet in Dogs

Authors: Anne-Margré C. Vink

Abstract:

Background. Food-related allergies and intolerances occur frequently in dogs. Diagnosis and monitoring of elimination efficiency according to the 'gold standard' are, however, time-consuming, expensive, and require an expert clinical setting. To facilitate rapid, robust and quantitative testing of intolerance and to determine the individual offending foods, a serological test is indicated for alimentary induced diseases and their manifestations. Method. Having previously developed the Medisynx IgG Human Screening Test ELISA, and since the dog's immune system closely resembles the human one, we were able to develop the Medisynx IgG Dog Screening Test ELISA as well. In this randomized, double-blind, split-sample, retrospective study, 47 dogs suffering from Canine Atopic Dermatitis (CAD) and several secondary induced reactions were included in the serological Medisynx IgG Dog Screening Test ELISA (within <0.02% SD). Results were expressed as titers relative to the standard OD readings to diagnose alimentary induced diseases and to monitor the efficacy of an individual elimination diet in dogs. Split-sample analysis was performed by independently sending two 3 ml serum samples under two unique codes. Results. The veterinarian monitored the dogs at least at 3, 7, 21, 49 and 70 days and after periods of 6 and 12 months on an individual negative diet, with a positive challenge (retrospectively) at 6 months. Data for each dog were recorded in a screening form; a complete recovery of all clinical manifestations was observed within 70 days (between 50 and 70 days) in the majority of dogs (44 out of 47 dogs = 93.6%). Conclusion. The challenge results showed 100% specificity and a 100% positive predictive value, while sensitivity and negative predictive value were both 95.7%. In conclusion, an individual diet based on the IgG ELISA in dogs provides a significant improvement in atopic dermatitis and pruritus, including other non-specific allergic skin reactions such as erythema, itching, and biting and gnawing at the toes, as well as in several secondary manifestations such as chronic diarrhoea, chronic constipation, otitis media, obesity, laziness or inactive behaviour, pain and muscular stiffness causing movement disorders, excessive lacrimation, hyperactive or nervous behaviour, inability to stay alone at home, anxiety, biting and aggressive behaviour, and disobedience. Furthermore, we conclude that a relatively more severe systemic candidiasis, as shown by a relatively higher titer (class 3 and 4 IgG reactions to Candida albicans), influences the duration of recovery from clinical manifestations in affected dogs. These findings are consistent with our preliminary human clinical studies.
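
For readers less familiar with these figures, the short sketch below shows how sensitivity, specificity and the predictive values follow from a 2x2 confusion matrix; the counts are hypothetical and are only chosen to reproduce the reported percentages approximately.

```python
# Illustrative only: hypothetical counts consistent with ~95.7% sensitivity/NPV
# and 100% specificity/PPV, as reported in the abstract.
def diagnostic_metrics(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # true positives among all diseased
    specificity = tn / (tn + fp)   # true negatives among all non-diseased
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Example: 22 TP, 0 FP, 1 FN, 22 TN gives sensitivity ~95.7%, specificity 100%,
# PPV 100% and NPV ~95.7%.
print(diagnostic_metrics(22, 0, 1, 22))
```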

Keywords: Allergy, canine atopic dermatitis (CAD), food allergens, IgG-ELISA, food-incompatibility.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2919
12693 The Relation of College Students' Process of Study and Creativity: The Mediating Effect of Creative Self-Efficacy

Authors: Chih-Feng Chuang, Shih-Ching Shiu, Chao-Jen Cheng

Abstract:

The purpose of this study was to investigate the relationships among students' process of study, creative self-efficacy and creativity while attending college. A total of 60 students enrolled in the Hsiuping Institute of Technology in central Taiwan were selected as the sample for the study. The instruments for this study included three questionnaires exploring the aforesaid aspects. The researchers tested process of study, creative self-efficacy and creativity with Pearson correlation and hierarchical regression analyses. The major findings of this research are that (1) the process of study directly and positively predicted creativity, and (2) the relationship between process of study and creativity is partially mediated by creative self-efficacy.
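
As an illustration of the partial-mediation logic reported here, the sketch below runs the classic hierarchical-regression comparison on simulated data: the coefficient of process of study on creativity shrinks, but does not vanish, once creative self-efficacy enters the model. The data, effect sizes and library choice (statsmodels) are assumptions for demonstration only, not the study's dataset or analysis code.

```python
# Partial-mediation check via hierarchical regression on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 60                                    # sample size matching the study
process = rng.normal(size=n)              # process of study (standardised)
self_efficacy = 0.5 * process + rng.normal(scale=0.8, size=n)
creativity = 0.3 * process + 0.4 * self_efficacy + rng.normal(scale=0.8, size=n)

# Step 1: total effect of process of study on creativity.
m_total = sm.OLS(creativity, sm.add_constant(process)).fit()

# Step 2: direct effect after adding the mediator (creative self-efficacy).
X = sm.add_constant(np.column_stack([process, self_efficacy]))
m_direct = sm.OLS(creativity, X).fit()

print("total effect of process of study:", m_total.params[1])
print("direct effect with mediator included:", m_direct.params[1])
```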

Keywords: Process of study, Creative self-efficacy, Creativity

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1740
12692 Characterization of an Acetobacter Strain Isolated from Iranian Peach that Tolerates High Temperatures and Ethanol Concentrations

Authors: K. Beheshti Maal, R. Shafiee

Abstract:

Vinegar is a valuable food additive and complement as well as an effective preservative against food spoilage. Recently, traditional vinegar production has been improved using various natural substrates and fruits such as grape, palm, cherry, coconut, date, sugarcane, rice and balsam. These neoclassical fermentations have resulted in several vinegar types with different tastes, fragrances and nutritional values, because various acetic acid bacteria are applied as starters. Acetic acid bacteria include the genera Acetobacter, Gluconacetobacter and Gluconobacter according to the latest edition of Bergey's Manual of Systematic Bacteriology, which classifies the genera on the basis of their 16S rRNA differences. Acetobacter spp., the main vinegar starters, belong to the family Acetobacteraceae; they are Gram-negative, obligately aerobic, chemoorganotrophic, oxidase-negative bacilli that oxidize ethanol to acetic acid. In this research we isolated and identified a native Acetobacter strain with high acetic acid productivity and tolerance of high ethanol concentrations from Iranian peach, a delicious summer fruit that is very susceptible to spoilage and decay. We used selective and specific laboratory culture media such as standard GYC, Frateur and Carr media, as well as a new industrial culture medium and a miniature fermentor with a new aeration system developed by Pars Yeema Biotechnologists Co., Isfahan Science and Technology Town (ISTT), Isfahan, Iran. The isolated strain was successfully cultivated in modified Carr media with 2.5% and 5% ethanol at high temperatures, 34–40 °C, after a 96-hour incubation period. We showed that increasing the ethanol concentration increased the strain's sensitivity to high temperature. In conclusion, we isolated and characterized a new Acetobacter strain from Iranian peach that could be considered a potential strain for the production of a new vinegar type, peach vinegar, with a delicious taste and advantageous nutritional value in food biotechnology and industrial microbiology.

Keywords: Acetobacter, Acetic Acid Bacteria, Vinegar, Peach, Food Biotechnology, Industrial Microbiology, Fermentation

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2883
12691 Detecting Tomato Flowers in Greenhouses Using Computer Vision

Authors: Dor Oppenheim, Yael Edan, Guy Shani

Abstract:

This paper presents an image analysis algorithm to detect and count yellow tomato flowers in a greenhouse with uneven illumination, complex growth conditions and different flower sizes. The algorithm is designed to be employed on a drone that flies through greenhouses to accomplish several tasks, such as pollination and yield estimation. Detecting the flowers can provide useful information for the farmer, such as the number of flowers in a row and the number of flowers that were pollinated since the last visit to the row. The developed algorithm is designed to handle the real-world difficulties of a greenhouse, which include varying lighting conditions, shadowing and occlusion, while considering the computational limitations of the simple processor on the drone. The algorithm identifies flowers using an adaptive global threshold, segmentation over the HSV color space, and morphological cues. The adaptive threshold divides the images into darker and lighter images; segmentation on hue, saturation and value is then performed accordingly, and classification is done according to the size and location of the flowers. 1069 images of greenhouse tomato flowers were acquired in a commercial greenhouse in Israel using two different RGB cameras, an LG G4 smartphone and a Canon PowerShot A590. The images were acquired from multiple angles and distances and were sampled manually at various times of day to obtain varying lighting conditions. Ground truth was created by manually tagging approximately 25,000 individual flowers in the images. Sensitivity analyses on the acquisition angle of the images, periods throughout the day, different cameras and thresholding types were performed. Precision, recall and their derived F1 score were calculated. Results indicate better performance for the view angle facing the flowers than for any other angle. Acquiring images in the afternoon gave the best precision and recall. Applying a global adaptive threshold improved the median F1 score by 3%. Results showed no difference between the two cameras used. Using hue values of 0.12-0.18 in the segmentation process provided the best precision, recall and F1 score. The average precision and recall over all images when using these values were 74% and 75%, respectively, with an F1 score of 0.73. Further analysis showed a 5% increase in precision and recall when analyzing images acquired in the afternoon and from the front viewpoint.
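
A minimal OpenCV sketch of the segmentation and size-filtering stages is given below. The hue window corresponds roughly to the reported 0.12-0.18 range rescaled to OpenCV's 0-179 hue axis; the saturation/value floors, the morphological kernel and the area limits are illustrative assumptions, and the adaptive global threshold that splits darker and lighter images is omitted.

```python
# Sketch (assumed parameters) of HSV segmentation plus morphological and
# size-based filtering for counting yellow flowers in an image.
import cv2
import numpy as np

def count_flowers(bgr_image, min_area=80, max_area=5000):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)

    # Hue window for yellow flowers (~0.12-0.18 of the hue circle -> ~22-32
    # on OpenCV's 0-179 scale); S and V floors reject dull background pixels.
    mask = cv2.inRange(hsv, (22, 80, 80), (32, 255, 255))

    # Morphological opening/closing to remove speckle and bridge small gaps.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Keep connected components whose area is plausible for a single flower.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    flowers = [i for i in range(1, n)
               if min_area <= stats[i, cv2.CC_STAT_AREA] <= max_area]
    return len(flowers)
```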

Keywords: Agricultural engineering, computer vision, image processing, flower detection.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2370