Search results for: motor parameter estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4665

645 The Classification Performance in Parametric and Nonparametric Discriminant Analysis for a Class-Unbalanced Data of Diabetes Risk Groups

Authors: Lily Ingsrisawang, Tasanee Nacharoen

Abstract:

Introduction: Problems of unbalanced data sets commonly arise in real-world applications. Because of the unequal class distribution, many studies have found that the performance of existing classifiers tends to be biased towards the majority class. The k-nearest neighbors' nonparametric discriminant analysis is one method that has been proposed for classifying unbalanced classes with good performance. Hence, the methods of discriminant analysis are of interest to us in investigating misclassification error rates for class-imbalanced data of three diabetes risk groups. Objective: The purpose of this study was to compare the classification performance of parametric and nonparametric discriminant analysis in a three-class classification application to class-imbalanced data of diabetes risk groups. Methods: Data from a health project for 599 staff members in a government hospital in Bangkok were obtained for the classification problem. The staff were diagnosed into one of three diabetes risk groups: non-risk (90%), risk (5%), and diabetic (5%). The original data, with the variables diabetes risk group, age, gender, cholesterol, and BMI, were analyzed and bootstrapped up to 50 and 100 samples of 599 observations each for additional estimation of the misclassification error rate. Each data set was examined for departure from multivariate normality and for equality of the covariance matrices of the three risk groups. Both the original data and the bootstrap samples showed non-normality and unequal covariance matrices. The parametric linear discriminant function, the quadratic discriminant function, and the nonparametric k-nearest neighbors' discriminant function were fitted over the 50 and 100 bootstrap samples and applied to the original data.
In finding the optimal classification rule, the prior probabilities were set either to equal proportions (0.33:0.33:0.33) or to one of three unequal choices: (0.90:0.05:0.05), (0.80:0.10:0.10), or (0.70:0.15:0.15). Results: The results from the 50 and 100 bootstrap samples indicated that the k-nearest neighbors approach with k = 3 or k = 4 and prior probabilities of {non-risk:risk:diabetic} set to {0.90:0.05:0.05} or {0.80:0.10:0.10} gave the smallest misclassification error rate. Conclusion: The k-nearest neighbors approach is therefore suggested for classifying three-class imbalanced data of diabetes risk groups.
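The abstract does not reproduce the authors' estimator, so the following is only a minimal sketch of one standard prior-weighted k-nearest-neighbors discriminant rule of the kind described (variable names and priors are illustrative):

```python
import numpy as np

def knn_discriminant(x, X, y, k, priors):
    """Classify x with a prior-weighted k-nearest-neighbors rule.

    The class-conditional density of class g at x is estimated as
    k_g / n_g (k_g = class-g points among the k nearest neighbors,
    n_g = class-g points in the training set); x is assigned to the
    class maximizing prior * estimated density."""
    dist = np.linalg.norm(X - x, axis=1)
    neighbors = y[np.argsort(dist)[:k]]
    scores = {g: p * np.sum(neighbors == g) / np.sum(y == g)
              for g, p in priors.items()}
    return max(scores, key=scores.get)
```

With unequal priors such as {0.90, 0.05, 0.05}, the rule penalizes assigning observations to the minority classes unless their local neighborhood strongly supports it, which is the behavior the study tunes via the prior choices.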

Keywords: error rate, bootstrap, diabetes risk groups, k-nearest neighbors

Procedia PDF Downloads 436
644 Initial Palaeotsunami and Historical Tsunami in the Makran Subduction Zone of the Northwest Indian Ocean

Authors: Mohammad Mokhtari, Mehdi Masoodi, Parvaneh Faridi

Abstract:

The history of tsunami-generating earthquakes along the Makran Subduction Zone (MSZ) provides evidence of the potential tsunami hazard for the whole coastal area. In comparison with other subduction zones in the world, the Makran region of southern Pakistan and southeastern Iran exhibits low seismicity. It is also one of the least studied areas of the northwest Indian Ocean with regard to tsunami research. We present a review of studies of historical and ongoing palaeotsunami work, supported by the IGCP of UNESCO, in the Makran Subduction Zone. The historical record presented here includes about nine tsunamis in the MSZ, of which more than seven occurred in the eastern Makran. Tsunamis are not as common in the western Makran as in the eastern Makran, where a database of historical events exists. The best-documented historical event is the 1945 earthquake, with a moment magnitude of 8.1, and its tsunami in the western and eastern Makran. There are no details as to whether a tsunami was generated by a seismic event off western Makran before 1945, but several potentially large tsunamigenic events occurred in the MSZ in 325 B.C., 1008, 1483, 1524, 1765, 1851, 1864, and 1897. Here we present new findings from a historical point of view, and we emphasize that the area needs further research attention. A palaeotsunami (geological evidence) study is now being planned, and here we present the results of its first phase. From a risk point of view, preliminary results show that within 20 minutes highly destructive tsunami waves capable of inundation reach the Iranian, Pakistani, and Omani coastal zones. It is important to note that the coastal areas of all states surrounding the MSZ are being developed very rapidly, so any event would have a devastating effect on this region.
Although several papers on modelling, seismology, and tsunami deposits have been published in recent decades, the Makran remains a forgotten subduction zone, and more data, such as the main crustal structure, fault locations, and their related parameters, are required.

Keywords: historical tsunami, Indian ocean, makran subduction zone, palaeotsunami

Procedia PDF Downloads 132
643 Low Field Microwave Absorption and Magnetic Anisotropy in TM Co-Doped ZnO System

Authors: J. Das, T. S. Mahule, V. V. Srinivasu

Abstract:

An electron spin resonance (ESR) study at 9.45 GHz with a field modulation frequency of 100 Hz was performed on bulk polycrystalline samples of Mn:TM (Fe/Ni) and Mn:RE (Gd/Sm) co-doped ZnO with composition Zn1-x(Mn:TM/RE)xO, synthesised by the solid-state reaction route and sintered at 500 °C. The room-temperature microwave absorption data, collected by sweeping the DC magnetic field from -500 to 9500 G for the Mn:Fe and Mn:Ni co-doped ZnO samples, exhibit a rarely reported non-resonant low-field absorption (NRLFA) in addition to a strong absorption at around 3350 G, usually associated with ferromagnetic resonance (FMR) satisfying Larmor's relation due to absorption in the fully saturated state. The observed low-field absorption is distinct from ferromagnetic resonance even at low temperature and shows hysteresis. Interestingly, it is in opposite phase with respect to the main ESR signal of the samples, indicating that the low-field absorption has a minimum at zero magnetic field whereas the ESR signal has a maximum. Both the major resonance peak and the low-field absorption peak are asymmetric, indicating magnetic anisotropy in the sample, normally associated with intrinsic ferromagnetism. The anisotropy parameter for the Mn:Ni co-doped ZnO sample is notably higher. The g values also support the presence of oxygen vacancies and clusters in the samples. These samples have shown room-temperature ferromagnetism in SQUID measurements. However, in the rare-earth (RE) co-doped samples (Zn1-x(Mn:Gd/Sm)xO), which show paramagnetic behavior at room temperature, the low-field microwave signals are not observed.
As microwave currents due to itinerant electrons can lead to ohmic losses inside the sample, we speculate that the more delocalized 3d electrons contributed by the TM dopants facilitate such microwave currents, leading to loss and hence absorption at low field; this is also supported by the increase in current with increased microwave power. Besides, since Fe and Ni have intrinsic spin polarization of around 45%, doping with Fe and Ni is expected to enhance spin-polarization-related effects in ZnO. We emphasize that in this case Fe and Ni doping contribute a polarized current which interacts with the magnetization (spin) vector and is scattered, giving rise to the absorption loss.

Keywords: co-doping, electron spin resonance, hysteresis, non-resonant microwave absorption

Procedia PDF Downloads 314
642 Monte Carlo Simulation of Thyroid Phantom Imaging Using Geant4-GATE

Authors: Parimalah Velo, Ahmad Zakaria

Abstract:

Introduction: Monte Carlo simulations of preclinical imaging systems open opportunities for new research ranging from hardware design to the discovery of new imaging applications. A simulation system that accurately models an imaging modality provides a platform for imaging developments that might be impractical in physical experiments because of expense, unnecessary radiation exposure, or technological difficulties. The aim of the present study is to validate a Monte Carlo simulation of thyroid phantom imaging using Geant4-GATE for a Siemens e-cam single-head gamma camera. After validating the gamma camera simulation model by comparing physical characteristics such as energy resolution, spatial resolution, sensitivity, and dead time, the GATE simulation of thyroid phantom imaging was carried out. Methods: A thyroid phantom was defined geometrically, comprising two lobes of 80 mm diameter, one hot spot, and three cold spots; this geometry closely resembles the actual dimensions of the thyroid phantom. A planar image of 500k counts with a 128x128 matrix size was acquired with the simulation model and in the actual experimental setup. After image acquisition, quantitative image analysis was performed by investigating the total number of counts in the image, the image contrast, the radioactivity distribution in the image, and the dimensions of the hot spot. The algorithm for each quantification is described in detail. The differences between estimated and actual values were analyzed for the radioactivity distribution and the hot-spot dimensions in both the simulation and the experimental setup. Results: The difference between the contrast levels of the simulated and experimental images is within 2%. The difference in total counts between the simulation and the actual study is 0.4%. The activity estimation shows that the relative difference between estimated and actual activity is 4.62% for the experiment and 3.03% for the simulation.
The deviation in the estimated diameter of the hot spot is the same for the simulation and the experimental study, namely 0.5 pixel. In conclusion, the comparisons show good agreement between the simulation and the experimental data.
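The abstract reports percentage differences and contrast without giving the formulas; one common convention (an assumption, not necessarily the authors' exact algorithm) can be sketched as:

```python
def relative_difference(estimated, actual):
    """Percentage relative difference: |estimated - actual| / actual * 100."""
    return 100.0 * abs(estimated - actual) / actual

def contrast(roi_mean, background_mean):
    """Simple contrast of a region of interest relative to background counts."""
    return (roi_mean - background_mean) / background_mean
```

For example, an estimated activity of 104.62 units against a true 100 units gives a relative difference of 4.62%, matching the magnitude of the figures quoted for the experimental setup.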

Keywords: gamma camera, Geant4 application of tomographic emission (GATE), Monte Carlo, thyroid imaging

Procedia PDF Downloads 271
641 The Use of a Novel Visual Kinetic Demonstration Technique in Student Skill Acquisition of the Sellick Cricoid Force Manoeuvre

Authors: L. Nathaniel-Wurie

Abstract:

The Sellick manoeuvre, also known as the application of cricoid force (CF), was first described by Brian Sellick in 1961. CF is the application of digital pressure to the cricoid cartilage so that the posterior force compresses the oesophagus against the vertebrae. This is designed to prevent passive regurgitation of gastric contents, a major cause of morbidity and mortality during emergency airway management inside and outside the hospital. To the author's knowledge, there is no universally standardised training modality and, therefore, no reliable way to examine whether outcomes are appropriate. If force is not measured during training, how can one surmise that appropriate, accurate, or precise amounts of force are being used routinely? Poor homogeneity in teaching and untested outcomes will correlate with reduced efficacy and increased adverse effects. For this study, the accuracy of force delivery by trained professionals was tested, and the outcomes were contrasted with a novice control group and a novice study group. Twenty operating department practitioners were tested (with a mean of 5.3 years of experience performing CF) and contrasted with 40 novice students randomised into one of two arms. Arm A had the procedure explained, were shown the procedure, and were then asked to perform CF, with the corresponding force measured three times. Arm B followed the same process as Arm A but, before being tested, had 10 N and 30 N applied to their hands to build an intuitive understanding of the required force; they were then asked to apply the equivalent force against a visible force meter and to hold it for 20 seconds, allowing direct visualisation and correction of any over- or underestimation. Following this, Arm B were asked to perform the manoeuvre, and the force generated was measured three times.
This study shows a wide distribution of the force produced by trained professionals and by novices performing the procedure for the first time. Our methodology for teaching the manoeuvre yields improved accuracy, precision, and homogeneity within the group compared to novices, and even outperforms trained practitioners. In conclusion, if this methodology is adopted, it may lead to better clinical outcomes, fewer adverse events, and more successful airway management in critical medical scenarios.

Keywords: airway, cricoid, medical education, sellick

Procedia PDF Downloads 80
640 A Selective and Fast Hydrogen Sensor Using Doped-LaCrO₃ as Sensing Electrode

Authors: He Zhang, Jianxin Yi

Abstract:

As a clean energy carrier, hydrogen shows many advantages, such as renewability, a high heating value, and extensive sources, and may play an important role in future society. However, hydrogen is a combustible gas with a low ignition energy (0.02 mJ) and wide explosive limits (4%-74% in air). It is very likely to cause a fire or explosion if a leak occurs and is not detected in time. Mixed-potential type sensors have attracted much attention for monitoring and detecting hydrogen because of their high response, simple supporting electronics, and long-term stability. Typically, this kind of sensor consists of a sensing electrode (SE), a reference electrode (RE), and a solid electrolyte. The SE and RE materials usually display different electrocatalytic abilities toward hydrogen, so hydrogen can be detected by measuring the EMF change between the two electrodes. Previous reports indicate that a high-performance sensing electrode is important for improving the sensing characteristics. In this report, a planar mixed-potential hydrogen sensor using La₀.₈Sr₀.₂Cr₀.₅Mn₀.₅O₃₋δ (LSCM) as the SE, Pt as the RE, and yttria-stabilized zirconia (YSZ) as the solid electrolyte was developed. LSCM was selected as the sensing electrode because it shows high electrocatalytic ability toward hydrogen in solid oxide fuel cells. The sensing performance of the fabricated LSCM/YSZ/Pt sensor was tested systematically. The experimental results show that the sensor responds strongly to hydrogen: the response values for 100 ppm and 1000 ppm hydrogen at 450 °C are -70 mV and -118 mV, respectively. Response time is an important parameter for evaluating a sensor; here, the response time decreases with increasing hydrogen concentration and saturates above 500 ppm. The steady response time at 450 °C is as short as 4 s, indicating that the sensor shows great potential for practical hydrogen monitoring.
Excellent response repeatability to 100 ppm hydrogen at 450 °C and good reproducibility among three sensors were also observed. Meanwhile, the sensor exhibits excellent selectivity to hydrogen over several interfering gases, such as NO₂, CH₄, CO, C₃H₈, and NH₃. Polarization curves were measured to investigate the sensing mechanism, and the results indicate that the sensor obeys the mixed-potential mechanism.
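Mixed-potential sensors typically exhibit an EMF that varies linearly with the logarithm of gas concentration, and the slope of that line is the usual sensitivity figure of merit. A small sketch (the log-linear assumption is standard for this sensor class but is not stated explicitly in the abstract) using the two reported data points:

```python
import math

def sensitivity_mV_per_decade(c1_ppm, emf1_mV, c2_ppm, emf2_mV):
    """Slope of EMF versus log10(concentration) between two operating points:
    the conventional sensitivity of a mixed-potential gas sensor."""
    return (emf2_mV - emf1_mV) / (math.log10(c2_ppm) - math.log10(c1_ppm))

# Reported data at 450 °C: -70 mV at 100 ppm and -118 mV at 1000 ppm
slope = sensitivity_mV_per_decade(100, -70.0, 1000, -118.0)  # -48 mV per decade
```

The -48 mV-per-decade slope derived this way would let the sensor's EMF reading be inverted to an approximate hydrogen concentration within the calibrated range.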

Keywords: fire hazard, H₂ sensor, mixed-potential, perovskite

Procedia PDF Downloads 186
639 Existential Affordances and Psychopathology: A Gibsonian Analysis of Dissociative Identity Disorder

Authors: S. Alina Wang

Abstract:

A Gibsonian approach is used to understand the existential dimensions of the human ecological niche. This existential-Gibsonian framework is then applied to rethinking Hacking's historical analysis of multiple personality disorder. The research culminates in a generalized account of psychiatric illness from an enactivist lens and concludes with reflections on the implications of this account for psychiatric treatment. J.J. Gibson's theory of affordances (1979) centered on affordances of the sensorimotor variety, which guide basic behaviors relative to organisms' vital needs and physiological capacities. Later theorists, notably Neisser (1988) and Rietveld (2014), expanded the theory of affordances to account for uniquely human activities relative to the emotional, intersubjective, cultural, and narrative aspects of the human ecological niche. This research shows that these affordances are structured by what Haugeland (1998) calls existential commitments, drawing on Heidegger's notion of Dasein (1927) and Merleau-Ponty's account of existential freedom (1945). These commitments organize the existential affordances that fill an individual's environment and guide their thoughts, emotions, and behaviors. This system of a priori existential commitments and a posteriori affordances is called existential enactivism. For humans, affordances do not only elicit motor responses and appear as objects with instrumental significance; they also, and possibly primarily, determine so-called affective and cognitive activities and structure the wide range of kinds of significance (e.g., instrumental, aesthetic, ethical) that objects in the world can have. Existential enactivism is then applied to understanding the psychiatric phenomenon of multiple personality disorder (the precursor of the current diagnosis of dissociative identity disorder).
A reinterpretation of Hacking's (1998) insights into the history of this particular disorder, and of his generalizations about the constructed nature of most psychiatric illness, is undertaken. Enactivist approaches sensitive to existential phenomenology can provide a deeper understanding of these matters. Conceptualizing psychiatric illness strictly as a disorder in the head (whether parsed as a disorder of brain chemicals or of meaning-making capacities encoded in psychological modules) is incomplete. Rather, psychiatric illness must also be understood as a disorder in the world, or in the interconnected networks of existential affordances that regulate one's emotional, intersubjective, and narrative capacities. All of this suggests that an adequate account of psychiatric illness must involve (1) the affordances that are the sources of existential hindrance, (2) the existential commitments structuring these affordances, and (3) the conditions of these existential commitments. Approaches to the treatment of psychiatric illness would be more effective if they centered on interrupting normalized behaviors corresponding to affordances identified as sources of hindrance, developing new existential commitments, and practicing new behaviors that erect affordances relative to these reformed commitments.

Keywords: affordance, enaction, phenomenology, psychiatry, psychopathology

Procedia PDF Downloads 138
638 Toxic Chemicals from Industries into Pacific Biota. Investigation of Polychlorinated Biphenyls (PCBs), Dioxins (PCDD), Furans (PCDF) and Polybrominated Diphenyls (PBDE No. 47) in Tuna and Shellfish in Kiribati, Solomon Islands and the Fiji Islands

Authors: Waisea Votadroka, Bert Van Bavel

Abstract:

Commonly consumed marine food species produced in the Pacific, shellfish and tuna, were investigated for the occurrence of a range of brominated and chlorinated contaminants in order to establish current levels. Polychlorinated biphenyls (PCBs), polybrominated diphenyl ethers (PBDEs), and polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) were analysed in the muscle of the tuna species Katsuwonus pelamis, yellowfin tuna, and shellfish species from the Fiji Islands. The investigation of PCBs, furans (PCDFs), and polybrominated diphenyl ethers (PBDE No. 47) in tuna and shellfish in Kiribati, the Solomon Islands, and Fiji is necessary because of the lack of research data in the Pacific region. The health risks involved in the consumption of marine foods contaminated with toxic organochlorinated and brominated compounds make the analysis of these compounds in marine foods important, particularly when Pacific communities rely on these resources as their main diet. The samples were homogenized in a mortar with anhydrous sodium sulphate in the ratio of 1:3 (muscle) and 1:4-1:5 (roe and butter). The tuna and shellfish samples were homogenized and freeze-dried at the sampling location at the Institute of Applied Science, Fiji. All samples were stored in amber glass jars at -18 °C until extraction at Örebro University. PCDD/Fs, PCBs, and pesticides were all analysed using an AutoSpec Ultima HRGC/HRMS operating at a resolution of 10,000 with EI ionization at 35 eV. All measurements were performed in selective ion recording (SIR) mode, monitoring the two most abundant ions of the molecular cluster (PCDD/Fs and PCBs). Results indicated the following ranges: Fiji composite sample of Batissa violacea, 0.7-238.6 pg/g lipid; Fiji composite sample of Anadara antiquata, 1.6-808.6 pg/g lipid; Solomon Islands Katsuwonus pelamis, 7.5-3770.7 pg/g lipid; Solomon Islands yellowfin tuna, 2.1-778.4 pg/g lipid; Kiribati Katsuwonus pelamis, 4.8-1410 pg/g lipid.
The study has demonstrated that these species are good bio-indicators of the presence of these toxic organic pollutants in edible marine foods. Our results suggest that among the pesticides, p,p'-DDE is the most dominant in all groups and is highest, at 565.48 pg/g lipid, in the composite Batissa violacea from Fiji. For PBDE No. 47, comparing all samples, the composite Batissa violacea from Fiji had the highest level, 118.20 pg/g lipid. Based on this study, the contamination levels found in the study species were considerably lower than levels reported in impacted ecosystems around the world.

Keywords: polychlorinated biphenyls, polybrominated diphenyl ethers, pesticides, organochlorinated pesticides, PBDEs

Procedia PDF Downloads 383
637 An Analytical Formulation of Pure Shear Boundary Condition for Assessing the Response of Some Typical Sites in Mumbai

Authors: Raj Banerjee, Aniruddha Sengupta

Abstract:

An earthquake, associated with a typical fault rupture, initiates at the source, propagates through rock or soil, and finally daylights at a surface that might be a populous city. The detrimental effects of an earthquake are often quantified in terms of the responses of superstructures resting on the soil. Hence, there is a need to estimate the amplification of bedrock motions due to local site conditions. In the present study, field borehole log data of the Mangalwadi and Walkeswar sites in Mumbai are considered. The data consist of the variation of SPT N-value with soil depth. A correlation between shear wave velocity (Vₛ) and SPT N-value for various soil profiles of Mumbai has been developed from existing correlations and is used for the site response analysis. A MATLAB program was developed to perform two-dimensional linear and equivalent-linear ground response analyses for some typical Mumbai soil sites using a pure shear (multi-point constraint) boundary condition. The model is validated in the linear elastic and equivalent-linear domains against the popular commercial program DEEPSOIL. Three actual earthquake motions were selected based on their frequency contents and durations and scaled to a PGA of 0.16g for the present analyses. The results are presented in terms of peak acceleration time history with depth, peak shear strain time history with depth, Fourier amplitude versus frequency, response spectrum at the surface, etc. The peak ground acceleration amplification factors are found to be about 2.374, 3.239, and 2.4245 for the Mangalwadi site, and 3.42, 3.39, and 3.83 for the Walkeswar site, using the 1979 Imperial Valley, 1989 Loma Prieta (Gilroy), and 1987 Whittier Narrows earthquake records, respectively.
In the absence of any site-specific response spectrum for the chosen sites in Mumbai, the spectra generated at the surface may be utilized for the design of superstructures at these locations.
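The amplification factors quoted above follow the usual definition, the peak of the computed surface motion divided by the peak of the input motion. A minimal sketch (the time histories below are hypothetical, not the study's records):

```python
import numpy as np

def pga_amplification(surface_acc, bedrock_acc):
    """PGA amplification factor: peak absolute surface acceleration
    divided by peak absolute input (bedrock) acceleration."""
    return float(np.max(np.abs(surface_acc)) / np.max(np.abs(bedrock_acc)))

# Hypothetical acceleration samples (g): input scaled to a 0.16g PGA,
# surface motion amplified by the soil column
bedrock = np.array([0.05, -0.16, 0.10, -0.08])
surface = np.array([0.12, -0.38, 0.25, -0.20])
factor = pga_amplification(surface, bedrock)  # 0.38 / 0.16 = 2.375
```

A factor of about 2.4, as in this toy example, is of the same order as the values reported for the Mangalwadi site.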

Keywords: deepsoil, ground response analysis, multi point constraint, response spectrum

Procedia PDF Downloads 180
636 Convergence Results of Two-Dimensional Homogeneous Elastic Plates from Truncation of Potential Energy

Authors: Erick Pruchnicki, Nikhil Padhye

Abstract:

Plates are important engineering structures that have attracted extensive research since the 19th century. The subject of this work is the static analysis of a linearly elastic homogeneous plate under small deformations. A 'thin plate' is a three-dimensional structure comprising a small transverse dimension with respect to a flat mid-surface. The general aim of any plate theory is to deduce a two-dimensional model, in terms of mid-surface quantities, that approximately and accurately describes the plate's deformation. In recent decades, a common starting point has been a series expansion of the displacement field across the thickness dimension in terms of the thickness parameter (h). Such attempts are mathematically consistent in deriving leading-order plate theories based on certain a priori scalings between the thickness and the applied loads; for example, asymptotic methods aim at generating leading-order two-dimensional variational problems by postulating a formal asymptotic expansion of the displacement fields. These methods rigorously generate a hierarchy of two-dimensional models depending on the order of magnitude of the applied load with respect to the plate thickness. However, in practice, applied loads are external and thus not directly linked to the geometry or thickness of the plate, rendering any such model (based on a priori scaling) of limited practical utility. In other words, the main limitation of these approaches is that they do not furnish a single plate model for all orders of applied loads. Following the analogy of recent efforts deploying Fourier-series expansions to study the convergence of reduced models, we propose two-dimensional models resulting from truncation of the potential energy and rigorously prove the convergence of these two-dimensional plate models to the parent three-dimensional linear elasticity with increasing truncation order of the potential energy.
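The abstract does not reproduce the expansion itself; a common form of such a thickness-wise expansion, sketched here with the Legendre polynomials $P_k$ named in the keywords (the precise basis and normalization used by the authors are not stated), is:

```latex
% Displacement field of a plate of half-thickness h, with in-plane
% coordinates (x_1, x_2) and transverse coordinate x_3 in [-h, h]:
\[
  u(x_1, x_2, x_3) \;\approx\; \sum_{k=0}^{N} u^{(k)}(x_1, x_2)\,
  P_k\!\left(\frac{x_3}{h}\right)
\]
% Substituting this ansatz into the three-dimensional potential energy
% and truncating at order N yields a two-dimensional variational problem
% in the mid-surface fields u^{(k)}; the convergence result concerns
% the limit N -> infinity.
```

Increasing the truncation order N enlarges the space of admissible through-thickness deformation profiles, which is what allows a single family of models to cover all orders of applied load.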

Keywords: plate theory, Fourier-series expansion, convergence result, Legendre polynomials

Procedia PDF Downloads 113
635 TEA and Its Working Methodology in the Biomass Estimation of Poplar Species

Authors: Pratima Poudel, Austin Himes, Heidi Renninger, Eric McConnel

Abstract:

Populus spp. (poplar) are among the fastest-growing trees in North America, making them ideal for a range of applications, as they can achieve high yields on short rotations and regenerate by coppice. Furthermore, poplar undergoes biochemical conversion to fuels without complexity, making it one of the most promising purpose-grown, woody perennial energy sources. Employing wood-based biomass for bioenergy offers numerous benefits, including reduced greenhouse gas (GHG) emissions compared to non-renewable traditional fuels, the preservation of robust forest ecosystems, and economic prospects for rural communities. In order to better understand the potential use of poplar as a biomass feedstock for biofuel in the southeastern US, we conducted a techno-economic assessment (TEA), an analytical approach that integrates the technical and economic factors of a production system to evaluate its economic viability. The TEA focused on a short-rotation coppice system employing a single-pass cut-and-chip harvesting method for poplar. It encompassed all the costs associated with establishing dedicated poplar plantations, including land rent, site preparation, planting, fertilizers, and herbicides. Additionally, we performed a sensitivity analysis to evaluate how different costs affect the economic performance of the poplar cropping system. This analysis determined the minimum average delivered selling price for one metric ton of biomass necessary to achieve a desired rate of return over the cropping period. To inform the TEA, data on establishment, crop care activities, and crop yields were derived from a field study conducted at the Mississippi Agricultural and Forestry Experiment Station's Bearden Dairy Research Center in Oktibbeha County and the Pontotoc Ridge-Flatwood Branch Experiment Station in Pontotoc County.
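The minimum selling price described above is conventionally the delivered price per ton that makes the project's net present value zero at the target rate of return. A minimal sketch of that calculation (the cash flows and discount rate below are hypothetical, not the study's data):

```python
def minimum_selling_price(yields_t, costs_t, rate):
    """Delivered price per metric ton that makes NPV zero at the target
    rate of return: price = PV(costs) / PV(yields).

    yields_t[t] = tons delivered in year t; costs_t[t] = cash outflow
    in year t; rate = required rate of return per year."""
    pv = lambda series: sum(v / (1.0 + rate) ** t for t, v in enumerate(series))
    return pv(costs_t) / pv(yields_t)

# Hypothetical short-rotation coppice cycle: establishment cost up front,
# maintenance in years 1-2, harvest cost and 40 t of biomass in year 3
price = minimum_selling_price([0, 0, 0, 40.0],
                              [1000.0, 50.0, 50.0, 200.0], 0.06)
```

Sweeping individual cost items (land rent, harvest cost, discount rate) through this function is exactly the kind of sensitivity analysis the abstract describes.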

Keywords: biomass, populus species, sensitivity analysis, technoeconomic analysis

Procedia PDF Downloads 83
634 Dataset Quality Index: Development of Composite Indicator Based on Standard Data Quality Indicators

Authors: Sakda Loetpiparwanich, Preecha Vichitthamaros

Abstract:

Nowadays, poor data quality is considered one of the major costs of a data project. A data project with data quality awareness spends almost as much time on data quality processes, while a data project without data quality awareness suffers negative impacts on financial resources, efficiency, productivity, and credibility. One of the processes that takes a long time is defining the expectations and measurements of data quality, because the expectations differ according to the purpose of each data project. In particular, a big data project may involve many datasets and stakeholders, so quality expectations and measurements take a long time to discuss and agree on. Therefore, this study aimed at developing meaningful indicators that describe the overall data quality of each dataset for quick comparison and prioritization. The objectives of this study were to: (1) develop practical data quality indicators and measurements, (2) develop data quality dimensions based on statistical characteristics, and (3) develop a composite indicator that can describe the overall data quality of each dataset. The sample consisted of more than 500 datasets from public sources obtained by random sampling. After the datasets were collected, five steps were followed to develop the Dataset Quality Index (SDQI). First, we defined standard data quality expectations. Second, we found indicators that can be measured directly on the data within datasets. Third, the indicators were aggregated into dimensions using factor analysis. Next, the indicators and dimensions were weighted by the effort required for the data preparation process and by usability. Finally, the dimensions were aggregated into the composite indicator. The results of these analyses showed that: (1) ten useful indicators and measurements were developed; (2) in developing the data quality dimensions based on statistical characteristics, we found that the ten indicators can be reduced to four dimensions.
(3) The developed composite indicator, the SDQI, can describe the overall quality of each dataset and can separate datasets into three levels: Good Quality, Acceptable Quality, and Poor Quality. In conclusion, the SDQI provides an overall description of data quality within datasets and a meaningful composition. The SDQI can be used to assess all data in a data project, for effort estimation, and for prioritization. It also works well with agile methods: the SDQI can be used for assessment in the first sprint, and after passing the initial evaluation, more specific data quality indicators can be added in the next sprint.
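The final aggregation step, weighted dimensions rolled up into one index and mapped to the three quality levels, can be sketched as follows (the weights and cut-points are illustrative assumptions; the paper does not publish its exact values):

```python
import numpy as np

def composite_index(dimension_scores, weights):
    """Weighted average of dimension scores yielding one overall index."""
    w = np.asarray(weights, dtype=float)
    s = np.asarray(dimension_scores, dtype=float)
    return float(np.dot(s, w) / w.sum())

def quality_level(score, good=0.8, acceptable=0.5):
    """Map the index to the three levels; cut-points here are illustrative."""
    if score >= good:
        return "Good Quality"
    if score >= acceptable:
        return "Acceptable Quality"
    return "Poor Quality"
```

In the paper's pipeline, the weights would come from the effort-and-usability weighting step and the four dimension scores from the factor analysis, rather than being chosen by hand as here.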

Keywords: data quality, dataset quality, data quality management, composite indicator, factor analysis, principal component analysis

Procedia PDF Downloads 142
633 Performance Validation of Model Predictive Control for Electrical Power Converters of a Grid Integrated Oscillating Water Column

Authors: G. Rajapakse, S. Jayasinghe, A. Fleming

Abstract:

This paper aims to experimentally validate the control strategy used for the electrical power converters of a grid-integrated oscillating water column (OWC) wave energy converter (WEC). The particular OWC's unidirectional air turbine-generator delivers its output power in discrete large power pulses; therefore, the system requires power conditioning prior to grid integration. This is achieved by using a back-to-back power converter with an energy storage system. A Li-ion battery energy storage is connected to the dc-link of the back-to-back converter through a bidirectional dc-dc converter. This arrangement decouples the system dynamics and mitigates the mismatch between supply and demand powers. All three electrical power converters in the arrangement are controlled using the finite control set-model predictive control (FCS-MPC) strategy. The rectifier controller regulates the turbine at a set rotational speed to keep the air turbine within a desirable speed range under varying wave conditions. The inverter controller maintains the output power to the grid in adherence to grid codes. The bidirectional dc-dc converter controller holds the dc-link voltage at its reference value. The software modeling of the OWC system and FCS-MPC is carried out in MATLAB/Simulink using actual data and parameters obtained from a prototype unidirectional air-turbine OWC developed at the Australian Maritime College (AMC). The hardware development and experimental validations are being carried out at the AMC electronics laboratory. The FCS-MPC designs for the power converters are separately coded in Code Composer Studio V8 and downloaded into separate Texas Instruments TIVA C Series EK-TM4C123GXL LaunchPad evaluation boards with TM4C123GH6PMI microcontrollers (real-time control processors). Each microcontroller drives a 2 kW 3-phase STEVAL-IHM028V2 evaluation board with an intelligent power module (STGIPS20C60). The power module consists of a 3-phase inverter bridge with 600 V insulated gate bipolar transistors. A Delta standard (ASDA-B2 series) servo drive/motor coupled to a 2 kW permanent magnet synchronous generator serves as the turbine-generator. This lab-scale setup is used to obtain experimental results. The FCS-MPC is validated by comparing these experimental results with MATLAB/Simulink simulation results under similar scenarios. The results show that, under the proposed control scheme, the regulated variables follow their references accurately. This research confirms that FCS-MPC fits well into the power converter control of the OWC-WEC system with Li-ion battery energy storage.
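The core of the FCS-MPC strategy named above can be sketched in a few lines: at each sampling instant the controller predicts the effect of every admissible switch state and applies the one minimizing a tracking cost. The single-phase R-L load model and all numeric parameters below are illustrative assumptions, not the actual converter ratings or cost function of this work.

```python
# Minimal single-phase sketch of finite control set MPC (FCS-MPC) for a
# two-level converter leg driving an R-L load; parameters are hypothetical.

R, L = 0.5, 10e-3          # load resistance (ohm) and inductance (H)
Vdc, Ts = 400.0, 50e-6     # dc-link voltage (V) and sampling period (s)

def predict_current(i_now, v_applied):
    """One-step Euler prediction of the load current: di/dt = (v - R*i)/L."""
    return i_now + Ts * (v_applied - R * i_now) / L

def fcs_mpc_step(i_now, i_ref):
    """Evaluate every admissible switch state and return the one with the
    lowest predicted tracking error (the FCS-MPC cost function)."""
    # A two-level half bridge can only apply +Vdc/2 or -Vdc/2.
    candidates = {+1: +Vdc / 2, -1: -Vdc / 2}
    best = min(candidates,
               key=lambda s: abs(i_ref - predict_current(i_now, candidates[s])))
    return best, predict_current(i_now, candidates[best])

state, i_next = fcs_mpc_step(i_now=0.0, i_ref=5.0)
print(state, round(i_next, 3))
```

A real three-phase implementation enumerates all eight inverter switch states and typically adds switching-frequency and constraint terms to the cost, but the predict-then-minimize structure is the same.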

Keywords: dc-dc bidirectional converter, finite control set-model predictive control, Li-ion battery energy storage, oscillating water column, wave energy converter

Procedia PDF Downloads 114
632 The Effect of Mindfulness-Based Interventions for Individuals with Tourette Syndrome: A Scoping Review

Authors: Ilana Singer, Anastasia Lučić, Julie Leclerc

Abstract:

Introduction: Tics, characterized by repetitive, sudden, non-voluntary motor movements or vocalizations, are the hallmark of chronic tic disorder (CT) and Tourette Syndrome (TS). These neurodevelopmental disorders often coexist with various psychiatric conditions, leading to challenges and reduced quality of life. While medication in conjunction with behavioral interventions, such as Habit Reversal Training (HRT), Exposure and Response Prevention (ERP), and Comprehensive Behavioral Intervention for Tics (CBIT), has shown efficacy, a significant proportion of patients experience persistent tics. Thus, innovative treatment approaches, such as mindfulness-based approaches, are necessary to improve therapeutic outcomes. Nonetheless, the effectiveness of mindfulness-based interventions in the context of CT and TS remains understudied. Objective: The objective of this scoping review is to provide an overview of the current state of research on mindfulness-based interventions for CT and TS, identify knowledge and evidence gaps, compare the effectiveness of mindfulness-based interventions with other treatment options, and discuss implications for clinical practice and policy development. Method: Following the guidelines of Peters (2020) and the PRISMA-ScR, a scoping review was conducted. Multiple electronic databases were searched from inception until June 2023, including MEDLINE, EMBASE, PsycInfo, Global Health, PubMed, Web of Science, and Érudit. Inclusion criteria were applied to select relevant studies, and data extraction was performed independently by two reviewers. Results: Five papers were included in the review. First, mindfulness interventions were found to be effective in reducing anxiety and depression while enhancing overall well-being in individuals with tics. Furthermore, the review highlighted the potential role of mindfulness in enhancing functional connectivity within the Default Mode Network (DMN) as a compensatory function in TS patients. This suggests that mindfulness interventions may complement and support traditional therapeutic approaches, particularly HRT, by positively influencing brain networks associated with tic regulation and control. Conclusion: This scoping review contributes to the understanding of the effectiveness of mindfulness-based interventions in managing CT and TS. By identifying research gaps, it can guide future investigations and interventions to improve outcomes for individuals with CT or TS. Overall, these findings emphasize the potential benefits of incorporating mindfulness-based interventions as one component within comprehensive treatment strategies. However, it is essential to acknowledge the limitations of this scoping review, such as the absence of a pre-established protocol and the limited number of studies available for inclusion. Further research and clinical exploration are necessary to better understand the specific mechanisms and optimal integration of mindfulness-based interventions with existing behavioral interventions for this population.

Keywords: scoping reviews, Tourette Syndrome, tics, mindfulness-based, therapy, intervention

Procedia PDF Downloads 84
631 Evaluation of the Self-Organizing Map and the Adaptive Neuro-Fuzzy Inference System Machine Learning Techniques for the Estimation of Crop Water Stress Index of Wheat under Varying Application of Irrigation Water Levels for Efficient Irrigation Scheduling

Authors: Aschalew C. Workneh, K. S. Hari Prasad, C. S. P. Ojha

Abstract:

The crop water stress index (CWSI) is a cost-effective, non-destructive, and simple technique for tracking the onset of crop water stress. This study investigated the feasibility of using CWSI derived from canopy temperature to detect the water status of wheat crops. Artificial intelligence (AI) techniques have become increasingly popular in recent years for determining CWSI. In this study, the performance of two AI techniques, the adaptive neuro-fuzzy inference system (ANFIS) and self-organizing maps (SOM), is compared in determining the CWSI of wheat crops. Field experiments were conducted with varying irrigation water applications during two seasons in 2022 and 2023 at the irrigation field laboratory of the Civil Engineering Department, Indian Institute of Technology Roorkee, India. The ANFIS- and SOM-simulated CWSI values were compared with the experimentally calculated CWSI (EP-CWSI). Multiple regression analysis was used to determine the upper and lower CWSI baselines. The upper CWSI baseline was found to be a function of crop height and wind speed, while the lower CWSI baseline was a function of crop height, air vapor pressure deficit, and wind speed. The performance of ANFIS and SOM was compared based on mean absolute error (MAE), mean bias error (MBE), root mean squared error (RMSE), index of agreement (d), Nash-Sutcliffe efficiency (NSE), and coefficient of determination (R²). Both models successfully estimated the CWSI of the wheat crop with high correlation coefficients and low statistical errors; however, ANFIS (R²=0.81, NSE=0.73, d=0.94, RMSE=0.04, MAE=0.00-1.76, MBE=-2.13-1.32) outperformed the SOM model (R²=0.77, NSE=0.68, d=0.90, RMSE=0.05, MAE=0.00-2.13, MBE=-2.29-1.45). Overall, the results suggest that ANFIS is a more reliable tool than SOM for accurately determining CWSI in wheat crops.
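The goodness-of-fit statistics used in the comparison can be computed as follows; the observed/simulated CWSI values below are fabricated for illustration only and are not the study's data.

```python
# Standard model-evaluation statistics (RMSE, MAE, MBE, NSE, index of
# agreement d) on a fabricated observed/simulated series.
import math

def metrics(obs, sim):
    n = len(obs)
    mean_obs = sum(obs) / n
    err = [s - o for o, s in zip(obs, sim)]
    rmse = math.sqrt(sum(e * e for e in err) / n)
    mae = sum(abs(e) for e in err) / n
    mbe = sum(err) / n                      # signed bias
    nse = 1 - sum(e * e for e in err) / sum((o - mean_obs) ** 2 for o in obs)
    d = 1 - sum(e * e for e in err) / sum(
        (abs(s - mean_obs) + abs(o - mean_obs)) ** 2 for o, s in zip(obs, sim))
    return {"RMSE": rmse, "MAE": mae, "MBE": mbe, "NSE": nse, "d": d}

observed  = [0.10, 0.25, 0.40, 0.55, 0.70]   # fabricated CWSI values
simulated = [0.12, 0.22, 0.43, 0.52, 0.73]
m = metrics(observed, simulated)
print({k: round(v, 3) for k, v in m.items()})
```

NSE and d approach 1 for a perfect model, while RMSE, MAE, and |MBE| approach 0, which is the sense in which ANFIS "outperforms" SOM above.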

Keywords: adaptive neuro-fuzzy inference system, canopy temperature, crop water stress index, self-organizing map, wheat

Procedia PDF Downloads 55
630 Numerical Study of the Breakdown of Surface Divergence Based Models for Interfacial Gas Transfer Velocity at Large Contamination Levels

Authors: Yasemin Akar, Jan G. Wissink, Herlina Herlina

Abstract:

The effect of various levels of contamination on the interfacial air-water gas transfer velocity is studied by direct numerical simulation (DNS). The interfacial gas transfer is driven by isotropic turbulence, introduced at the bottom of the computational domain, which diffuses upwards. The isotropic turbulence is generated in a separate, concurrently running large-eddy simulation (LES). The flow fields in the main DNS and the LES are solved using fourth-order discretisations of convection and diffusion. To solve the transport of dissolved gases in water, a fifth-order-accurate WENO scheme is used for scalar convection, combined with a fourth-order central discretisation for scalar diffusion. The damping effect of surfactant contamination on the near-surface (horizontal) velocities in the DNS is modelled using horizontal gradients of the surfactant concentration. An important parameter in this model, which corresponds to the level of contamination, is ReMa/We, where Re is the Reynolds number, Ma is the Marangoni number, and We is the Weber number. It was previously found that even small levels of contamination (small ReMa/We) lead to a significant drop in the interfacial gas transfer velocity KL. It is known that KL depends on both the Schmidt number Sc (the ratio of the kinematic viscosity to the gas diffusivity in water) and the surface divergence β, i.e. K_L ∝ √(β/Sc). Previously, it has been shown that this relation works well for surfaces with low to moderate contamination; however, it breaks down for β close to zero. To study the validity of this dependence in the presence of surface contamination, simulations were carried out for ReMa/We = 0, 0.12, 0.6, 1.2, 6, 30 and Sc = 2, 4, 8, 16, 32. First, it will be shown that the scaling of KL with Sc remains valid also for larger ReMa/We. This is an important result, indicating that, for various levels of contamination, the numerical results obtained at low Schmidt numbers are also valid for significantly higher and more realistic Sc. Subsequently, it will be shown that, with increasing ReMa/We, the dependency of KL on β begins to break down, as the increased damping of near-surface fluctuations results in an increased damping of β. Especially at large levels of contamination, this damping is so severe that KL is significantly underestimated.
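The scaling relation K_L ∝ √(β/Sc) discussed above can be sketched numerically; the proportionality constant and the sample values of β and Sc below are hypothetical, chosen only to show the two behaviours the abstract describes.

```python
# Surface-divergence scaling sketch: K_L = c * sqrt(beta / Sc).
# The constant c and the beta/Sc values are illustrative assumptions.
import math

def transfer_velocity(beta, sc, c=1.0):
    """Surface-divergence model for the gas transfer velocity K_L."""
    return c * math.sqrt(beta / sc)

# Doubling Sc at fixed beta lowers K_L by a factor sqrt(2)...
k1 = transfer_velocity(beta=0.04, sc=2)
k2 = transfer_velocity(beta=0.04, sc=4)
print(round(k1 / k2, 3))

# ...while damping beta toward zero (strong contamination) drives the
# predicted K_L to zero, which is where the model breaks down.
print(transfer_velocity(beta=0.0, sc=2))
```

The second print illustrates the breakdown: the model forces K_L → 0 as β → 0, so at large ReMa/We it underestimates the actual transfer velocity.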

Keywords: contamination, gas transfer, surfactants, turbulence

Procedia PDF Downloads 300
629 Thickness-Tunable Optical, Magnetic, and Dielectric Response of Lithium Ferrite Thin Film Synthesized by Pulsed Laser Deposition

Authors: Prajna Paramita Mohapatra, Pamu Dobbidi

Abstract:

Lithium ferrite (LiFe₅O₈) has potential applications as a component of microwave magnetic devices such as circulators and monolithic integrated circuits. For efficient device applications, spinel ferrites in thin film form are highly desirable, and it is necessary to improve their magnetic and dielectric behavior by optimizing the processing parameters during deposition. The lithium ferrite thin films are deposited on a Pt/Si substrate using the pulsed laser deposition (PLD) technique. Since film thickness is the easiest parameter to control for tailoring strain, we deposited thin films of different thicknesses (160 nm, 200 nm, 240 nm) at an oxygen partial pressure of 0.001 mbar. The formation of a single phase with spinel structure (space group P4132) is confirmed by the XRD pattern and Rietveld analysis. The optical bandgap decreases with increasing thickness. FESEM confirmed the formation of uniform grains with well-separated grain boundaries. Further, the film growth and roughness were analyzed by AFM: the root-mean-square (RMS) surface roughness decreases from 13.52 nm (160 nm) to 9.34 nm (240 nm). The room temperature magnetization is measured with a maximum field of 10 kOe; the saturation magnetization is enhanced monotonically with increasing thickness, and the magnetic resonance linewidth is in the range of 450-780 Oe. The dielectric response is measured in the frequency range of 10⁴-10⁶ Hz and the temperature range of 303-473 K. With increasing frequency, the dielectric constant and loss tangent of all the samples decrease continuously, which is typical behavior of a conventional dielectric material. The real part of the dielectric constant and the dielectric loss increase with thickness. The contributions of grains and grain boundaries are also analyzed by employing an equivalent circuit model. The highest dielectric constant is obtained for the film with a thickness of 240 nm at 10⁴ Hz. The obtained results demonstrate that the desired response can be achieved by tailoring the film thickness for microwave magnetic devices.

Keywords: PLD, optical response, thin films, magnetic response, dielectric response

Procedia PDF Downloads 98
628 Relationships Between the Petrophysical and Mechanical Properties of Rocks and Shear Wave Velocity

Authors: Anamika Sahu

Abstract:

The Himalayas, like many mountainous regions, are susceptible to multiple hazards, and in recent times the frequency of such disasters has been continuously increasing due to extreme weather phenomena. These natural hazards are responsible for irreparable human and economic losses. The Indian Himalayas have repeatedly been ruptured by great earthquakes in the past and, since the region falls within a seismic gap, have the potential for a future large seismic event. Damage caused by earthquakes differs from locality to locality. It is well known that, during earthquakes, damage to structures is associated with the subsurface conditions and the quality of construction materials. So, for sustainable mountain development, prior site characterization is valuable for designing and constructing built-up areas and for efficient mitigation of seismic risk. Both geotechnical and geophysical investigation of the subsurface is required to describe its complexity. In mountainous regions, geophysical methods are gaining popularity because areas can be studied without disturbing the ground surface and because these methods are time- and cost-effective. The MASW method is used to calculate Vs30, the average shear wave velocity of the top 30 m of soil. Shear wave velocity is considered the best stiffness indicator, and the average shear wave velocity down to 30 m is used in the National Earthquake Hazards Reduction Program (NEHRP) provisions (BSSC, 1994) and the Uniform Building Code (UBC), 1997 classification. Parameters obtained through geotechnical investigation have been integrated with findings obtained through the subsurface geophysical survey. Joint interpretation has been used to establish inter-relationships among mineral constituents, various textural parameters, and unconfined compressive strength (UCS) with shear wave velocity. It is found that results obtained through the MASW method fit well with the laboratory tests. In both cases, mineral constituents and textural parameters (grain size, grain shape, grain orientation, and degree of interlocking) control the petrophysical and mechanical properties of rocks and the behavior of shear wave velocity.
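The Vs30 quantity used for the NEHRP/UBC site classes is the travel-time average of shear wave velocity over the top 30 m, which can be sketched as follows; the layered velocity profile below is illustrative, not survey data from this study.

```python
# Travel-time average shear wave velocity over the top 30 m (Vs30),
# as used in NEHRP/UBC site classification. Profile values are made up.

def vs30(layers):
    """layers: list of (thickness_m, vs_m_per_s), summing to >= 30 m."""
    depth, travel_time = 0.0, 0.0
    for h, v in layers:
        h = min(h, 30.0 - depth)        # clip the last layer at 30 m
        travel_time += h / v
        depth += h
        if depth >= 30.0:
            break
    return 30.0 / travel_time

# Hypothetical profile: soft cover over progressively stiffer layers.
profile = [(5.0, 200.0), (10.0, 350.0), (20.0, 600.0)]  # (m, m/s)
print(round(vs30(profile)))
```

Note that the travel-time (harmonic-style) average weights slow near-surface layers heavily, which is why a thin soft cover can dominate the site class.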

Keywords: MASW, mechanical, petrophysical, site characterization

Procedia PDF Downloads 86
627 Impact of Chess Intervention on Cognitive Functioning of Children

Authors: Ebenezer Joseph

Abstract:

Chess is a useful tool to enhance general and specific cognitive functioning in children. The present study aims to assess the impact of chess on cognitive functioning in children and to measure the differential impact of socio-demographic factors, such as the age and gender of the child, on the effectiveness of the chess intervention. This research used an experimental design to study the impact of training in chess on the intelligence of children. The pre-test post-test control group design was utilized, involving two groups of children: an experimental group and a control group. The experimental group consisted of children who participated in the one-year chess training intervention, while the control group participated in extra-curricular activities in school. The main independent variable was training in chess; other independent variables were the gender and age of the child. The dependent variable was the cognitive functioning of the child (as measured by IQ, working memory index, processing speed index, perceptual reasoning index, verbal comprehension index, numerical reasoning, verbal reasoning, non-verbal reasoning, social intelligence, language, conceptual thinking, memory, visual motor ability, and creativity). The sample consisted of 200 children studying in government and private schools, selected by random sampling and including both boys and girls in the age range of 6 to 16 years. The experimental group consisted of 100 children (50 from government schools and 50 from private schools) with an equal representation of boys and girls; the control group similarly consisted of 100 children. The dependent variables were assessed using the Binet-Kamat Test of Intelligence, the Wechsler Intelligence Scale for Children - IV (India), and the Wallach-Kogan Creativity Test. The training methodology comprised the Winning Moves Chess Learning Program - Episodes 1-22, lectures with the demonstration board, on-the-board playing and training, chess exercises through workbooks (Chess School 1A, Chess School 2, and tactics), and working with chess software. Furthermore, students' games were mapped using chess software to understand each child's patterns of play. They were taught the ideas behind chess openings and given exposure to classical games, and they participated in mock as well as regular tournaments. Preliminary analysis carried out using independent t-tests with 50 children indicates that chess training has led to significant increases in the intelligence quotient. Children in the experimental group have shown significant increases in composite scores such as working memory and perceptual reasoning. Chess training has significantly enhanced total creativity scores and the line drawing and pattern meaning subscale scores. Systematically learning chess as part of school activities appears to have a broad spectrum of positive outcomes.
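The independent t-test used in the preliminary analysis compares the two group means relative to their pooled within-group variability; a minimal sketch follows, with fabricated scores standing in for the study's data.

```python
# Pooled-variance independent two-sample t statistic on fabricated
# post-test scores (not the study's data).
import math

def independent_t(a, b):
    """Two-sample t statistic, equal-variance (pooled) form."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

experimental = [108, 112, 115, 110, 118]   # hypothetical post-test IQ
control      = [102, 105, 101, 107, 104]
t = independent_t(experimental, control)
print(round(t, 2))
```

The resulting t would then be compared against the critical value for na + nb - 2 degrees of freedom (or converted to a p-value with a statistics package) to judge significance.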

Keywords: chess, intelligence, creativity, children

Procedia PDF Downloads 258
626 Constructing a Semi-Supervised Model for Network Intrusion Detection

Authors: Tigabu Dagne Akal

Abstract:

While advances in computer and communications technology have made networks ubiquitous, they have also rendered networked systems vulnerable to malicious attacks devised from a distance. These attacks or intrusions start with attackers infiltrating a network through a vulnerable host and then launching further attacks on the local network or intranet. Nowadays, system administrators and network professionals can attempt to prevent such attacks by developing intrusion detection tools and systems using data mining technology. In this study, the experiments were conducted following the Knowledge Discovery in Databases process model, which starts from the selection of the datasets. The dataset used in this study was taken from the Massachusetts Institute of Technology Lincoln Laboratory. After the data were obtained, they were pre-processed. The major pre-processing activities included filling in missing values, removing outliers, resolving inconsistencies, integrating data containing both labelled and unlabelled records, dimensionality reduction, size reduction, and data transformation activities such as discretization. A total of 21,533 intrusion records were used for training the models, and a separate set of 3,397 records was used as a testing set for validating the performance of the selected model. For building a predictive model for intrusion detection, the J48 decision tree and Naïve Bayes algorithms were tested as classification approaches, both with and without feature selection. The model created using 10-fold cross-validation with the J48 decision tree algorithm and its default parameter values showed the best classification accuracy: a prediction accuracy of 96.11% on the training dataset and 93.2% on the test dataset in classifying new instances into the normal, DOS, U2R, R2L, and probe classes. The findings of this study show that data mining methods generate interesting rules that are crucial for intrusion detection and prevention in the networking industry. Future research directions are suggested toward developing an applicable system in this area.
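The 10-fold cross-validation procedure named above can be sketched generically; the snippet uses 5 folds and a trivial majority-class baseline in place of J48, since the study's WEKA setup is not reproduced here, and the label set is a toy example.

```python
# k-fold cross-validation sketch with a majority-class baseline standing
# in for the J48 classifier; labels below are a toy example.

def k_fold_indices(n, k):
    """Split indices 0..n-1 into k contiguous folds of near-equal size."""
    fold_size, folds, start = n // k, [], 0
    for i in range(k):
        extra = 1 if i < n % k else 0
        folds.append(list(range(start, start + fold_size + extra)))
        start += fold_size + extra
    return folds

def cross_val_accuracy(labels, k):
    """Accuracy of a majority-class classifier under k-fold CV."""
    folds, correct = k_fold_indices(len(labels), k), 0
    for test in folds:
        held_out = set(test)
        train = [labels[i] for i in range(len(labels)) if i not in held_out]
        majority = max(set(train), key=train.count)   # "train" the baseline
        correct += sum(1 for i in test if labels[i] == majority)
    return correct / len(labels)

# Toy label set: mostly "normal" traffic plus one record per attack class.
y = ["normal"] * 16 + ["dos", "probe", "u2r", "r2l"]
print(cross_val_accuracy(y, k=5))
```

The baseline also shows why accuracy alone can mislead on imbalanced attack data: always predicting "normal" already scores 0.8 here, so per-class measures matter.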

Keywords: intrusion detection, data mining, computer science

Procedia PDF Downloads 297
625 Environmental Catalysts for Refining Technology Application: Reduction of CO Emission and Gasoline Sulphur in Fluid Catalytic Cracking Unit

Authors: Loganathan Kumaresan, Velusamy Chidambaram, Arumugam Velayutham Karthikeyani, Alex Cheru Pulikottil, Madhusudan Sau, Gurpreet Singh Kapur, Sankara Sri Venkata Ramakumar

Abstract:

Environmentally driven regulations throughout the world stipulate dramatic improvements in the quality of transportation fuels and refining operations. Exhaust gases such as CO, NOx, and SOx from stationary sources (e.g., refineries) and motor vehicles contribute substantially to air pollution. The refining industry is under constant environmental pressure to achieve more rigorous standards on the sulphur content of transportation fuels and on other off-gas emissions. The fluid catalytic cracking unit (FCCU) is a major secondary refinery process for gasoline and diesel production. A CO-combustion promoter additive and a gasoline sulphur reduction (GSR) additive are catalytic systems used in the FCCU, alongside the main FCC catalyst, to assist the combustion of CO to CO₂ in the regenerator and to regulate sulphur in the gasoline fraction, respectively. The effectiveness of these catalysts is governed by the active metal used, its dispersion, the type of base material employed, and the retention characteristics of the additive in the FCCU, such as attrition resistance and density. The challenge is to obtain a high-density microsphere catalyst support for retention, together with high activity of the active metals, as these catalyst additives are used in low concentration compared with the main FCC catalyst. The first part of the present paper discusses the development of high-density microspheres of nanocrystalline alumina by a hydrothermal method for the CO combustion promoter application; performance evaluation of the additive conducted under simulated regenerator conditions shows CO combustion efficiency above 90%. The second part discusses the efficacy of a co-precipitation method for generating active crystalline spinels of Zn, Mg, and Cu with aluminium oxide as an additive, together with characterization and micro-activity tests using a heavy combined hydrocarbon feedstock at FCC unit conditions to evaluate gasoline sulphur reduction activity. These additives were characterized by X-ray diffraction, NH₃-TPD, N₂ sorption analysis, and TPR analysis to establish structure-activity relationships. Sulphur removal mechanisms involving hydrogen transfer, aromatization, and alkylation functionalities are established to rank the GSR additives by activity, selectivity, and gasoline sulphur removal efficiency. The sulphur shift into other liquid products, such as heavy naphtha, light cycle oil, and clarified oil, was also studied. PIONA analysis of the liquid product reveals a 20-40% reduction of sulphur in gasoline without compromising the research octane number (RON) of gasoline or the olefin content.

Keywords: hydrothermal, nanocrystalline, spinel, sulphur reduction

Procedia PDF Downloads 97
624 An Economic Study for Fish Production in Egypt

Authors: Manal Elsayed Elkheshin, Rasha Saleh Mansour, Mohamed Fawzy Mohamed Eldnasury, Mamdouh Elbadry Mohamed

Abstract:

This research aims to identify the main factors affecting fish production and consumption in Egypt through econometric estimation of various functional forms of fish production and consumption over the period 1991-2014, and to forecast Egyptian fish production and consumption until 2020 by determining the best time-series specifications using ARIMA models. This research also studies the economic feasibility of fish production in aquaculture farms; the investment cost represents the value of land, buildings, equipment, and irrigation. The aquaculture farm raises three types of fish (tilapia, carp, and mullet) on a total area of about one acre. The annual fish production of this project is about 3.5 tons, and the annual investment costs are about 50,500 pounds. The study concludes that the project can repay its investment costs after about 4 years and 5 months, with an internal rate of return (IRR) of about 22.1%; each pound invested in this project thus achieves an estimated annual return of 22.1%, more than the opportunity cost, so we recommend implementing the project. Recommendations: (1) Increase fish farming to reduce the animal protein gap. (2) Increase the number of mechanized fishing boats and provide transport equipped to maintain the quality of fish production. (3) Encourage and attract local and foreign investment, providing advice to investors in the aquaculture field. (4) Produce awareness newsletters on the importance of these projects, which yielded a net profit with cost recovery in less than five years and an IRR of about 23%, much more than the opportunity cost of a bank interest rate of about 7%, helping to create work opportunities for graduates, contributing to the reduction of fish imports, and improving the performance of the food trade balance.
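The internal rate of return cited above is the discount rate at which the project's net present value vanishes; a sketch of the computation follows, with hypothetical annual cash flows (the study's year-by-year figures are not given, so the resulting rate is illustrative, not the paper's 22.1%).

```python
# IRR by bisection on NPV(rate) = 0; cash flows are hypothetical.

def npv(rate, cashflows):
    """Net present value of cashflows[t] received at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Bisection root-finding on NPV (assumes a single sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid           # NPV still positive: the rate can go higher
        else:
            hi = mid
    return (lo + hi) / 2

# Invest 50,500 up front, then illustrative net annual profits.
flows = [-50500, 15000, 18000, 20000, 22000, 24000]
rate = irr(flows)
print(round(100 * rate, 1), "%")
```

A project is accepted when the IRR exceeds the opportunity cost of capital (the abstract's comparison against the roughly 7% bank interest rate).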

Keywords: equation model, individual share, red meat, consumption, production, endogenous variable, exogenous variable, financial performance evaluates fish culture, feasibility study, fish production, aquaculture

Procedia PDF Downloads 370
623 A Normalized Non-Stationary Wavelet Based Analysis Approach for a Computer Assisted Classification of Laryngoscopic High-Speed Video Recordings

Authors: Mona K. Fehling, Jakob Unger, Dietmar J. Hecker, Bernhard Schick, Joerg Lohscheller

Abstract:

Voice disorders originate from disturbances of the vibration patterns of the two vocal folds located within the human larynx. Consequently, the visual examination of vocal fold vibrations is an integral part of the clinical diagnostic process. For an objective analysis of the vocal fold vibration patterns, the two-dimensional vocal fold dynamics are captured during sustained phonation using an endoscopic high-speed camera. In this work, we present an approach allowing a fully automatic analysis of the high-speed video data, including a computerized classification of healthy and pathological voices. The approach is based on a wavelet analysis of so-called phonovibrograms (PVG), which are extracted from the high-speed videos and comprise the entire two-dimensional vibration pattern of each vocal fold individually. Using a principal component analysis (PCA) strategy, a low-dimensional feature set is computed from each phonovibrogram. From the PCA space, clinically relevant measures can be derived that objectively quantify vibration abnormalities. In the first part of the work, it will be shown that, using a machine learning approach, the derived measures are suitable for distinguishing automatically between healthy and pathological voices. Within the approach, the formation of the PCA space, and consequently the extracted quantitative measures, depend on the clinical data used to compute the principal components. Therefore, in the second part of the work, we propose a strategy to achieve a normalization of the PCA space by registering it to a coordinate system using a set of synthetically generated vibration patterns. The results show that, owing to the normalization step, potential ambiguity of the parameter space can be eliminated. The normalization further allows a direct comparison of research results based on PCA spaces obtained from different clinical subjects.
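The PCA step described above projects each phonovibrogram onto a few principal axes; a minimal two-feature sketch follows, with synthetic points standing in for the PVG wavelet features (the closed-form 2x2 eigen-solution is only for illustration; real feature sets need a general eigensolver).

```python
# First principal axis of 2-D points via the closed-form eigen-solution
# of the 2x2 covariance matrix; points are synthetic stand-ins.
import math

def pca_2d(points):
    """Return the first principal axis (unit vector) of 2-D points."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    # Sample covariance matrix entries.
    sxx = sum((x - mx) ** 2 for x, _ in points) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in points) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in points) / (n - 1)
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]] (2x2 closed form).
    lam = (sxx + syy) / 2 + math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
    vx, vy = lam - syy, sxy               # eigenvector for lam (sxy != 0)
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm

# Points scattered along y = 2x: the first axis should align with that line.
pts = [(0, 0), (1, 2.1), (2, 3.9), (3, 6.05), (4, 8.0)]
ax, ay = pca_2d(pts)
print(round(ay / ax, 2))
```

Projecting each sample onto such axes is what yields the low-dimensional feature set, and the registration step above amounts to fixing the orientation of these axes across datasets.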

Keywords: wavelet-based analysis, multiscale product, normalization, computer assisted classification, high-speed laryngoscopy, vocal fold analysis, phonovibrogram

Procedia PDF Downloads 266
622 A Study on ZnO Nanoparticles Properties: An Integration of Rietveld Method and First-Principles Calculation

Authors: Kausar Harun, Ahmad Azmin Mohamad

Abstract:

Zinc oxide (ZnO) has been extensively used in optoelectronic devices, with recent interest as a photoanode material in dye-sensitized solar cells. Numerous methods have been employed to synthesize ZnO experimentally, while others model it theoretically. Both approaches provide information on ZnO properties, but theoretical calculation has proved more accurate and time-effective. Thus, integration between these two methods is essential to closely resemble the properties of synthesized ZnO. In this study, experimentally grown ZnO nanoparticles were prepared by a sol-gel storage method with zinc acetate dihydrate as precursor and methanol as solvent. A 1 M sodium hydroxide (NaOH) solution was used as stabilizer. The optimum time to produce ZnO nanoparticles was found to be 12 hours. Phase and structural analysis showed that single-phase ZnO with the wurtzite hexagonal structure was produced. Further quantitative analysis was done via Rietveld refinement to obtain structural and crystallite parameters such as lattice dimensions, space group, and atomic coordinates. The lattice dimensions were a = b = 3.2498 Å and c = 5.2068 Å, which were later used as the main input in the first-principles calculations. Applying density functional theory (DFT) as embedded in the CASTEP code, the structure of the synthesized ZnO was built and optimized using several exchange-correlation functionals. The generalized gradient approximation functional with Perdew-Burke-Ernzerhof and Hubbard U corrections (GGA-PBE+U) produced the structure with the lowest energy and the smallest lattice deviations. In this study, emphasis is also given to the correction of the valence electron energy levels to overcome the underestimation inherent in DFT calculations; the Zn and O corrections were fixed at Ud = 8.3 eV and Up = 7.3 eV, respectively. The electronic and optical properties of the synthesized ZnO were then calculated with the GGA-PBE+U functional within the ultrasoft pseudopotential method. In conclusion, the incorporation of Rietveld analysis into first-principles calculation is valid, as the resulting properties are comparable with those reported in the literature. The time otherwise taken to evaluate certain properties via physical testing is thereby eliminated, as the simulation can be done through computational methods.

Keywords: density functional theory, first-principles, Rietveld-refinement, ZnO nanoparticles

Procedia PDF Downloads 309
621 Sexual Health and Male Fertility: Improving Sperm Health with Focus on Technology

Authors: Diana Peninger

Abstract:

Over 10% of couples in the U.S. have infertility problems, with roughly 40% traceable to the male partner. Yet little attention has been given to improving men’s contribution to the conception process. One solution showing promise in increasing conception rates for IVF and other assisted reproductive technology treatments is a first-of-its-kind semen collection device engineered to mitigate the sperm damage caused by traditional collection methods. Patients are able to collect semen at home and deliver it to clinics within 48 hours for use in fertility analysis and treatment, with less stress and improved specimen viability. This abstract shares these findings along with expert insight and tips to help attendees understand the key role sperm collection plays in addressing and treating reproductive issues while helping to improve patient outcomes and success. Our research sought to determine whether male reproductive outcomes can be improved by improving sperm specimen health, with a focus on technology. We utilized a redesigned semen collection cup (patented as the Device for Improved Semen Collection/DISC, U.S. Patent 6864046, known commercially as ProteX) that met a series of physiological parameters. Previous research demonstrated significant improvement in semen parameters (forward motility, progression, viability, and longevity) and overall sperm biochemistry when the DISC is used for collection. Animal studies have also shown dramatic increases in pregnancy rates. Our current study compares samples collected in the DISC, the next-generation DISC (DISCng), and a standard specimen cup (SSC): dry, with a measured amount of media (1 mL), and with media in excess (5 mL). Both human and animal testing are included. With sperm counts declining at alarming rates due to environmental, lifestyle, and other health factors, accurate evaluations of sperm health are critical to understanding reproductive health and the origins and treatments of infertility.
We also demonstrated an increase in sperm health, as measured by extensive semen parameter analysis, with improved semen parameters remaining stable for 48 hours, expanding the viable processing window from 1 hour to 48 hours.

Keywords: reproductive, sperm, male, infertility

Procedia PDF Downloads 131
620 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults

Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter

Abstract:

Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under the heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and a set of spontaneous source models was then generated over a large magnitude range (Mw > 7.0). In order to validate the rupture models, we compare the scaling relations of modeled rupture area S, average slip Dave, and slip asperity area Sa vs. seismic moment Mo with similar scaling relations from source inversions. Ground motions were also computed from our models. Their peak ground velocities (PGV) agree well with GMPE values. We obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed parameters that are critical for ground motion simulations, i.e. distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with or are located on the outer edge of large slip areas, (2) ruptures tend to initiate in small-Dc areas, and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity, and short rise time.
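For reference, the magnitude range quoted (Mw > 7.0) converts to seismic moment Mo through the standard Hanks-Kanamori moment magnitude relation, Mw = (2/3)(log10 Mo - 9.1) with Mo in N·m. This relation is standard seismology, not a result specific to this paper; a minimal sketch:

```python
import math

def mw_to_moment(mw: float) -> float:
    """Seismic moment Mo (N*m) from moment magnitude Mw (Hanks-Kanamori)."""
    return 10 ** (1.5 * mw + 9.1)

def moment_to_mw(mo: float) -> float:
    """Moment magnitude Mw from seismic moment Mo (N*m)."""
    return (2.0 / 3.0) * (math.log10(mo) - 9.1)

# The paper's lower magnitude bound, Mw 7.0:
print(f"Mo(Mw=7.0) = {mw_to_moment(7.0):.3e} N*m")  # ~3.981e+19
```

Each simulated rupture model's moment can thus be mapped onto the Mo axis of the scaling relations being validated.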

Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization

Procedia PDF Downloads 144
619 Optimizing Fermented Paper Production Using Spirogyra sp. Interpolated with Banana Pulp

Authors: Hadiatullah, T. S. D. Desak Ketut, A. A. Ayu, A. N. Isna, D. P. Ririn

Abstract:

Spirogyra sp. is a genus of microalgae with a high carbohydrate content, which makes it a good medium for bacterial fermentation to produce cellulose. This study aimed to determine the effect of banana pulp on the fermented paper production process using Spirogyra sp. and to characterize the paper product. The methods include the production of bacterial cellulose, assay of the effect of interpolating banana pulp into fermented paper produced using Spirogyra sp., and assay of paper characteristics, including grammage, water absorption, thickness, tensile strength, tear resistance, density, and organoleptic properties. Experiments were carried out in a completely randomized design with variation in treatment concentration in fermented paper production interpolated with banana pulp using Spirogyra sp. Data for each parameter were analyzed by ANOVA, followed by a significant-difference post hoc test at the 5% error level, using SPSS. The nata production results indicate that the different carbon sources (glucose and sugar) did not produce significant differences in the cellulose parameters assayed; significantly different results appeared only for the control treatment. Although the added carbon sources did not differ significantly, sugar showed higher potential to produce cellulose. Characterization of the fermented paper showed that the control treatment, without interpolation of a carbon source and banana pulp, had a higher grammage than the banana pulp interpolations: the control grammage was 260 gsm, comparable to cardboard, while paper produced with banana pulp interpolation had grammages of about 120-200 gsm, comparable to magazine paper and art paper.
The density, weight, and water absorption assays gave their highest results for the banana pulp interpolation with sugar as the carbon source: 14.28 g/m², 0.02 g, and 0.041 g/cm²·min, respectively; the organoleptic assay likewise favored this treatment. We conclude that paper made from nata cellulose interpolated with sugar and banana pulp is a potential formulation for producing high-quality paper.
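Grammage, the figure used above to grade the papers, is simply sheet mass per unit area (g/m²). A minimal sketch, with the classification bands taken from the grammage ranges quoted in this abstract rather than from any paper standard:

```python
def grammage(mass_g: float, area_m2: float) -> float:
    """Paper grammage in gsm: sheet mass (g) divided by sheet area (m^2)."""
    return mass_g / area_m2

def grade(gsm: float) -> str:
    # Bands from the abstract: ~260 gsm -> cardboard-like,
    # ~120-200 gsm -> magazine/art-paper-like.
    if gsm >= 260:
        return "cardboard grade"
    if 120 <= gsm <= 200:
        return "magazine/art-paper grade"
    return "other"

# Example: an A4 sheet (0.21 m x 0.297 m) weighing 16.22 g
gsm = grammage(16.22, 0.21 * 0.297)
print(f"{gsm:.0f} gsm -> {grade(gsm)}")  # 260 gsm -> cardboard grade
```

The example sheet dimensions and mass are hypothetical, chosen to land at the control treatment's 260 gsm.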

Keywords: cellulose, fermentation, grammage, paper, Spirogyra sp.

Procedia PDF Downloads 333
618 Comparison of Yb- and Tm-Fiber Laser Cutting Processes of Fiber Reinforced Plastics

Authors: Oktay Celenk, Ugur Karanfil, Iskender Demir, Samir Lamrini, Jorg Neumann, Arif Demir

Abstract:

Due to their favourable material characteristics, fiber reinforced plastics are among the main topics of all current lightweight construction megatrends. Especially in transportation, in trends ranging from aeronautics over the automotive industry to naval transportation (yachts, cruise liners), the expected economic and environmental impact is huge. In naval transportation, components like yacht bodies, antenna masts, and decorative structures like deck lamps, lighthouses, and pool areas represent cheap and robust solutions. Commercially available laser tools like carbon dioxide (CO₂) gas lasers, frequency-tripled solid-state UV lasers, and Neodymium-YAG (Nd:YAG) lasers can be used; these tools have emission wavelengths of 10 µm, 0.355 µm, and 1.064 µm, respectively. The first scientific goal is the generation of a parameter matrix for laser processing of each material used, for a Tm-fiber laser system (wavelength 2 µm). These parameters include the heat affected zone, process gas pressure, workpiece feed velocity, intensity, irradiation time, etc. The results are compared with results obtained with well-known material processing lasers, such as Yb-fiber lasers (wavelength 1 µm). Compared to the CO₂ laser, the Tm laser offers essential advantages for future laser processes like cutting, welding, ablating for repair, and drilling in composite part manufacturing (components of cruise liners, marine pipelines). Among these are beam delivery in a standard fused silica fiber, which enables hand-guided processing; eye safety, which results from the wavelength; and excellent beam quality and brilliance due to the fiber nature. One further feature that is economically important for boat, automotive, and military manufacturing is that the 2 µm wavelength is highly absorbed by the plastic matrix, enabling selective removal of the matrix for repair procedures.
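Of the process parameters listed, intensity ties the laser power to the focused spot size. Assuming a Gaussian beam (the abstract does not state the beam profile), the peak intensity is I0 = 2P/(π·w0²); a minimal sketch with hypothetical power and waist values:

```python
import math

def peak_intensity(power_w: float, waist_m: float) -> float:
    """Peak intensity (W/m^2) of a Gaussian beam with power P and
    beam waist w0: I0 = 2 * P / (pi * w0^2)."""
    return 2.0 * power_w / (math.pi * waist_m ** 2)

# Hypothetical example: a 100 W Tm-fiber laser focused to a 50 um waist.
i0 = peak_intensity(100.0, 50e-6)
print(f"I0 = {i0:.2e} W/m^2")  # ~2.55e+10
```

Such a calculation would populate the intensity column of the parameter matrix for each power and focusing condition tested.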

Keywords: Thulium (Tm) fiber laser, laser processing of fiber-reinforced plastics (FRP), composite, heat affected zone

Procedia PDF Downloads 193
617 The Grade Six Pupils' Learning Styles and Their Achievements and Difficulties on Fractions Based on Kolb's Model

Authors: Faiza Abdul Latip

Abstract:

One of the ultimate goals of any nation, including the Philippines, is to produce competitive manpower, and proficiency in the field of mathematics has a significant role in achieving this goal. However, mathematics is considered by most people to be the most difficult subject to learn, along with its component topics. This is manifested in the low performance of students in national and international assessments. Educators have widely used learning style models to identify the ways students learn; such models can also be the front line in identifying the difficulties each learner has with a particular topic, such as concepts pertaining to fractions. As many educators have observed, students show difficulties in doing mathematical tasks, and to a great degree in dealing with fractions, most notably in the district of Datu Odin Sinsuat, Maguindanao. This study focused on the learning styles of grade six pupils in the Datu Odin Sinsuat district, along with their achievements and difficulties in learning fraction concepts. Five hundred thirty-two pupils from ten public elementary schools of the Datu Odin Sinsuat districts were purposively selected as respondents. A descriptive research design using the survey method was employed. Quantitative analyses were made of the pupils’ learning styles from the Kolb Learning Style Inventory (KLSI) and of their scores on a mathematics diagnostic test on fraction concepts. Simple frequency and percentage counts were used to analyze the pupils’ learning styles and their achievements on fractions. To determine the pupils’ difficulties with fractions, the index of difficulty of every item was determined. Lastly, the Kruskal-Wallis test, set at the 0.05 level of significance, was used to determine whether the pupils’ achievements on fractions differed significantly across learning styles.
A critical H-value of 7.82 was used to determine the significance of the test. The results revealed that the pupils of the Datu Odin Sinsuat districts learn fractions in varied ways, as they have different learning styles. However, their achievement in fractions is low regardless of learning style. Difficulties in learning fractions were found most in the areas of estimation, comparing/ordering, and the division interpretation of fractions. Most of the pupils find it very difficult to use a fraction as a measure, to compare or arrange a series of fractions, and to use the concept of a fraction as a quotient.
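The critical value 7.82 is the upper 5% point of the chi-square distribution with 3 degrees of freedom, consistent with comparing four learning-style groups (df = k - 1). A minimal sketch of the H statistic the test computes, run on made-up scores rather than the study's data:

```python
from itertools import chain

def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic for k independent groups
    (mid-ranks assigned to ties; no tie correction applied)."""
    pooled = sorted(chain.from_iterable(groups))
    n = len(pooled)
    rank = {}
    i = 0
    while i < n:  # walk runs of tied values
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    return 12.0 / (n * (n + 1)) * sum(
        sum(rank[x] for x in g) ** 2 / len(g) for g in groups
    ) - 3.0 * (n + 1)

# Hypothetical fraction-test scores for four learning-style groups.
groups = ([12, 15, 9], [20, 18, 25], [7, 11, 14], [16, 22, 19])
h = kruskal_wallis_h(*groups)
print(f"H = {h:.3f}; significant at the 0.05 level iff H > 7.82")
```

The decision rule mirrors the one in the study: reject equality of the four groups' achievement distributions only when H exceeds 7.82.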

Keywords: difficulties in fraction, fraction, Kolb's model, learning styles

Procedia PDF Downloads 216
616 Analysis of Friction Stir Welding Process for Joining Aluminum Alloy

Authors: A. M. Khourshid, I. Sabry

Abstract:

Friction stir welding (FSW), a solid-state joining technique, is widely used for joining Al alloys in aerospace, marine, automotive, and many other applications of commercial importance. FSW was carried out using a vertical milling machine on Al 5083 alloy pipe. These pipe sections are relatively small in diameter, 5 mm, and relatively thin-walled, 2 mm. In this study, 5083 aluminum alloy pipes were welded as similar-alloy joints using the FSW process, at a rotation speed of 1400 rpm and weld speeds of 10, 40, and 70 mm/min, in order to investigate mechanical and microstructural properties. To investigate the effect of welding speed on mechanical properties, metallographic and mechanical tests were carried out on the welded areas, including Vickers hardness profiles and tensile tests of the joints. To study the metallurgical feasibility of friction stir welding for joining Al 6061 aluminum alloy, welding was performed on pipes of different thicknesses (2, 3, and 4 mm), at five rotational speeds (485, 710, 910, 1120, and 1400 rpm) and three traverse speeds (4, 8, and 10 mm/min). This work applies two methods, artificial neural networks using the Pythia software and response surface methodology (RSM), to predict the tensile strength, percentage elongation, and hardness of friction stir welded 6061 aluminum alloy. An artificial neural network (ANN) model was developed for the analysis of the friction stir welding parameters of 6061 pipe. The tensile strength, percentage elongation, and hardness of the weld joints were predicted as functions of tool rotation speed, material thickness, and travel speed. A comparison was made between measured and predicted data. Response surface methodology (RSM) models were also developed, and the values obtained for the responses (tensile strength, percentage elongation, and hardness) were compared with measured values. The effect of the FSW process parameters on the mechanical properties of 6061 aluminum alloy is analyzed in detail.

Keywords: friction stir welding (FSW), Al alloys, mechanical properties, microstructure

Procedia PDF Downloads 464