Search results for: full-potential KKR Green's function method
21403 Estimation of Particle Size Distribution Using Magnetization Data
Authors: Navneet Kaur, S. D. Tiwari
Abstract:
Magnetic nanoparticles possess fascinating properties which make their behavior unique in comparison to the corresponding bulk materials. Superparamagnetism is one such interesting phenomenon, exhibited only by small particles of magnetic materials. In this state, the thermal energy of the particles becomes larger than their magnetic anisotropy energy, so the particle magnetic moment vectors fluctuate between states of minimum energy. This situation is similar to the paramagnetism of non-interacting ions and is termed superparamagnetism. The magnetization of such systems has been described by the Langevin function, but the fit parameters estimated in this way are found to be unphysical because the particle size distribution is not taken into account. In this work, an analysis of magnetization data on NiO nanoparticles is presented that considers the effect of the particle size distribution. NiO nanoparticles of two different sizes were prepared by heating freshly synthesized Ni(OH)₂ at different temperatures. Room temperature X-ray diffraction patterns confirm the formation of single-phase NiO. The diffraction lines are quite broad, indicating the nanocrystalline nature of the samples. The average crystallite sizes are estimated to be about 6 and 8 nm. The samples were also characterized by transmission electron microscopy. The magnetization of both samples was measured as a function of temperature and applied magnetic field. Zero-field-cooled and field-cooled magnetization were measured as a function of temperature to determine the bifurcation temperature. The magnetization was also measured at several temperatures in the superparamagnetic region. The data were fitted to an appropriate expression incorporating a distribution in particle size, following a least-squares fit procedure. The computer codes are written in Python. The presented analysis is found to be very useful for estimating the particle size distribution present in the samples.
The estimated distributions are compared with those determined from transmission electron micrographs.
Keywords: anisotropy, magnetization, nanoparticles, superparamagnetism
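The abstract states that the analysis codes are written in Python. As a hedged illustration of the kind of model involved, the sketch below averages the Langevin function over a lognormal particle size distribution; all parameter values passed in (saturation magnetization, median diameter, distribution width) are illustrative placeholders, not the values fitted for the NiO samples.

```python
import numpy as np

def langevin(x):
    # L(x) = coth(x) - 1/x, with the small-argument limit L(x) ~ x/3
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-6
    safe = np.where(small, 1.0, x)
    return np.where(small, x / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

def mean_magnetization(B, T, Ms, d_median, sigma, n=400):
    """Volume-weighted Langevin magnetization for a lognormal distribution
    of particle diameters. B in tesla, T in kelvin, Ms in A/m, d_median in m.
    All material parameters are hypothetical, for illustration only."""
    kB = 1.380649e-23                        # Boltzmann constant, J/K
    D = np.linspace(1e-9, 30e-9, n)          # particle diameters, m
    dD = D[1] - D[0]
    # lognormal number distribution of diameters
    f = np.exp(-np.log(D / d_median) ** 2 / (2 * sigma ** 2)) / (D * sigma * np.sqrt(2 * np.pi))
    V = np.pi * D ** 3 / 6                   # particle volume
    mu = Ms * V                              # particle moment, A*m^2
    w = f * V                                # volume weighting of each size
    L = langevin(np.outer(np.atleast_1d(B), mu) / (kB * T))
    return Ms * (L @ (w * dD)) / np.sum(w * dD)
```

In a fit of the kind described, the distribution parameters (here `d_median`, `sigma`) would be the adjustable quantities in a least-squares loop over the measured M(B, T) data.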
Procedia PDF Downloads 143
21402 A Hybrid Adomian Decomposition Method in the Solution of the Logistic Abelian Ordinary Differential Equation and Its Comparison with Some Standard Numerical Schemes
Authors: F. J. Adeyeye, D. Eni, K. M. Okedoye
Abstract:
In this paper we present a hybrid of the Adomian decomposition method (ADM), obtained by substituting a one-step Taylor series approximation of orders I and II into the nonlinear part of the Adomian decomposition method, resulting in a convergent series scheme. This scheme is applied to solve some logistic problems represented as Abelian differential equations, and the results are compared with the exact solution and with the fourth-order Runge–Kutta method in order to ascertain the accuracy and efficiency of the scheme. The findings show that the scheme is efficient enough to solve the logistic problems considered in this paper.
Keywords: Adomian decomposition method, nonlinear part, one-step method, Taylor series approximation, hybrid of Adomian polynomial, logistic problem, Malthusian parameter, Verhulst model
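As a rough illustration of the plain (non-hybrid) Adomian decomposition that the paper builds on, the sketch below generates the ADM series for the logistic (Verhulst) equation dy/dt = r·y·(1 − y/K) using the Adomian polynomials of the nonlinearity N(y) = y²; the authors' hybrid Taylor substitution itself is not reproduced here.

```python
# Polynomials in t are represented as coefficient lists [c0, c1, c2, ...].

def poly_mul(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_int(a):
    # definite integral from 0 to t of the polynomial a(t)
    return [0.0] + [c / (k + 1) for k, c in enumerate(a)]

def adm_logistic(y0, r, K, n_terms=6):
    """Adomian series y = y0 + y1 + ... for dy/dt = r*y*(1 - y/K), y(0) = y0.
    A_n are the Adomian polynomials of the nonlinear term N(y) = y^2."""
    ys = [[y0]]
    for n in range(n_terms - 1):
        # A_n = sum_{i=0}^{n} y_i * y_{n-i}
        An = [0.0]
        for i in range(n + 1):
            p = poly_mul(ys[i], ys[n - i])
            An = [a + b for a, b in zip(An + [0.0] * len(p), p + [0.0] * len(An))]
        # y_{n+1}(t) = integral of (r*y_n - (r/K)*A_n)
        rhs = [r * c for c in ys[n]]
        rhs = [a - (r / K) * b for a, b in zip(rhs + [0.0] * len(An), An + [0.0] * len(rhs))]
        ys.append(poly_int(rhs))
    return ys

def eval_series(ys, t):
    return sum(c * t ** k for poly in ys for k, c in enumerate(poly))
```

For small t, the truncated series agrees closely with the exact logistic solution K/(1 + (K/y0 − 1)e^(−rt)), which is the kind of benchmark comparison the abstract describes.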
Procedia PDF Downloads 400
21401 Analysis of Radiation-Induced Liver Disease (RILD) and Evaluation of Relationship between Therapeutic Activity and Liver Clearance Rate with Tc-99m-Mebrofenin in Yttrium-90 Microspheres Treatment
Authors: H. Tanyildizi, M. Abuqebitah, I. Cavdar, M. Demir, L. Kabasakal
Abstract:
Aim: Whole liver radiation has a modest benefit in the treatment of unresectable hepatic metastases, but the radiation doses must be kept under control; otherwise, RILD complications may arise. In this study, we aimed to calculate the maximum permissible activity (MPA) and critical organ absorbed doses with MIRD methodology, to evaluate tumour doses for treatment response and whole liver doses for RILD, and, additionally, to find the optimal liver function test. Materials and Methods: This study includes 29 patients who underwent Y-90 microspheres treatment in our nuclear medicine department. For dosimetry, 10 mCi of Tc-99m MAA was administered to the patients intravenously. Whole body SPECT/CT images were taken one hour after the injection. Taking the minimum therapeutic tumour dose to be 120 Gy, the activities were calculated with MIRD methodology considering the volumetric tumour/liver ratio. A sub-working group of 11 patients was created randomly, and the liver clearance rate with Tc-99m-Mebrofenin was calculated according to the Ekman formalism. Results: The volumetric tumour/liver ratio was between 33-66% (maximum tolerable dose (MTD) 48-52 Gy) for 4 patients, and less than 33% (MTD 72 Gy) for 25 patients. According to these results, the average amount of activity, mean liver dose and mean tumour dose were found to be 1793.9±1.46 MBq, 32.86±0.19 Gy, and 138.26±0.40 Gy, respectively. RILD was not observed in any patient. In the sub-working group, the correlations between the calculated activity amounts and Bilirubin, Albumin, INR (which show the presence of liver disease and its degree), and liver clearance with Tc-99m-Mebrofenin were found to be r=0.49, r=0.27, r=0.43, and r=0.57, respectively. Discussion: The minimum tumour dose was taken as 120 Gy for a positive dose-response relation. If the volumetric tumour/liver ratio was > 66%, the dose was 30 Gy; if it was 33-66%, the dose was escalated to 48 Gy; if it was < 33%, the dose was 72 Gy.
These dose limitations did not lead to RILD. It was concluded that clearance measurement with Mebrofenin is the best method to determine liver function. Therefore, the liver clearance rate with Tc-99m-Mebrofenin should be considered in the dosimetry calculations for yttrium-90 microspheres.
Keywords: clearance, dosimetry, liver, RILD
Procedia PDF Downloads 440
21400 Flashover Detection Algorithm Based on Mother Function
Authors: John A. Morales, Guillermo Guidi, B. M. Keune
Abstract:
Electric power supply is a crucial topic for economic and social development. Power outage statistics show that atmospheric discharges are a major cause of those outages. In this context, it is necessary to correctly detect when overhead line insulators are faulted. In this paper, an algorithm to detect whether or not a lightning stroke generates a permanent fault on insulator strings is proposed. For this purpose, lightning stroke simulations developed using the Alternative Transients Program are used. Based on these insights, a novel approach is designed that relies on the analysis of mother functions corresponding to the given variance-covariance matrix. Signals registered at the insulator string are projected on corresponding axes by means of Principal Component Analysis. By exploiting these new axes, it is possible to determine a flashover characteristic zone useful for good insulation design. The proposed methodology for flashover detection extends the existing approaches for the analysis and study of lightning performance on transmission lines.
Keywords: mother function, outages, lightning, sensitivity analysis
Procedia PDF Downloads 586
21399 Forced Degradation Study of Rifaximin Formulated Tablets to Determine Stability Indicating Nature of High-Performance Liquid Chromatography Analytical Method
Authors: Abid Fida Masih
Abstract:
A forced degradation study of Rifaximin was conducted to determine the stability-indicating potential of an HPLC testing method for the detection of Rifaximin in formulated tablets, to be employed for quality control and stability testing. The method under examination used a mobile phase of methanol:water (70:30), a 5 µm, 250 x 4.6 mm C18 column, a wavelength of 293 nm and a flow rate of 1.0 ml/min. The forced degradation study was performed under oxidative, acidic, basic, thermal and photolytic conditions. The applied method successfully determined the degradation products formed after acidic and basic degradation without interference with the detection of Rifaximin. Therefore, the method was shown to be stability-indicating and can be applied for quality control and stability testing of Rifaximin tablets during their shelf life.
Keywords: forced degradation, high-performance liquid chromatography, method validation, rifaximin, stability indicating method
Procedia PDF Downloads 314
21398 Concentration of Droplets in a Transient Gas Flow
Authors: Timur S. Zaripov, Artur K. Gilfanov, Sergei S. Sazhin, Steven M. Begg, Morgan R. Heikal
Abstract:
The calculation of the concentration of inertial droplets in complex flows is encountered in the modelling of numerous engineering and environmental phenomena; for example, fuel droplets in internal combustion engines and airborne pollutant particles. The results of recent research, focused on the development of methods for calculating concentration and their implementation in the commercial CFD code ANSYS Fluent, are presented here. The study is motivated by the investigation of the mixture preparation processes in internal combustion engines with direct injection of fuel sprays. Two methods are used in our analysis: the fully Lagrangian method (also known as the Osiptsov method) and the Eulerian approach. The Osiptsov method predicts droplet concentrations along path lines by solving the equations for the components of the Jacobian of the Eulerian-Lagrangian transformation. This method significantly decreases the computational requirements, as it does not require the tracking of large numbers of droplets as in the conventional Lagrangian approach. In the Eulerian approach, the average droplet velocity is expressed as a function of the carrier phase velocity as an expansion over the droplet response time, and the transport equation can be solved in Eulerian form. The advantage of this method is that the droplet velocity can be found without solving additional partial differential equations for the droplet velocity field. The predictions of the two approaches were compared in the analysis of the problem of a dilute gas-droplet flow around an infinitely long circular cylinder. The concentrations of inertial droplets, with Stokes numbers of 0.05, 0.1 and 0.2, in steady-state and transient laminar flow conditions, were determined at various Reynolds numbers. In the steady-state case, flows with Reynolds numbers of 1, 10, and 100 were investigated.
It has been shown that the results predicted by both methods are almost identical at small Reynolds and Stokes numbers. For larger values of these numbers (Stokes — 0.1, 0.2; Reynolds — 10, 100) the Eulerian approach predicted a wider spread in concentration in the perturbations caused by the cylinder, which can be attributed to the averaged droplet velocity field. The transient droplet flow case was investigated for a Reynolds number of 200. Both methods predicted a high droplet concentration in the zones of high strain rate and low concentrations in zones of high vorticity. The maxima of droplet concentration predicted by the Osiptsov method were up to two orders of magnitude greater than those predicted by the Eulerian method; a significant variation for an approach widely used in engineering applications. Based on the results of these comparisons, the Osiptsov method gives a more precise description of the local properties of the inertial droplet flow. The method has been applied to the analysis of the results of experimental observations of a liquid gasoline spray at representative fuel injection pressure conditions. The preliminary results show good qualitative agreement between the predictions of the model and the experimental data.
Keywords: internal combustion engines, Eulerian approach, fully Lagrangian approach, gasoline fuel sprays, droplets and particle concentrations
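As a hedged one-dimensional illustration of the fully Lagrangian (Osiptsov) idea described above — integrating the Jacobian of the Eulerian-Lagrangian transformation along a droplet path and recovering the concentration from continuity — the sketch below uses a simple converging carrier flow u(x) = −kx with Stokes drag. The flow and all parameter values are illustrative, not the cylinder flow of the study.

```python
import math

def fully_lagrangian_1d(x0, k=1.0, tau=1e-3, t_end=1.0, dt=1e-4, n0=1.0):
    """Track one droplet in the carrier flow u(x) = -k*x with Stokes drag of
    response time tau, and integrate the Jacobian J = dx/dx0 along its path.
    Continuity then gives the droplet number density as n = n0 / |J|."""
    u = lambda x: -k * x
    du = -k                       # du/dx, constant for this linear flow
    x, v = x0, u(x0)              # start at the local carrier velocity
    J, w = 1.0, du                # w = dv/dx0, consistent with v(0) = u(x0)
    t = 0.0
    while t < t_end - 1e-12:
        ax = (u(x) - v) / tau     # Stokes drag acceleration
        aw = (du * J - w) / tau   # linearized drag acting on the Jacobian
        x, v = x + dt * v, v + dt * ax
        J, w = J + dt * w, w + dt * aw
        t += dt
    return J, n0 / abs(J)
```

In the tracer limit (tau → 0) the Jacobian for this flow tends to exp(−k·t), so the converging flow compresses the droplet field and the concentration grows, without any droplet counting being needed.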
Procedia PDF Downloads 257
21397 Improved K-Means Clustering Algorithm Using RHadoop with Combiner
Authors: Ji Eun Shin, Dong Hoon Lim
Abstract:
Data clustering is a common technique used in data analysis and in many applications, such as artificial intelligence, pattern recognition, economics, ecology, psychiatry and marketing. K-means clustering is a well-known clustering algorithm that aims to assign a set of data points to a predefined number of clusters. In this paper, we implement the K-means algorithm on the MapReduce framework with RHadoop to make the clustering method applicable to large-scale data. RHadoop is a collection of R packages that allow users to manage and analyze data with Hadoop. The main idea is to introduce a combiner as a function on the map output to decrease the amount of data that must be processed by the reducers. The experimental results demonstrate that the K-means algorithm using RHadoop can scale well and efficiently process large data sets on commodity hardware. We also show that our K-means algorithm using RHadoop with a combiner is faster than the regular algorithm without a combiner as the size of the data set increases.
Keywords: big data, combiner, K-means clustering, RHadoop
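The combiner idea described above can be sketched in plain Python (RHadoop itself is an R toolchain, so this only illustrates the map/combine/reduce dataflow, not the authors' implementation): the combiner pre-aggregates per-cluster partial sums on each mapper, so reducers receive one record per cluster per split instead of one record per point.

```python
from collections import defaultdict

def nearest(point, centroids):
    # index of the centroid closest to the point (squared Euclidean distance)
    return min(range(len(centroids)),
               key=lambda j: sum((p - c) ** 2 for p, c in zip(point, centroids[j])))

def map_phase(points, centroids):
    # map: emit (cluster_id, (point, 1)) for each point in one input split
    return [(nearest(p, centroids), (p, 1)) for p in points]

def combine(mapped):
    # combiner: collapse a split's map output into one partial (sum, count)
    # per cluster before anything is shuffled to the reducers
    acc = defaultdict(lambda: [None, 0])
    for cid, (p, c) in mapped:
        s, n = acc[cid]
        acc[cid] = [p if s is None else [a + b for a, b in zip(s, p)], n + c]
    return [(cid, (tuple(s), n)) for cid, (s, n) in acc.items()]

def reduce_phase(combined, k, dim):
    # reduce: merge partial sums from all splits and emit the new centroids
    sums = {cid: [0.0] * dim for cid in range(k)}
    counts = {cid: 0 for cid in range(k)}
    for cid, (s, n) in combined:
        sums[cid] = [a + b for a, b in zip(sums[cid], s)]
        counts[cid] += n
    return [tuple(x / counts[cid] for x in sums[cid]) for cid in range(k) if counts[cid]]
```

One K-means iteration runs `map_phase` and `combine` per split, concatenates the combiner outputs, and reduces them; the resulting centroids are identical to the plain per-cluster means, only the shuffle volume changes.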
Procedia PDF Downloads 438
21396 A Numerical Method for Diffusion and Cahn-Hilliard Equations on Evolving Spherical Surfaces
Authors: Jyh-Yang Wu, Sheng-Gwo Chen
Abstract:
In this paper, we present a simple and effective numerical geometric method to estimate the divergence of a vector field over a curved surface. The conservation law is an important principle in physics and mathematics. However, many well-known numerical methods for solving diffusion equations do not obey conservation laws. The method presented in this paper combines the divergence theorem with a generalized finite difference method and obeys the conservation law on discrete closed surfaces. We use a similar method to solve the Cahn-Hilliard equations on evolving spherical surfaces and observe stability results in our numerical simulations.
Keywords: conservation laws, diffusion equations, Cahn-Hilliard equations, evolving surfaces
Procedia PDF Downloads 494
21395 Development of Fault Diagnosis Technology for Power System Based on Smart Meter
Authors: Chih-Chieh Yang, Chung-Neng Huang
Abstract:
In power systems, improving the fault diagnosis technology of transmission lines has always been a primary goal of grid operators. In recent years, due to the rise of green energy, the addition of various kinds of distributed generation has also affected the stability of the power system. Smart meters offer data recording and bidirectional transmission, while the adaptive neuro-fuzzy inference system (ANFIS) provides the learning and estimation capabilities of artificial intelligence. For the transmission network, in order to avoid misjudgment of the fault type and location due to the input of these unstable power sources, a method for identifying fault types and fault locations that combines the above advantages of smart meters and ANFIS is proposed in this study. In ANFIS training, the bus voltage and current information collected by smart meters is trained through the ANFIS tool in MATLAB to generate fault codes that identify different types of faults and the location of faults. In addition, due to the uncertainty of distributed generation, a wind power system is added to the transmission network to verify the diagnostic correctness of the study. Simulation results show that the method proposed in this study can correctly and efficiently identify the fault type and fault location, and can deal with the interference caused by the addition of unstable power sources.
Keywords: ANFIS, fault diagnosis, power system, smart meter
Procedia PDF Downloads 139
21394 Application of Deep Learning in Colorization of LiDAR-Derived Intensity Images
Authors: Edgardo V. Gubatanga Jr., Mark Joshua Salvacion
Abstract:
Most aerial LiDAR systems have accompanying aerial cameras in order to capture not only the terrain of the surveyed area but also its true-color appearance. However, the presence of atmospheric clouds, poor lighting conditions, and aerial camera problems during an aerial survey may cause an absence of aerial photographs. This leaves areas with terrain information but without aerial photographs. Intensity images can be derived from LiDAR data, but they are only grayscale images. A deep learning model is developed to create a complex function, in the form of a deep neural network, relating the pixel values of LiDAR-derived intensity images and true-color images. This function can then be used to predict the true-color images of a certain area from the intensity images in the LiDAR data. The predicted true-color images do not necessarily need to be accurate compared to the real world; they are only intended to look realistic so that they can be used as base maps.
Keywords: aerial LiDAR, colorization, deep learning, intensity images
Procedia PDF Downloads 166
21393 Effect of Omega-3 Supplementation on Stunted Egyptian Children at Risk of Environmental Enteric Dysfunction: An Interventional Study
Authors: Ghada M. El-Kassas, Maged A. El Wakeel, Salwa R. El-Zayat
Abstract:
Background: Environmental enteric dysfunction (EED) is an asymptomatic villous atrophy of the small bowel that is prevalent in the developing world and is associated with altered intestinal function and integrity. Evidence has suggested that supplementary omega-3 might ameliorate this damage by reducing gastrointestinal inflammation and may also benefit cognitive development. Objective: We tested whether omega-3 supplementation improves intestinal integrity, growth, and cognitive function in stunted children predicted to have EED. Methodology: The study enrolled 100 stunted Egyptian children aged 1-5 years and 100 age- and gender-matched normal children as controls. In the primary phase of the study, we assessed anthropometric measures and fecal markers such as myeloperoxidase (MPO), neopterin (NEO), and alpha-1-anti-trypsin (AAT) (as predictors of EED). Cognitive development was assessed (Bayley or Wechsler scores). Oral n-3 (omega-3) LC-PUFA at a dosage of 500 mg/d was given to all cases, who were followed up for 6 months, after which the secondary phase of the study repeated the previous clinical, laboratory and cognitive assessments. Results: Fecal inflammatory markers were significantly higher in cases than in controls. MPO, NEO and AAT showed a significant decline in cases at the end of the secondary phase (P < 0.001 for all). Omega-3 supplementation also resulted in a significant increase in mid-upper arm circumference (MUAC) (P < 0.01), weight-for-age z-score, and skinfold thicknesses (P < 0.05 for both). Cases showed significant improvement of cognitive function at phase 2 of the study. Conclusions: Omega-3 supplementation successfully improved the intestinal inflammatory state related to EED. Anthropometric and cognitive parameters also showed improvement with omega-3 supplementation.
Keywords: cognitive functions, EED, omega-3, stunting
Procedia PDF Downloads 150
21392 Early Detection of Neuropathy in Leprosy: Comparing Clinical Tests with Nerve Conduction Study
Authors: Suchana Marahatta, Sabina Bhattarai, Bishnu Hari Paudel, Dilip Thakur
Abstract:
Background: Every year thousands of patients develop nerve damage and disabilities as a result of leprosy, which can be prevented by early detection and treatment. Early detection and treatment of nerve function impairment is therefore of paramount importance in leprosy. Objectives: To assess the electrophysiological pattern of the peripheral nerves in leprosy patients and to compare it with clinical assessment tools. Materials and Methods: In this comparative cross-sectional study, 74 newly diagnosed leprosy patients without reaction were enrolled. They underwent thorough evaluation for peripheral nerve function impairment using clinical tests [i.e. nerve palpation (NP), monofilament (MF) testing and voluntary muscle testing (VMT)] and nerve conduction studies (NCS). The clinical findings were compared with those of the NCS using SPSS version 11.5. Results: NCS was impaired in 43.24% of leprosy patients at baseline. Among them, sensory NCS was impaired in more patients (32.4%) than motor NCS (20.3%). NP, MF, and VMT were impaired in 58.1%, 25.7%, and 9.4% of the patients, respectively. The maximum concordance of monofilament testing and sensory NCS was found for the sural nerve (14.7%). Likewise, the concordance of motor NP and motor NCS was maximal for the ulnar nerve (14.9%). When the individual parameters of the NCS were considered, amplitude was found to be the most frequently affected parameter for both sensory and motor NCS; it was impaired in 100% of cases with abnormal NCS findings. Conclusion: Since there was no acceptable concordance between NCS findings and clinical findings, NCS should be considered whenever feasible for early detection of neuropathy in leprosy. The amplitudes of both the sensory nerve action potential (SNAP) and the compound muscle action potential (CMAP) could be important determinants of abnormal NCS if supported by further studies.
Keywords: leprosy, nerve function impairment, neuropathy, nerve conduction study
Procedia PDF Downloads 319
21391 Practical Method for Failure Prediction of Mg Alloy Sheets during Warm Forming Processes
Authors: Sang-Woo Kim, Young-Seon Lee
Abstract:
An important concern in metal forming, even at elevated temperatures, is whether a desired deformation can be accomplished without any failure of the material. A detailed understanding of the critical condition for crack initiation provides not only the workability limit of a material but also a guideline for process design. This paper describes the use of ductile fracture criteria in conjunction with the finite element method (FEM) for predicting the onset of fracture in warm metal working processes of magnesium alloy sheets. Critical damage values for various ductile fracture criteria were determined from uniaxial tensile tests and were expressed as functions of strain rate and temperature. In order to find the best criterion for failure prediction, Erichsen cupping tests under isothermal conditions and FE simulations combined with ductile fracture criteria were carried out. Based on the plastic deformation histories obtained from the FE analyses of the Erichsen cupping tests and the critical damage value curves, the initiation time and location of fracture were predicted under a bi-axial tensile condition. The results were compared with experimental results, and the best criterion is recommended. In addition, the proposed methodology was used to predict the onset of fracture in non-isothermal deep drawing processes using an irregularly shaped blank, and the results were verified experimentally.
Keywords: magnesium, AZ31 alloy, ductile fracture, FEM, sheet forming, Erichsen cupping test
Procedia PDF Downloads 373
21390 Comparison of the Boundary Element Method and the Method of Fundamental Solutions for Analysis of Potential and Elasticity
Authors: S. Zenhari, M. R. Hematiyan, A. Khosravifard, M. R. Feizi
Abstract:
The boundary element method (BEM) and the method of fundamental solutions (MFS) are well-known fundamental-solution-based methods for solving a variety of problems. Both are boundary-type techniques and can provide accurate results. In comparison to the finite element method (FEM), which is a domain-type method, the BEM and the MFS need less manual effort to solve a problem. The aim of this study is to compare the accuracy and reliability of the BEM and the MFS. This comparison is made for 2D potential and elasticity problems with different boundary and loading conditions. In the comparisons, both convex and concave domains are considered. Both linear and quadratic elements are employed for the boundary element analysis of the examples. The discretization in the BEM, i.e., converting the boundary of the problem into boundary elements, is relatively simple; however, in the MFS, obtaining appropriate locations of the collocation and source points needs more attention to obtain reliable solutions. The results obtained from the presented examples show that both methods lead to accurate solutions for convex domains, whereas the BEM is more suitable than the MFS for concave domains.
Keywords: boundary element method, method of fundamental solutions, elasticity, potential problem, convex domain, concave domain
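As a hedged sketch of how the MFS places sources outside the domain and fits them at boundary collocation points, the snippet below solves a 2D Laplace (potential) problem on the unit disk using the logarithmic fundamental solution; the fictitious source radius and the point counts are arbitrary choices, which is exactly the kind of placement sensitivity the abstract notes for the MFS.

```python
import numpy as np

def mfs_laplace_disk(g, n=40, r_src=2.5):
    """Method of fundamental solutions for the Laplace equation on the unit disk.
    Source points sit on a fictitious circle of radius r_src outside the domain;
    coefficients are fitted so the potential matches boundary data g on |x| = 1."""
    th = 2 * np.pi * np.arange(n) / n
    bx = np.c_[np.cos(th), np.sin(th)]                            # collocation points
    sx = r_src * np.c_[np.cos(th + np.pi / n), np.sin(th + np.pi / n)]  # sources, offset
    # collocation matrix of fundamental solutions ln|x_i - s_j| (constant factor
    # absorbed into the coefficients)
    A = np.log(np.linalg.norm(bx[:, None, :] - sx[None, :, :], axis=2))
    coef, *_ = np.linalg.lstsq(A, g(bx[:, 0], bx[:, 1]), rcond=None)
    def u(x, y):
        r = np.hypot(x - sx[:, 0], y - sx[:, 1])
        return np.log(r) @ coef
    return u
```

With smooth boundary data the fit is typically very accurate inside the domain, yet the system matrix is ill-conditioned, so a least-squares solve (rather than a direct one) is the safer choice here.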
Procedia PDF Downloads 90
21389 Isolation, Identification and Characterization of the Bacteria and Yeast from the Fermented Stevia Extract
Authors: Asato Takaishi, Masashi Nasuhara, Ayuko Itsuki, Kenichi Suga
Abstract:
Stevia (Stevia rebaudiana Bertoni) is a composite plant native to Paraguay. Stevia sweetener is derived from a hot water extract of Stevia (Stevia extract), which has some effects such as histamine decomposition, an antioxidative effect, and a blood sugar level-lowering function. The steviol glycosides in the Stevia extract are considered to contribute to these effects. In addition, these effects increase with fermentation. However, fermentation of the Stevia extract takes a long time, and the fermentation liquid sometimes decays during the fermentation process because a natural fermentation method is used. The aim of this study is to perform the fermentation of Stevia extract in a shorter period and to produce the fermentation liquid in stable quality. From the natural fermentation liquid of Stevia extract, four strains of useful (good-taste) microorganisms were isolated using the dilution plate count method, and some of their properties were determined. The base sequences of 16S rDNA and 28S rDNA revealed three bacteria (two Lactobacillus sp. and one Microbacterium sp.) and one yeast (Issatchenkia sp.). This is consistent with reports that several kinds of lactic acid bacteria, such as Lactobacillus pentosus and Lactobacillus buchneri, have been isolated from Stevia leaves. Liquid chromatography/tandem mass spectrometry (LC/MS/MS) and high-performance liquid chromatography (HPLC) were used to determine the contents of steviol glycosides and neutral sugars. When these strains were cultured in the sterile Stevia extract, steviol and stevioside increased in the fermented Stevia extract. It is therefore suggested that rebaudioside A and the mixture of steviol glycosides in the Stevia extract were decomposed into stevioside and steviol by microbial metabolism.
Keywords: fermentation, lactobacillus, Stevia, steviol glycosides, yeast
Procedia PDF Downloads 564
21388 Cantilever Secant Pile Constructed in Sand: Numerical Comparative Study and Design Aids – Part II
Authors: Khaled R. Khater
Abstract:
All civil engineering projects include excavation work and therefore need some retaining structure. Cantilever secant pile walls are an economical supporting system for depths up to 5.0 m. The parameters controlling wall tip displacement are the focus of this paper. Two analysis techniques have been investigated and arbitrated: the conventional method and finite element analysis. Accordingly, two computer programs have been used, an Excel sheet and Plaxis-2D. Two soil models have been used throughout this study: the Mohr-Coulomb soil model and the isotropic hardening soil model. Two soil densities have been considered, i.e. loose and dense sand. Ten wall rigidities have been analyzed, covering the range from perfectly flexible to completely rigid walls. Three excavation depths, i.e. 3.0 m, 4.0 m and 5.0 m, were tested to cover the practical range of secant piles. This work provides useful hints about secant piles to assist designers and specification committees. Finite element analysis with isotropic hardening is recommended as the fair judge when two designs conflict. A rational procedure using empirical equations has been suggested to upgrade the conventional method to predict the wall tip displacement δ, and a reasonable limitation of δ as a function of the excavation depth h has been suggested as well. It has also been found that, beyond a certain penetration depth, any further increase does not positively affect the wall tip displacement, i.e. it is over-design and uneconomical.
Keywords: design aids, numerical analysis, secant pile, wall tip displacement
Procedia PDF Downloads 189
21387 Speech Disorders as Predictors of Social Participation of Children with Cerebral Palsy in the Primary Schools of the Czech Republic
Authors: Marija Zulić, Vanda Hájková, Nina Brkić–Jovanović, Srećko Potić, Sanja Tomić
Abstract:
The name cerebral palsy comes from the word cerebrum, which means the brain, and the word palsy, which means seizure, and essentially refers to a movement disorder. In the clinical picture of cerebral palsy, the basic neuromotor disorders are associated with various other disorders: behavioural, intellectual, speech and sensory disorders, epileptic seizures, and bone and joint deformities. Motor speech disorders are among the most common difficulties present in people with cerebral palsy. Social participation represents an interaction between an individual and their social environment. The quality of social participation of students with cerebral palsy at school is an important indicator of their successful participation in adulthood. One of the most important skills for undisturbed social participation is the ability to communicate well. The aim of the study was to determine the relation between the social participation of students with cerebral palsy and the presence of speech impairment in primary schools in the Czech Republic. The study was performed in the Czech Republic in mainstream schools and in schools established for pupils with special educational needs. We analysed 75 children with cerebral palsy aged between six and twelve years, attending up to the sixth grade, using the first and the third part of the School Function Assessment questionnaire as the main instrument. The other instrument used in the research is the Gross Motor Function Classification System, a five-level classification system that measures the degree of motor function of children and youth with cerebral palsy. Funding for this study was provided by the Grant Agency of Charles University in Prague.
Keywords: cerebral palsy, social participation, speech disorders, the Czech Republic, the School Function Assessment
Procedia PDF Downloads 285
21386 Decrease in Olfactory Cortex Volume and Alterations in Caspase Expression in the Olfactory Bulb in the Pathogenesis of Alzheimer’s Disease
Authors: Majed Al Otaibi, Melissa Lessard-Beaudoin, Amel Loudghi, Raphael Chouinard-Watkins, Melanie Plourde, Frederic Calon, C. Alexandre Castellano, Stephen Cunnane, Helene Payette, Pierrette Gaudreau, Denis Gris, Rona K. Graham
Abstract:
Introduction: Alzheimer's disease (AD) is a chronic disorder that affects millions of individuals worldwide. Symptoms include memory dysfunction as well as alterations in attention, planning, language and overall cognitive function. Olfactory dysfunction is a common symptom of several neurological disorders, including AD. Studying the mechanisms underlying the olfactory dysfunction may therefore lead to the discovery of potential biomarkers and/or treatments for neurodegenerative diseases. Objectives: To determine whether olfactory dysfunction predicts future cognitive impairment in the aging population, and to characterize the olfactory system in a murine model expressing a genetic risk factor for AD. Method: For the human study, quantitative olfactory tests (UPSIT and OMT) were administered to 93 subjects (aged 80 to 94 years) from the Quebec Longitudinal Study on Nutrition and Successful Aging (NuAge) cohort who agreed to participate in the ORCA secondary study. The telephone Modified Mini-Mental State examination (t-MMSE) was used to assess cognition, and an olfactory self-report was also collected. In a separate cohort, olfactory cortical volume was calculated from MRI scans of healthy old adults (n=25) and patients with AD (n=18) using the AAL single-subject atlas, performed with the PNEURO tool (PMOD 3.7). For the murine study, we are using Western blotting, RT-PCR and immunohistochemistry. Results: Human study: Based on the self-report, 81% of the participants claimed not to suffer from any problem with olfaction. However, based on the UPSIT, 94% of those subjects showed poor olfactory performance and different forms of microsmia. Moreover, the results confirm that olfactory function declines with age. We also detected a significant decrease in olfactory cortical volume in AD individuals compared to controls.
Murine study: Preliminary data demonstrate a significant decrease in expression levels of the proform of caspase-3 and the caspase substrate STK3 in the olfactory bulb of mice expressing human APOE4 compared with controls. In addition, there is a significant decrease in the expression levels of the caspase-9 proform and the caspase-8 active fragment. Analysis of the mature neuron marker NeuN shows decreased expression levels of both isoforms. The data also suggest that Iba-1 immunostaining is increased in the olfactory bulb of APOE4 mice compared to wild-type mice. Conclusions: The activation of caspase-3 may be the cause of the decreased levels of STK3 through caspase cleavage and may play a role in the inflammation observed. In the clinical study, our results suggest that seniors are unaware of their olfactory function status and that self-report is therefore not sufficient to measure olfaction in the elderly. Studying olfactory function and cognitive performance in the aging population will help to discover biomarkers for the early stages of AD.
Keywords: Alzheimer's disease, APOE4, cognition, caspase, brain atrophy, neurodegenerative, olfactory dysfunction
Procedia PDF Downloads 258
21385 A Comprehensive Method of Fault Detection and Isolation Based on Testability Modeling Data
Authors: Junyou Shi, Weiwei Cui
Abstract:
Testability modeling is a commonly used method in the testability design and analysis of systems. Testability modeling yields a dependency matrix, from which a quantitative evaluation of fault detection and isolation can be derived. From the dependency matrix we can also obtain the diagnosis tree, which provides the procedures for fault detection and isolation. In practice, however, the dependency matrix usually includes both built-in test (BIT) and manual test. BIT runs tests automatically and is not limited by the procedures, so the method above cannot deliver the most efficient diagnosis or exploit the advantages of BIT. A comprehensive method of fault detection and isolation is therefore proposed. This method combines the advantages of BIT and manual test by splitting the matrix. The result of the case study shows that the method is effective.
Keywords: fault detection, fault isolation, testability modeling, BIT
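The dependency-matrix procedure described above can be sketched in a few lines: each fault has a signature row over the available tests, and a fault is isolated when the observed test outcomes match its row. The matrix and faults below are hypothetical, purely for illustration, not from the paper.

```python
# A minimal sketch of dependency-matrix fault isolation, where
# D[i][j] = 1 if test j detects fault i (hypothetical data).
import numpy as np

def isolate(D, outcomes):
    """Return indices of faults whose test signature matches the outcomes."""
    return [i for i, row in enumerate(D) if np.array_equal(row, outcomes)]

D = np.array([
    [1, 0, 1],   # fault 0 is detected by tests 0 and 2
    [0, 1, 1],   # fault 1 is detected by tests 1 and 2
    [1, 1, 0],   # fault 2 is detected by tests 0 and 1
])
print(isolate(D, np.array([0, 1, 1])))  # only fault 1 matches this signature
```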
Procedia PDF Downloads 334
21384 Charge Trapping on a Single-Wall Carbon Nanotube Thin-Film Transistor with Several Electrode Metals for Memory Function Mimicking
Authors: Ameni Mahmoudi, Manel Troudi, Paolo Bondavalli, Nabil Sghaier
Abstract:
In this study, the charge storage on thin-film SWCNT transistors was investigated, and C-V hysteresis tests showed that interface charge-trapping effects dominate the memory window. Two electrode materials were utilized to demonstrate that selecting the appropriate metal electrode clearly improves the conductivity and, consequently, the SWCNT thin-film's memory effect. Because their work function is similar to that of thin-film carbon nanotubes, Ti contacts produce higher charge confinement and show greater charge storage than Pd contacts. For Pd-contact CNTFETs and CNTFETs with Ti electrodes, a sizable clockwise hysteresis window was seen in the dual-sweep cycle, with threshold voltage shifts of 11.52 V and 9.7 V, respectively. The SWCNT thin-film based transistor is expected to have significant trapping and detrapping of charges because of the large C-V hysteresis. We have found that the estimated stored charge density for CNTFETs with Ti contacts is approximately 4.01×10⁻² C·m⁻², which is nearly twice the charge density of the device with Pd contacts. We have shown that the amount of trapped charge can be changed by varying the sweep range or the Vgs sweep rate. We also investigated the variation of the flat-band voltage (V_FB) with time in order to determine the carrier retention period in CNTFETs with Ti and Pd electrodes. The outcome shows that the retention time of trapped charges is about 300 seconds, which is a crucial finding for memory function mimicking.
Keywords: charge storage, thin-film SWCNT based transistors, C-V hysteresis, memory effect, trapping and detrapping charges, stored charge density, carrier retention time
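As a rough plausibility check, a stored charge density can be estimated from a hysteresis window as σ = C_ox·ΔV. The sketch below is not from the paper: the gate-dielectric stack (≈10 nm of SiO₂) is an assumed value chosen for illustration, yet with the larger reported window it reproduces the order of magnitude of the reported 4.01×10⁻² C·m⁻².

```python
# Hedged sketch: stored charge density sigma = C_ox * dV_th.
# The dielectric parameters below are ASSUMPTIONS (illustrative ~10 nm SiO2),
# not values published in the abstract.
eps0, eps_r, t = 8.854e-12, 3.9, 10e-9   # F/m, SiO2 relative permittivity, m
C_ox = eps0 * eps_r / t                  # areal capacitance, F/m^2
dV_th = 11.52                            # V, larger reported hysteresis window
sigma = C_ox * dV_th                     # stored charge density, C/m^2
print(f"{sigma:.2e} C/m^2")              # on the order of 4e-2 C/m^2
```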
Procedia PDF Downloads 81
21383 A Study on the Functional Safety Analysis of Stage Control System Based on International Electrotechnical Commission 61508-2
Authors: Youn-Sung Kim, Hye-Mi Kim, Sang-Hoon Seo, Jaden Cha
Abstract:
The international standard IEC 61508 sets out a generic approach to all safety lifecycle activities for systems comprised of electrical/electronic/programmable electronic (E/E/PE) elements that are used to perform safety functions. The control unit in a stage control system is a safety-related facility that controls the state and speed of the running stage system, and it performs a safety-critical function within the stage control system. The control unit is part of the safety loop corresponding to IEC 61508 and is classified as the logic part of the safety loop. In this paper, we use FMEDA (Failure Modes, Effects and Diagnostic Analysis) to verify the fault tolerance methods and functional safety of the control unit. Moreover, we determine the SIL (Safety Integrity Level) for the control unit according to the safety requirements defined in IEC 61508-2, based on the functional safety analysis.
Keywords: safety function, failure mode effect, IEC 61508-2, diagnostic analysis, stage control system
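A core FMEDA quantity behind a SIL determination of this kind is the safe failure fraction (SFF). A minimal sketch with hypothetical failure rates (the abstract does not publish its λ values):

```python
# Hedged FMEDA sketch: safe failure fraction per IEC 61508-2, with
# HYPOTHETICAL per-category failure rates in FIT (1 FIT = 1e-9 /h).
def safe_failure_fraction(l_safe, l_dd, l_du):
    """SFF = (safe + dangerous-detected) / total failure rate."""
    return (l_safe + l_dd) / (l_safe + l_dd + l_du)

l_safe, l_dd, l_du = 500.0, 400.0, 100.0   # illustrative values only
sff = safe_failure_fraction(l_safe, l_dd, l_du)
# SFF of 90% -> per IEC 61508-2 route 1H, a type-B element with hardware
# fault tolerance 0 can then claim up to SIL 2.
print(f"SFF = {sff:.0%}")
```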
Procedia PDF Downloads 278
21382 Bridge Construction and Type of Bridges and Their Construction Methods
Authors: Mokhtar Nikgoo
Abstract:
Definition of bridge: A bridge is a structure that allows people to pass along a route connecting two points. There are many different types of bridges, each of which is designed to perform a specific function. This article introduces the concept, history, components, uses, types, construction methods, selection factors, damage factors, and principles of bridge maintenance. A bridge is a structure for crossing a passage such as water, a valley, or a road without blocking another path underneath. This structure makes it possible to pass obstacles that are difficult or impossible to cross otherwise. There are different designs for bridge construction, each of which is used for a particular function and condition. In the old definition, a bridge is an arch over a river, valley, or any type of passage that makes traffic possible. But today, in urban management, the bridge is considered a structure for crossing physical barriers, so that while using space (not just the surface of the earth), it can facilitate passage and access to places. The useful life of bridges may be between 30 and 80 years depending on the location and the materials used. But with proper maintenance and improvement, their life may extend to hundreds of years.
Keywords: bridge, road construction, surveying, transportation
Procedia PDF Downloads 512
21381 A Study of Numerical Reaction-Diffusion Systems on Closed Surfaces
Authors: Mei-Hsiu Chi, Jyh-Yang Wu, Sheng-Gwo Chen
Abstract:
Reaction-diffusion equations are important partial differential equations in mathematical biology, material science, physics, and so on. However, finding efficient numerical methods for reaction-diffusion systems on curved surfaces is still an important and difficult problem. The purpose of this paper is to present a convergent geometric method for solving the reaction-diffusion equations on closed surfaces by an O(r)-LTL configuration method. The O(r)-LTL configuration method, which combines the local tangential lifting technique and configuration equations, is an effective method for estimating differential quantities on curved surfaces. Since estimating the Laplace-Beltrami operator is an important task in solving reaction-diffusion equations on surfaces, we use the local tangential lifting method and a generalized finite difference method to approximate the Laplace-Beltrami operators, and we solve this reaction-diffusion system on closed surfaces. Our method is not only conceptually simple but also easy to implement.
Keywords: closed surfaces, high-order approaches, numerical solutions, reaction-diffusion systems
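For intuition, an explicit time step for a reaction-diffusion equation u_t = DΔu + f(u) can be sketched on a flat periodic grid; the paper's O(r)-LTL method replaces the 5-point Laplacian below with a Laplace-Beltrami estimate on the curved surface, but the time stepping is analogous. This is an illustrative simplification, not the authors' scheme.

```python
# Hedged sketch: explicit Euler for u_t = D * Laplacian(u) + f(u) on a
# flat periodic grid, with a logistic reaction term f(u) = u(1 - u).
import numpy as np

def step(u, D=0.1, dt=0.1, dx=1.0, f=lambda u: u * (1.0 - u)):
    # 5-point periodic Laplacian; stand-in for a Laplace-Beltrami estimate.
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2
    return u + dt * (D * lap + f(u))

u = np.zeros((32, 32))
u[16, 16] = 0.5          # small seed; the logistic term drives growth to 1
for _ in range(100):
    u = step(u)
print(0.0 <= u.min() and u.max() <= 1.0001)  # solution stays bounded
```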
Procedia PDF Downloads 376
21380 Bivariate Time-to-Event Analysis with Copula-Based Cox Regression
Authors: Duhania O. Mahara, Santi W. Purnami, Aulia N. Fitria, Merissa N. Z. Wirontono, Revina Musfiroh, Shofi Andari, Sagiran Sagiran, Estiana Khoirunnisa, Wahyudi Widada
Abstract:
The use of multiple time-to-event outcomes is common when assessing interventions in numerous disease areas. An individual might experience two different events, called bivariate time-to-event data; the events may be correlated because they come from the same subject and are also influenced by individual characteristics. The bivariate time-to-event case can be handled with a copula-based bivariate Cox survival model, using the Clayton and Frank copulas to analyze the dependence structure of the two events as well as the covariate effects. Applying this method to model recurrent infection events of hemodialysis insertion in chronic kidney disease (CKD) patients, we find from the AIC and BIC values that the Clayton copula model is the best model, with Kendall's tau τ = 0.02.
Keywords: bivariate Cox, bivariate event, copula function, survival copula
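For the Clayton family, the dependence parameter θ and Kendall's tau are linked by τ = θ/(θ+2), so the reported τ = 0.02 corresponds to θ ≈ 0.04 (very weak dependence). A minimal sketch, including gamma-frailty (Marshall-Olkin) sampling of Clayton pairs; this is an illustration, not the authors' code.

```python
# Hedged sketch: Clayton-copula dependence parameter and sampling.
import random

def theta_from_tau(tau):
    """Invert tau = theta / (theta + 2) for the Clayton copula."""
    return 2.0 * tau / (1.0 - tau)

def clayton_pair(theta, rng=random):
    """Draw one (u, v) pair with uniform margins via the gamma frailty."""
    w = rng.gammavariate(1.0 / theta, 1.0)           # shared frailty
    e1, e2 = rng.expovariate(1.0), rng.expovariate(1.0)
    return ((1 + e1 / w) ** (-1 / theta),
            (1 + e2 / w) ** (-1 / theta))

theta = theta_from_tau(0.02)   # the paper's tau = 0.02
print(round(theta, 3))         # about 0.041
u, v = clayton_pair(theta)
```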
Procedia PDF Downloads 82
21379 A Simple Autonomous Hovering and Operating Control of Multicopter Using Only Web Camera
Authors: Kazuya Sato, Toru Kasahara, Junji Kuroda
Abstract:
In this paper, an autonomous hovering control method for a multicopter using only a Web camera is proposed. Recently, various control methods for the autonomous flight of multicopters have been proposed. However, the previously proposed methods often use a motion capture system (e.g., OptiTrack) and a laser range finder to measure the position and posture of the multicopter. To achieve autonomous flight control of a multicopter with simple equipment, we propose an autonomous flight control method using an AR marker and a Web camera. The AR marker provides the position of the multicopter in three-dimensional Cartesian coordinates, and this position is connected with the aileron, elevator, and throttle operations. A simple PID control method is applied to each operation, and the controller gains are adjusted. Experimental results are given to show the effectiveness of our proposed method. Moreover, another simple operation method for autonomous flight control of a multicopter is also proposed.
Keywords: autonomous hovering control, multicopter, Web camera, operation
Procedia PDF Downloads 562
21378 Neural Network Based Compressor Flow Estimator in an Aircraft Vapor Cycle System
Authors: Justin Reverdi, Sixin Zhang, Serge Gratton, Said Aoues, Thomas Pellegrini
Abstract:
In vapor cycle systems, the flow sensor plays a key role in various monitoring and control purposes. However, physical sensors can be expensive, inaccurate, heavy, cumbersome, or highly sensitive to vibrations, which is especially problematic when embedded in an aircraft. The design of a virtual sensor based on other standard sensors is a good alternative. In this paper, a data-driven model using a convolutional neural network is proposed to estimate the flow of the compressor. To fit the model to our dataset, we tested different loss functions. We show in our application that a dynamic-time-warping-based loss function called DILATE leads to better dynamical performance than the vanilla mean squared error (MSE) loss function. DILATE allows one to choose a trade-off between static and dynamic performance.
Keywords: deep learning, dynamic time warping, vapor cycle system, virtual sensor
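For intuition about the alignment idea behind DILATE, the classic (hard) dynamic-time-warping distance can be written as a small dynamic program; DILATE itself is a differentiable variant with an additional temporal term, so this is only an illustration, not the loss used in the paper.

```python
# Hedged sketch: classic dynamic-time-warping distance between two series.
def dtw(a, b):
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (a[i - 1] - b[j - 1]) ** 2          # local squared distance
            cost[i][j] = d + min(cost[i - 1][j],     # insertion
                                 cost[i][j - 1],     # deletion
                                 cost[i - 1][j - 1]) # match
    return cost[n][m]

print(dtw([0, 0, 1, 2], [0, 1, 2]))  # 0.0: same shape, shifted in time
```

Unlike MSE, which penalizes any time shift point by point, DTW aligns the series before comparing amplitudes, which is the property DILATE exploits.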
Procedia PDF Downloads 146
21377 Electrical Conductivity as Pedotransfer Function in the Determination of Sodium Adsorption Ratio in Soil System in Managing Micro Level Farming Practices in India: An Effective Low Cost Technology
Authors: Usha Loganathan, Haresh Pandya
Abstract:
The analysis and correlation of soil properties represent an important starting point for precision agriculture, which is currently promoted and implemented in the developed world. Establishing relationships among indices of soil salinity has always been a challenging task in salt-affected soils, necessitating unique approaches for their reclamation and management to sustain the long-term productivity of soil. Soil salinity indices such as electrical conductivity (EC) and sodium adsorption ratio (SAR) are normally used to characterize soils as either sodic or saline-sodic. Currently, the determination of the soil sodium adsorption ratio is the more accepted and reliable measure of soil salinity. However, it involves arduous and protracted laboratory investigations, which demands evolving new and economical methods to determine SAR based on a simple soil salinity index. A linear regression model to predict soil SAR from soil electrical conductivity has been developed and presented in this paper, according to which soil SAR can be worked out as a pedotransfer function of soil EC. The present study was carried out in Orathupalayam (11.09-11.11 N latitude and 74.54-77.59 E longitude) in the vicinity of the Orathupalayam Reservoir of the Noyyal River Basin, India, over a period of 3 consecutive years from September 2013 through February 2016, in different locations chosen randomly across different seasons. The research findings are discussed in the light of micro-level farming practices in India and recommend the determination of SAR as a low-cost technology aiding the effective management of salt-affected agricultural land.
Keywords: electrical conductivity, Orathupalayam, pedotransfer function, sodium adsorption ratio
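The pedotransfer idea reduces to fitting SAR = a + b·EC by ordinary least squares. A minimal sketch with made-up (EC, SAR) pairs, not the study's data:

```python
# Hedged sketch: ordinary least squares fit SAR = a + b * EC.
# The (EC, SAR) pairs are HYPOTHETICAL, for illustration only.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b              # intercept a, slope b

ec = [1.0, 2.0, 4.0, 8.0]              # dS/m, hypothetical
sar = [3.1, 5.0, 9.2, 17.0]            # hypothetical
a, b = fit_line(ec, sar)
print(f"SAR = {a:.2f} + {b:.2f} * EC")
```

Once fitted on laboratory pairs, the line lets field workers estimate SAR from a quick EC reading instead of a full laboratory analysis.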
Procedia PDF Downloads 254
21376 Regularizing Software for Aerosol Particles
Authors: Christine Böckmann, Julia Rosemann
Abstract:
We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides us with comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius as well as volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. Single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of using truncated singular value decomposition as the regularization method. This method was adapted to the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique, since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors will most often be hugely amplified during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in the next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration.
Here, the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 µm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts 1.5 and 1.6, in all modes the accuracy limit of ±0.03 is achieved. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies.
Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization
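The truncated SVD idea can be sketched in a few lines: small singular values are dropped before inversion so that measurement noise in the corresponding modes is not amplified. This is a generic illustration, not the hybrid triple-parameter scheme of the paper.

```python
# Hedged sketch: truncated SVD (TSVD) regularization for an ill-posed
# linear system A x = y. Keeping only the k largest singular values
# prevents noise amplification by tiny singular values.
import numpy as np

def tsvd_solve(A, y, k):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]                 # truncate the small modes
    return Vt.T @ (s_inv * (U.T @ y))

A = np.array([[1.0, 0.0], [0.0, 1e-8]])    # nearly singular, illustrative
y = np.array([1.0, 1e-4])                  # second entry is mostly noise
print(tsvd_solve(A, y, k=1))               # [1. 0.]: noisy mode suppressed
```

A naive inverse would return an x-component of 1e4 from the noise in y; the truncation level k plays exactly the role of a regularization parameter.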
Procedia PDF Downloads 343
21375 Application of Double Side Approach Method on Super Elliptical Winkler Plate
Authors: Hsiang-Wen Tang, Cheng-Ying Lo
Abstract:
In this study, the static behavior of a super elliptical Winkler plate is analyzed by applying the double side approach method. The lack of information about super elliptical Winkler plates is the motivation for this study, and we use the double side approach method to solve this problem because of its superior ability to efficiently treat problems with complex boundary shapes. The double side approach method has the advantages of high accuracy, an easy calculation procedure, and a small calculation load. Most importantly, it can give the error bound of the approximate solution. The numerical results not only show that the double side approach method works well on this problem but also provide knowledge of the static behavior of super elliptical Winkler plates in practical use.
Keywords: super elliptical Winkler plate, double side approach method, error bound, mechanics
Procedia PDF Downloads 356
21374 Indoor Air Pollution and Reduced Lung Function in Biomass Exposed Women: A Cross Sectional Study in Pune District, India
Authors: Rasmila Kawan, Sanjay Juvekar, Sandeep Salvi, Gufran Beig, Rainer Sauerborn
Abstract:
Background: Indoor air pollution, especially from the use of biomass fuels, remains a potentially large global health threat. The inefficient use of such fuels in poorly ventilated conditions results in high levels of indoor air pollution, most seriously affecting women and young children. Objectives: The main aim of this study was to measure and compare the lung function of women exposed to biomass fuels and to LPG fuels, and to relate it to the indoor emissions measured using a structured questionnaire, a spirometer, and filter-based low-volume samplers, respectively. Methodology: This cross-sectional comparative study was conducted among women (aged > 18 years) living in rural villages of Pune district who had not been diagnosed with chronic pulmonary disease or any other respiratory disease and had been using biomass fuels or LPG for cooking for a minimum period of 5 years. Data collection was done from April to June 2017, in the dry season. Spirometry was performed using the portable, battery-operated ultrasound Easy One spirometer (Spiro bank II, NDD Medical Technologies, Zurich, Switzerland) to determine lung function in terms of forced expiratory volume. The primary outcome variable was forced expiratory volume in 1 second (FEV1). The secondary outcome was chronic obstructive pulmonary disease (post-bronchodilator FEV1/forced vital capacity (FVC) < 70%) as defined by the Global Initiative for Chronic Obstructive Lung Disease. Potential confounders such as age, height, weight, smoking history, occupation, and educational status were considered. Results: Preliminary results showed that the women using biomass fuels (FEV1/FVC = 85% ± 5.13) had comparatively reduced lung function relative to the LPG users (FEV1/FVC = 86.40% ± 5.32). The mean PM 2.5 mass concentration was 274.34 ± 314.90 in the biomass users' kitchens and 85.04 ± 97.82 in the LPG users' kitchens.
The black carbon amount was found to be higher for the biomass users (46.71 ± 46.59 µg/m³) than for the LPG users (11.08 ± 22.97 µg/m³). Most of the houses used a separate kitchen. Almost all the houses that used a clean fuel like LPG still had a minimum amount of particulate matter 2.5, which might be due to background pollution and cross ventilation from the houses using biomass fuels. Conclusions: There is therefore an urgent need to adopt various strategies to improve indoor air quality. Data on the current state of climate-active pollutant emissions from different stove designs are lacking, and the major deficiencies need to be identified and tackled. Moreover, advancement in research tools, measuring techniques in particular, is critical for researchers in developing countries to improve their capability to study the emissions, addressing growing climate change and public health concerns.
Keywords: black carbon, biomass fuels, indoor air pollution, lung function, particulate matter
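The obstruction criterion used in the study (post-bronchodilator FEV1/FVC < 70%) can be sketched directly; the volumes below are illustrative, not participant data.

```python
# Hedged sketch of the fixed-ratio airflow-obstruction criterion:
# post-bronchodilator FEV1/FVC < 0.70. Volumes (litres) are ILLUSTRATIVE.
def fev1_fvc_ratio(fev1_l, fvc_l):
    return fev1_l / fvc_l

def obstructed(fev1_l, fvc_l, cutoff=0.70):
    return fev1_fvc_ratio(fev1_l, fvc_l) < cutoff

print(round(fev1_fvc_ratio(2.1, 2.5), 2), obstructed(2.1, 2.5))  # 0.84 False
print(round(fev1_fvc_ratio(1.6, 2.5), 2), obstructed(1.6, 2.5))  # 0.64 True
```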
Procedia PDF Downloads 174