Search results for: order driven market

9633 Fabrication of Cylindrical Silicon Nanowire-Embedded Field Effect Transistor Using Al2O3 Transfer Layer

Authors: Sang Hoon Lee, Tae Il Lee, Su Jeong Lee, Jae Min Myoung

Abstract:

In order to manufacture a short-gap single Si nanowire (NW) field effect transistor (FET) by an imprinting and transfer method, we introduce a process using an Al2O3 sacrificial layer. The diameters of the cylindrical Si NWs addressed between Au electrodes by the dielectrophoretic (DEP) alignment method are controlled to 106, 128, and 148 nm. After the imprinting and transfer process, the cylindrical Si NW is embedded in a PVP adhesive and dielectric layer. By curing the transferred cylindrical Si NW and Au electrodes on a PVP-coated p++ Si substrate with 200 nm-thick SiO2, the fabrication of a 3 μm-gap Si NW FET was completed. As the diameter of the embedded Si NW increases, the mobility of the FET increases from 80.51 to 121.24 cm²/V·s and the threshold voltage shifts from -7.17 to -2.44 V because the surface-to-volume ratio is reduced.

Keywords: Al2O3 sacrificial transfer layer, cylindrical silicon nanowires, dielectrophoretic alignment, field effect transistor

Procedia PDF Downloads 443
9632 Comparative Analysis of Spectral Estimation Methods for Brain-Computer Interfaces

Authors: Rafik Djemili, Hocine Bourouba, M. C. Amara Korba

Abstract:

In this paper, we present a method for classifying EEG signals for Brain-Computer Interfaces (BCI). EEG signals are first processed by means of spectral estimation methods to derive reliable features before the classification step. The spectral estimation methods used are the standard periodogram and the periodogram calculated by the Welch method; both are compared with Logarithm of Band Power (logBP) features. In the proposed method, we apply Linear Discriminant Analysis (LDA) followed by a Support Vector Machine (SVM). The classification accuracy reached is as high as 85%, which demonstrates the effectiveness of spectral methods for classifying EEG signals in BCI applications.
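
As a rough illustration of the kind of pipeline described (Welch periodogram features followed by LDA and an SVM), the minimal Python sketch below runs on synthetic epochs; the sampling rate, frequency band, and data are assumptions, not the paper's recordings.

```python
import numpy as np
from scipy.signal import welch, periodogram
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def psd_features(trials, fs=250, band=(8.0, 30.0), use_welch=True):
    """Log band-power features per channel from EEG epochs shaped (n_trials, n_channels, n_samples)."""
    feats = []
    for trial in trials:
        f, pxx = welch(trial, fs=fs, nperseg=fs) if use_welch else periodogram(trial, fs=fs)
        mask = (f >= band[0]) & (f <= band[1])
        feats.append(np.log(pxx[:, mask].mean(axis=1)))  # mean mu/beta-band power, log-scaled
    return np.array(feats)

# Synthetic two-class motor-imagery stand-in: class 1 has a stronger 12 Hz rhythm on channel 0
rng = np.random.default_rng(0)
t = np.arange(500) / 250.0
y = np.repeat([0, 1], 40)
X = rng.standard_normal((80, 3, 500))
X[y == 1, 0, :] += 2.0 * np.sin(2 * np.pi * 12 * t)

clf = make_pipeline(LinearDiscriminantAnalysis(), SVC(kernel="rbf"))  # LDA projection, then SVM
print(f"5-fold CV accuracy: {cross_val_score(clf, psd_features(X), y, cv=5).mean():.2f}")
```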

Keywords: brain-computer interface, motor imagery, electroencephalogram, linear discriminant analysis, support vector machine

Procedia PDF Downloads 487
9631 Optimal Maintenance and Improvement Policies in Water Distribution System: Markov Decision Process Approach

Authors: Jong Woo Kim, Go Bong Choi, Sang Hwan Son, Dae Shik Kim, Jung Chul Suh, Jong Min Lee

Abstract:

The Markov Decision Process (MDP) based methodology is implemented in order to establish the optimal schedule which minimizes the cost. Formulation of MDP problem is presented using the information about the current state of pipe, improvement cost, failure cost and pipe deterioration model. The objective function and detailed algorithm of dynamic programming (DP) are modified due to the difficulty of implementing the conventional DP approaches. The optimal schedule derived from suggested model is compared to several policies via Monte Carlo simulation. Validity of the solution and improvement in computational time are proved.
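
The abstract does not reproduce the transition or cost data, so the following value-iteration sketch only illustrates the MDP machinery for a single pipe; the deterioration probabilities, failure cost, and replacement cost are made-up numbers.

```python
import numpy as np

# Illustrative pipe-condition MDP: states 0 (new) .. 3 (failed); all numbers are assumptions.
P = {  # P[action][s, s'] = transition probability
    "nothing": np.array([[0.8, 0.2, 0.0, 0.0],
                         [0.0, 0.7, 0.3, 0.0],
                         [0.0, 0.0, 0.6, 0.4],
                         [0.0, 0.0, 0.0, 1.0]]),
    "replace": np.tile([1.0, 0.0, 0.0, 0.0], (4, 1)),   # renewal back to the new state
}
cost = {"nothing": np.array([0.0, 0.0, 0.0, 500.0]),    # failure cost while in the failed state
        "replace": np.full(4, 120.0)}                   # improvement (replacement) cost
gamma = 0.95                                            # discount factor

V = np.zeros(4)
for _ in range(1000):                                   # value iteration
    Q = {a: cost[a] + gamma * P[a] @ V for a in P}
    V_new = np.minimum.reduce(list(Q.values()))
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
policy = {s: min(P, key=lambda a: Q[a][s]) for s in range(4)}
print(V.round(1), policy)
```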

Keywords: Markov decision processes, dynamic programming, Monte Carlo simulation, periodic replacement, Weibull distribution

Procedia PDF Downloads 410
9630 Effects of Some Fungicides on Mycelial Growth of Fusarium spp.

Authors: M. Djekoun, H. Berrebah, M. R. Djebar

Abstract:

Fusarium wilt is a destructive disease of small-grain cereal crops. It affects not only yield but also crop quality, and the resulting economic losses are often very heavy. Chemical control is currently one of the most effective ways to fight against these diseases. In this study, the efficacy of three fungicides (tebuconazole, thiram, and a fludioxonil-difenoconazole mixture) was tested in vitro on phytopathogenic Fusarium spp. isolated from wheat seeds. The active ingredients were tested at different concentrations: 0.06, 1.39, 2.79, 5.58, and 11.16 mg/l for tebuconazole; 0.035, 0.052, 0.105, 0.21, and 0.42 mg/l for thiram; and finally, for the fludioxonil-difenoconazole mixture, four concentrations were tested: 0.05, 0.1, 0.5 and 1 mg/l. Toxicity responses were expressed as the effective concentration that inhibits mycelial growth by 50% (EC50). Of the three selected fungicides, thiram proved to be the most effective, with an EC50 value on the order of 0.15 mg/l, followed by the fludioxonil-difenoconazole mixture with 0.27 mg/l and finally tebuconazole with a value of 3.79 mg/l.
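
EC50 values of this kind are usually obtained by fitting a dose-response model to percent inhibition of mycelial growth; a minimal sketch with invented thiram data points (not the paper's measurements) follows.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(c, ec50, slope):
    """Percent inhibition of mycelial growth versus fungicide concentration (mg/l)."""
    return 100.0 / (1.0 + (ec50 / c) ** slope)

# Illustrative thiram data (concentration in mg/l, % inhibition); not the paper's raw values
conc = np.array([0.035, 0.052, 0.105, 0.21, 0.42])
inhib = np.array([18.0, 30.0, 44.0, 58.0, 75.0])

(ec50, slope), _ = curve_fit(log_logistic, conc, inhib, p0=[0.1, 1.0])
print(f"EC50 ~ {ec50:.2f} mg/l, slope ~ {slope:.2f}")
```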

Keywords: Fusarium spp., thiram, tebuconazole, fludioxonil, difenoconazole, percentage of inhibition, EC50

Procedia PDF Downloads 352
9629 Aerobic Bioprocess Control Using Artificial Intelligence Techniques

Authors: M. Caramihai, Irina Severin

Abstract:

This paper deals with the design of an intelligent control structure for a bioprocess of Hansenula polymorpha yeast cultivation. The objective of the process control is to produce biomass in a desired physiological state. The work demonstrates that the designed Hybrid Control Techniques (HCT) are able to recognize specific bioprocess evolution trajectories using neural networks trained specifically for this purpose, in order to estimate the model parameters and to adjust the overall bioprocess evolution through an expert system and a fuzzy structure. The design of the control algorithm as well as its tuning through realistic simulations is presented. Taking into consideration the synergism of different paradigms such as fuzzy logic, neural networks, and symbolic artificial intelligence (AI), we present a complete, working intelligent control architecture with application in bioprocess control.

Keywords: bioprocess, intelligent control, neural nets, fuzzy structure, hybrid techniques

Procedia PDF Downloads 398
9628 A New Mathematical Method for Heart Attack Forecasting

Authors: Razi Khalafi

Abstract:

Myocardial Infarction (MI) or acute Myocardial Infarction (AMI), commonly known as a heart attack, occurs when blood flow stops to part of the heart, causing damage to the heart muscle. An ECG can often show evidence of a previous heart attack or one that is in progress. The patterns on the ECG may indicate which part of the heart has been damaged, as well as the extent of the damage. In chaos theory, the correlation dimension is a measure of the dimensionality of the space occupied by a set of random points, often referred to as a type of fractal dimension. In this research, by considering the ECG signal as a random walk, we work on forecasting an oncoming heart attack by analysing ECG signals using the correlation dimension. In order to test the model, a set of ECG signals from patients before and after heart attack was used, and the ability of the model to forecast the behaviour of these signals was checked. Results show that this methodology can forecast the ECG, and accordingly the heart attack, with high accuracy.
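
A hedged sketch of a Grassberger-Procaccia correlation-dimension estimate on a delay-embedded series; the embedding parameters and the random-walk surrogate standing in for a normalized ECG segment are assumptions.

```python
import numpy as np

def correlation_dimension(x, m=5, tau=8, radii=np.logspace(-2, 0, 12)):
    """Grassberger-Procaccia estimate for a delay-embedded scalar series x."""
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau: i * tau + n] for i in range(m)])   # delay embedding
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]                                       # pairwise distances
    C = np.array([(d < r).mean() for r in radii])                        # correlation integrals C(r)
    keep = C > 0
    slope, _ = np.polyfit(np.log(radii[keep]), np.log(C[keep]), 1)       # D2 ~ d ln C / d ln r
    return slope

rng = np.random.default_rng(0)
sig = np.cumsum(rng.standard_normal(1000))     # random-walk surrogate for an ECG segment
sig = (sig - sig.mean()) / sig.std()
print(f"estimated correlation dimension: {correlation_dimension(sig):.2f}")
```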

Keywords: heart attack, ECG, random walk, correlation dimension, forecasting

Procedia PDF Downloads 488
9627 Investigation on the Behavior of Conventional Reinforced Coupling Beams

Authors: Akash K. Walunj, Dipendu Bhunia, Samarth Gupta, Prabhat Gupta

Abstract:

Coupled shear walls consist of two shear walls connected intermittently by beams along the height. The behavior of coupled shear walls is mainly governed by the coupling beams. The coupling beams are designed for ductile inelastic behavior in order to dissipate energy. The base of the shear walls may be designed for elastic or ductile inelastic behavior. The amount of energy dissipation depends on the yield moment capacity and plastic rotation capacity of the coupling beams. In this paper, an analytical model of the coupling beam was developed to calculate the rotations and moment capacities of a coupling beam with conventional reinforcement.

Keywords: design studies, computational model(s), case study/studies, modelling, coupling beam

Procedia PDF Downloads 460
9626 Pollutant Dispersion in Coastal Waters

Authors: Sonia Ben Hamza, Sabra Habli, Nejla Mahjoub Saïd, Hervé Bournot, Georges Le Palec

Abstract:

This paper sheds light on the effect of point-source pollution on streams, stemming from intentional or accidental releases. The consequences of such contamination on ecosystems are very serious. Accordingly, effective tools are in high demand that enable us to track the progression of a pollutant accurately and to anticipate the measures to be applied in order to limit the degradation of the surrounding environment. In this context, we model the pollutant dispersion of a free-surface flow ejected by an outfall sewer of an urban sewerage network into coastal water, taking into account the influence of climatic parameters on the spread of the pollutant. Numerical results showed that pollutant dispersion is mainly due to the presence of vortices and turbulence. Hence, it was found that the pollutant spread in seawater is strongly correlated with the climatic conditions in this region.

Keywords: coastal waters, numerical simulation, pollutant dispersion, turbulent flows

Procedia PDF Downloads 498
9625 Effect of Hydrostatic Stress on Yield Behavior of the High Density Polyethylene

Authors: Kamel Hachour, Lydia Sadeg, Djamel Sersab, Tassadit Bellahcen

Abstract:

The hydrostatic stress is, for polymers, a significant parameter that affects the yield behavior of these materials. In this work, we investigate the influence of this parameter on the yield behavior of high density polyethylene (HDPE). Tests on specimens with diverse geometries are described in this paper. Uniaxial tests were performed: tension on notched round bar specimens with different curvature radii, compression on cylindrical specimens, and simple shear on parallelepiped specimens. Biaxial tests with various combinations of tensile/compressive and shear loading on butterfly specimens were also carried out in order to determine the hydrostatic stress for different loading states. The experimental results show that the yield stress is strongly affected by the hydrostatic stress developed in the material during loading.
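
For reference, the hydrostatic stress is one third of the trace of the stress tensor; the sketch below splits a stress state into hydrostatic and von Mises parts and applies an illustrative pressure-modified yield criterion, with the shear yield stress and pressure-sensitivity coefficient as assumptions rather than measured HDPE values.

```python
import numpy as np

def hydrostatic_and_vonmises(sigma):
    """Split a 3x3 stress tensor (MPa) into hydrostatic stress and von Mises equivalent stress."""
    sigma = np.asarray(sigma, dtype=float)
    sig_h = np.trace(sigma) / 3.0                 # hydrostatic (mean) stress
    s = sigma - sig_h * np.eye(3)                 # deviatoric part
    return sig_h, np.sqrt(1.5 * np.sum(s * s))    # von Mises equivalent stress

def pressure_modified_yield(sig_h, tau0=13.0, mu=0.1):
    """Illustrative criterion: yielding when sig_eq >= sqrt(3) * (tau0 - mu * sig_h)."""
    return np.sqrt(3.0) * (tau0 - mu * sig_h)

sig = np.diag([25.0, 0.0, 0.0])                   # uniaxial tension at 25 MPa (illustrative)
sig_h, sig_eq = hydrostatic_and_vonmises(sig)
print(f"sig_h = {sig_h:.2f} MPa, sig_eq = {sig_eq:.2f} MPa, "
      f"yield threshold ~ {pressure_modified_yield(sig_h):.2f} MPa")
```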

Keywords: biaxial tests, HDPE, hydrostatic stress, yield behavior

Procedia PDF Downloads 375
9624 Measuring Biobased Content of Building Materials Using Carbon-14 Testing

Authors: Haley Gershon

Abstract:

The transition from using fossil fuel-based building material to formulating eco-friendly and biobased building materials plays a key role in sustainable building. The growing demand on a global level for biobased materials in the building and construction industries heightens the importance of carbon-14 testing, an analytical method used to determine the percentage of biobased content that comprises a material’s ingredients. This presentation will focus on the use of carbon-14 analysis within the building materials sector. Carbon-14, also known as radiocarbon, is a weakly radioactive isotope present in all living organisms. Any fossil material older than 50,000 years will not contain any carbon-14 content. The radiocarbon method is thus used to determine the amount of carbon-14 content present in a given sample. Carbon-14 testing is performed according to ASTM D6866, a standard test method developed specifically for biobased content determination of material in solid, liquid, or gaseous form, which requires radiocarbon dating. Samples are combusted and converted into a solid graphite form and then pressed onto a metal disc and mounted onto a wheel of an accelerator mass spectrometer (AMS) machine for the analysis. The AMS instrument is used in order to count the amount of carbon-14 present. By submitting samples for carbon-14 analysis, manufacturers of building materials can confirm the biobased content of ingredients used. Biobased testing through carbon-14 analysis reports results as percent biobased content, indicating the percentage of ingredients coming from biomass sourced carbon versus fossil carbon. The analysis is performed according to standardized methods such as ASTM D6866, ISO 16620, and EN 16640. Products 100% sourced from plants, animals, or microbiological material are therefore 100% biobased, while products sourced only from fossil fuel material are 0% biobased. Any result in between 0% and 100% biobased indicates that there is a mixture of both biomass-derived and fossil fuel-derived sources. Furthermore, biobased testing for building materials allows manufacturers to submit eligible material for certification and eco-label programs such as the United States Department of Agriculture (USDA) BioPreferred Program. This program includes a voluntary labeling initiative for biobased products, in which companies may apply to receive and display the USDA Certified Biobased Product label, stating third-party verification and displaying a product’s percentage of biobased content. The USDA program includes a specific category for Building Materials. In order to qualify for the biobased certification under this product category, examples of product criteria that must be met include minimum 62% biobased content for wall coverings, minimum 25% biobased content for lumber, and a minimum 91% biobased content for floor coverings (non-carpet). As a result, consumers can easily identify plant-based products in the marketplace.
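
As a rough illustration of the percent-biobased arithmetic (not the full ASTM D6866 procedure), the toy calculation below assumes fossil carbon contributes essentially no carbon-14 and uses an assumed atmospheric reference of 100 pMC.

```python
def percent_biobased(pmc_sample, atmospheric_reference=100.0):
    """Toy ASTM D6866-style estimate: percent modern carbon (pMC) measured by AMS,
    scaled by an assumed atmospheric reference value (fossil carbon ~ 0 pMC)."""
    return 100.0 * pmc_sample / atmospheric_reference

# A wall covering measuring 65 pMC (illustrative) against the assumed 100 pMC reference
print(f"biobased content ~ {percent_biobased(65.0):.0f} %")  # exceeds the 62 % wall-covering threshold
```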

Keywords: carbon-14 testing, biobased, biobased content, radiocarbon dating, accelerator mass spectrometry, AMS, materials

Procedia PDF Downloads 150
9623 Hybrid Solutions in Physicochemical Processes for the Removal of Turbidity in Andean Reservoirs

Authors: María Cárdenas Gaudry, Gonzalo Ramces Fano Miranda

Abstract:

Sediment removal is very important in the purification of water, not only for reasons of visual perception but also because of its association with odor and taste problems. The Cuchoquesera reservoir, which is in the Andean region of Ayacucho (Peru) at an altitude of 3,740 meters above sea level, visually presents suspended particles and organic impurities, indicating that its water is of dubious quality and cannot be assumed suitable for direct human consumption. In order to quantify the degree of impurities, water quality monitoring was carried out from February to August 2018, in which four sampling stations were established in the reservoir. The selected measured parameters were electrical conductivity, total dissolved solids, pH, color, turbidity, and sludge volume. The indicators of the studied parameters exceed the permissible limits, except for electrical conductivity (190 μS/cm) and total dissolved solids (255 mg/L). In this investigation, the best combination and the optimal doses of reagents that allow the removal of sediments from the waters of the Cuchoquesera reservoir were determined through the physicochemical process of coagulation-flocculation. In order to improve this process during the rainy season, six combinations of reagents were evaluated, made up of three coagulants (ferric chloride, ferrous sulfate, and aluminum sulfate) and two natural flocculants: prickly pear powder (Opuntia ficus-indica) and tara gum (Caesalpinia spinoza). For each combination of reagents, jar tests were developed following the central composite experimental design (CCED), where the design factors were the doses of coagulant and flocculant and the initial turbidity. The results of the jar tests were fitted to mathematical models, which showed that treating water from the Cuchoquesera reservoir with a turbidity of 150 NTU and a color of 137 U Pt-Co requires 27.9 mg/L of the coagulant aluminum sulfate with 3 mg/L of the natural tara gum flocculant to produce purified water with a turbidity of 1.7 NTU and an apparent color of 3.2 U Pt-Co. The estimated cost of the doses of coagulant and flocculant found was 0.22 USD/m³. This is how "grey-green" technologies can be combined in nature-based solutions for water treatment, in this case to achieve potability, making it more sustainable, especially economically, if green technology is available at the site where the nature-based hybrid solution is applied. This research demonstrates the compatibility of natural coagulants/flocculants with other treatment technologies in an integrated/hybrid treatment process, such as the possibility of hybridizing natural coagulants with other types of coagulants.
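
As a sketch of how CCD jar-test results can be fitted to a quadratic response surface, the snippet below uses invented jar-test rows (not the paper's data) and evaluates the fitted model at the reported dose pair.

```python
import numpy as np

# Invented jar-test rows at an initial turbidity of 150 NTU (the paper's CCD data are not reproduced):
# columns = aluminum sulfate dose (mg/L), tara gum dose (mg/L); response = residual turbidity (NTU)
doses = np.array([[10, 1], [10, 3], [10, 5], [25, 1], [25, 3],
                  [25, 5], [40, 1], [40, 3], [40, 5]], dtype=float)
resid = np.array([35.0, 22.0, 25.0, 9.0, 2.0, 4.0, 12.0, 7.0, 10.0])

def quad_terms(d):
    """Quadratic response-surface terms in the two dose factors."""
    x1, x2 = d.T
    return np.column_stack([np.ones(len(d)), x1, x2, x1**2, x2**2, x1 * x2])

beta, *_ = np.linalg.lstsq(quad_terms(doses), resid, rcond=None)
trial = np.array([[27.9, 3.0]])                   # dose pair reported in the abstract
print(f"predicted residual turbidity ~ {(quad_terms(trial) @ beta)[0]:.1f} NTU")
```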

Keywords: prickly pear powder, tara gum, nature-based solutions, aluminum sulfate, jar test, turbidity, coagulation, flocculation

Procedia PDF Downloads 92
9622 Hospice-Shared Care for a Child Patient Supported with Extracorporeal Membrane Oxygenation

Authors: Hsiao-Lin Fang

Abstract:

Every life is precious, and comprehensive care should be provided to individuals who are in the final stages of their lives. Hospice-shared care aims to provide optimal symptom control and palliative care to terminal (cancer) patients through the implementation of shared care, and to support patients and their families in making various physical and psychological adjustments in the face of death. This report examines a 10-year-old boy who experienced an Out-of-Hospital Cardiac Arrest (OHCA). The boy fainted while swimming at school and underwent 31 minutes of cardiopulmonary resuscitation (CPR). While receiving treatment at the hospital, he was placed on extracorporeal membrane oxygenation (ECMO) due to unstable hemodynamics. Urgent cardiac catheterization suggested acute fulminant myocarditis or an underlying cardiomyopathy with acute decompensation. Despite active resuscitation by the medical team, hemodynamic monitoring showed only a mean pressure value. With respect to the patient, interdepartmental hospice-shared care was implemented, and a do-not-resuscitate (DNR) order was signed after family discussions were conducted. Assistance and instructions were provided as part of the comfort care process. A farewell gathering attended by the patient's relatives, friends, teachers, and classmates was organized in an intensive care unit (ICU) in order to look back on the patient's life and the beautiful memories that were created, as well as to alleviate the sorrow felt by family members, including the patient's father and sister. For example, the patient was presented with drawings and accompanied to a garden to pick flowers. In this manner, the patient was able to say goodbye before death. Finally, the patient's grandmother and father participated in the clinical hospice care and post-mortem care processes. A hospice-shared care clinician conducted regular follow-ups and provided care to the family of the deceased, supporting family members through the period of grief. Birth, old age, sickness, and death are the natural phases of human life. In recent years, growing attention has been paid to human-centered hospice care. Hospice care is individual holistic care provided by a professional team, and it involves the provision of comprehensive care to a terminal patient. Hospice care aims to satisfy the physical, psychological, mental, and social needs of patients and their families. It does not involve the cessation of treatment but rather avoids the exacerbation or extension of the suffering endured by patients, thereby preserving dignity and quality of life during the end-of-life period. Patients enjoy the company of others as they complete the last phase of their lives, and their families also receive guidance on how they can move on with their own lives after the patient's death.

Keywords: hospice-shared care, extracorporeal membrane oxygenation (ECMO), child patient

Procedia PDF Downloads 128
9621 An Investigation of How Pre-Service Physics Teachers Perceived the Results of Buoyancy Force

Authors: Ersin Bozkurt, Şükran Erdoğan

Abstract:

The purpose of the study is to explore how pre-service teachers perceive the buoyancy force acting on an object in a liquid and to identify their misconceptions. Pre-service teachers were interviewed to reveal their understanding of an object's floating, suspending, and sinking in a liquid. In addition, they were asked how an object, given its features, moves when it is subjected to an external force and when it is released. These circumstances were also questioned in the context of a different planet. For this aim, the focus group interview method was used. Six focus groups were formed and video recorded during the interviews. Each focus group comprised five pre-service teachers. It was found that pre-service teachers have common misunderstandings and misconceptions. In order to eliminate these conceptual misunderstandings, conceptual change texts were developed and further suggestions were made.

Keywords: computer simulations, conceptual change texts, physics education, students’ misconceptions in physics

Procedia PDF Downloads 458
9620 Wally Feelings Test: Validity and Reliability Study

Authors: Gökhan Kayili, Ramazan Ari

Abstract:

In this research, the aim is to adapt the Wally Feelings Test for Turkish children and to perform reliability and validity analyses of the test. The sample of the research was composed of 699 three- to five-year-old Turkish preschoolers attending state and private nursery schools. The schools were selected with the simple random sampling method by considering different socio-economic conditions and different central districts in Konya. In order to determine the reliability of the Wally Feelings Test, internal consistency coefficients (KR-20), split-half reliability, and test-retest reliability analyses were performed. During the validation process, construct validity, content/scope validity, and concurrent/criterion validity were used. When the validity and reliability of the test are examined, it is seen that the Wally Feelings Test is a valid and reliable instrument for evaluating three- to five-year-old Turkish children's emotion-understanding skills.
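
For reference, KR-20 can be computed directly from dichotomously scored item responses; a small sketch with a toy score matrix (not the study's data) follows.

```python
import numpy as np

def kr20(responses):
    """Kuder-Richardson 20 for 0/1-scored items; rows = children, columns = items."""
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]
    p = responses.mean(axis=0)                      # proportion passing each item
    total_var = responses.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1.0 - np.sum(p * (1 - p)) / total_var)

scores = np.array([[1, 1, 1, 0, 1],                 # toy scores for 6 children on 5 items
                   [1, 0, 1, 1, 1],
                   [0, 0, 1, 0, 0],
                   [1, 1, 1, 1, 1],
                   [0, 1, 0, 0, 1],
                   [1, 1, 1, 1, 0]])
print(f"KR-20 = {kr20(scores):.2f}")
```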

Keywords: reliability, validity, wally feelings test, social sciences

Procedia PDF Downloads 524
9619 Evaluation of Batch Splitting in the Context of Load Scattering

Authors: S. Wesebaum, S. Willeke

Abstract:

Production companies are faced with an increasingly turbulent business environment, which demands very high production volume and delivery-date flexibility. If decoupling by storage stages is not possible (e.g., at a contract manufacturing company) or undesirable from a logistical point of view, load scattering affects the production processes. 'Load' characterizes the timing and quantity incidence of production orders (e.g., in work content hours) at workstations in production, which results in specific capacity requirements. Insufficient coordination between load (capacity demand) and capacity supply results in heavy load scattering, which can be described by deviations and uncertainties in the input behavior of a capacity unit. In order to respond to fluctuating loads, companies try to implement consistent and realizable input behavior using the available capacity supply. For example, a uniform and high level of equipment capacity utilization keeps production costs down. In contrast, strong load scattering at workstations leads to performance loss or disproportionately fluctuating WIP, whereby the logistics objectives are affected negatively. Options for reducing load scattering include shifting the start and end dates of orders, batch splitting and outsourcing of operations, or shifting to other workstations. This leads to an adjustment of load to capacity supply, and thus to a reduction of load scattering. If the adaptation of load to capacity cannot be satisfied completely, flexible capacity may have to be used to ensure that the performance of a workstation does not decrease for a given load. Whereas the use of flexible capacities normally raises costs, an adjustment of load to capacity supply reduces load scattering and, in consequence, costs. In the literature, one mostly finds qualitative statements describing load scattering. Quantitative evaluation methods that describe load mathematically are rare. In this article, the authors discuss existing approaches for calculating load scattering and their various disadvantages, such as the lack of an opportunity for normalization. These approaches are the basis for the development of our mathematical quantification approach for describing load scattering, which compensates for the disadvantages of the current quantification approaches. After presenting our mathematical quantification approach, the method of batch splitting will be described. Batch splitting allows the adaptation of load to capacity in order to reduce load scattering. After describing the method, it will be explicitly analyzed in the context of the logistic curve theory by Nyhuis, using the stretch factor α1, in order to evaluate the impact of the method of batch splitting on load scattering and on the logistic curves. The conclusion of this article shows how the methods and approaches presented can help companies in a turbulent environment to quantify the occurring work load scattering accurately and to apply an efficient method for adjusting work load to capacity supply. In this way, the achievement of the logistical objectives is increased without causing additional costs.
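
As one simple, hedged illustration of quantifying load scattering (a coefficient-of-variation measure, not the authors' own normalized approach), the sketch below compares the per-period work content of a workstation before and after splitting oversized batches across adjacent periods.

```python
import numpy as np

def load_scattering(load):
    """Simple scattering measure: coefficient of variation of per-period work content (hours)."""
    load = np.asarray(load, dtype=float)
    return load.std() / load.mean()

def split_large_batches(load, threshold=40.0):
    """Split any period's work content above the threshold and shift the excess to the next period."""
    out = np.array(load, dtype=float)
    for t in range(len(out) - 1):
        excess = max(out[t] - threshold, 0.0)
        out[t] -= excess
        out[t + 1] += excess
    return out

weekly_load = np.array([60, 10, 55, 20, 70, 15, 50, 25], dtype=float)   # invented work content per week
print(f"before splitting: CV = {load_scattering(weekly_load):.2f}")
print(f"after splitting:  CV = {load_scattering(split_large_batches(weekly_load)):.2f}")
```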

Keywords: batch splitting, production logistics, production planning and control, quantification, load scattering

Procedia PDF Downloads 386
9618 Development and Experimental Validation of Coupled Flow-Aerosol Microphysics Model for Hot Wire Generator

Authors: K. Ghosh, S. N. Tripathi, Manish Joshi, Y. S. Mayya, Arshad Khan, B. K. Sapra

Abstract:

We have developed a CFD coupled aerosol microphysics model in the context of aerosol generation from a glowing wire. The governing equations can be solved implicitly for mass, momentum, energy transfer along with aerosol dynamics. The computationally efficient framework can simulate temporal behavior of total number concentration and number size distribution. This formulation uniquely couples standard K-Epsilon scheme with boundary layer model with detailed aerosol dynamics through residence time. This model uses measured temperatures (wire surface and axial/radial surroundings) and wire compositional data apart from other usual inputs for simulations. The model predictions show that bulk fluid motion and local heat distribution can significantly affect the aerosol behavior when the buoyancy effect in momentum transfer is considered. Buoyancy generated turbulence was found to be affecting parameters related to aerosol dynamics and transport as well. The model was validated by comparing simulated predictions with results obtained from six controlled experiments performed with a laboratory-made hot wire nanoparticle generator. Condensation particle counter (CPC) and scanning mobility particle sizer (SMPS) were used for measurement of total number concentration and number size distribution at the outlet of reactor cell during these experiments. Our model-predicted results were found to be in reasonable agreement with observed values. The developed model is fast (fully implicit) and numerically stable. It can be used specifically for applications in the context of the behavior of aerosol particles generated from glowing wire technique and in general for other similar large scale domains. Incorporation of CFD in aerosol microphysics framework provides a realistic platform to study natural convection driven systems/ applications. Aerosol dynamics sub-modules (nucleation, coagulation, wall deposition) have been coupled with Navier Stokes equations modified to include buoyancy coupled K-Epsilon turbulence model. Coupled flow-aerosol dynamics equation was solved numerically and in the implicit scheme. Wire composition and temperature (wire surface and cell domain) were obtained/measured, to be used as input for the model simulations. Model simulations showed a significant effect of fluid properties on the dynamics of aerosol particles. The role of buoyancy was highlighted by observation and interpretation of nucleation zones in the planes above the wire axis. The model was validated against measured temporal evolution, total number concentration and size distribution at the outlet of hot wire generator cell. Experimentally averaged and simulated total number concentrations were found to match closely, barring values at initial times. Steady-state number size distribution matched very well for sub 10 nm particle diameters while reasonable differences were noticed for higher size ranges. Although tuned specifically for the present context (i.e., aerosol generation from hotwire generator), the model can also be used for diverse applications, e.g., emission of particles from hot zones (chimneys, exhaust), fires and atmospheric cloud dynamics.

Keywords: nanoparticles, k-epsilon model, buoyancy, CFD, hot wire generator, aerosol dynamics

Procedia PDF Downloads 131
9617 Digital Cinema Watermarking State of Art and Comparison

Authors: H. Kelkoul, Y. Zaz

Abstract:

Nowadays, the vigorous popularity of video processing techniques has resulted in explosive growth in the illegal use of multimedia data. Consequently, watermarking security has received much more attention. The purpose of this paper is to explore some watermarking techniques in order to observe their specificities and select the best methods to apply in the digital cinema domain against movie piracy, by creating an invisible watermark that includes the date, time, and place where the hacking was done. We have studied three principal watermarking techniques in the frequency domain: spread spectrum, the wavelet transform domain, and finally the digital cinema watermarking transform domain. In this paper, a detailed technique is presented in which embedding is performed using the direct-sequence spread spectrum technique in the DWT transform domain. Experimental results show that the algorithm provides high robustness and good imperceptibility.
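
A minimal sketch of direct-sequence spread-spectrum embedding in the DWT domain of a single frame with blind correlation detection; the wavelet, sub-band, gain, and payload are assumptions, not the authors' exact scheme.

```python
import numpy as np
import pywt

def embed(frame, payload_bits, key=42, alpha=4.0):
    """Spread each payload bit over one detail sub-band of a 1-level DWT with a keyed PN sequence."""
    ll, (lh, hl, hh) = pywt.dwt2(frame.astype(float), "haar")
    rng = np.random.default_rng(key)
    pn = rng.choice([-1.0, 1.0], size=(len(payload_bits),) + hl.shape)   # one PN plane per bit
    hl_marked = hl + alpha * np.tensordot(2 * np.asarray(payload_bits) - 1, pn, axes=1)
    return pywt.idwt2((ll, (lh, hl_marked, hh)), "haar")

def detect(frame, n_bits, key=42):
    """Blind detection: sign of the correlation between the sub-band and each keyed PN plane."""
    _, (_, hl, _) = pywt.dwt2(frame.astype(float), "haar")
    rng = np.random.default_rng(key)
    pn = rng.choice([-1.0, 1.0], size=(n_bits,) + hl.shape)
    return [int((hl * p).sum() > 0) for p in pn]

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(256, 256))        # stand-in for a luminance frame
bits = [1, 0, 1, 1, 0, 0, 1, 0]                      # e.g. a fragment of a date/time/location code
print(detect(embed(frame, bits), len(bits)) == bits)
```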

Keywords: digital cinema, watermarking, wavelet DWT, spread spectrum, JPEG2000, MPEG4

Procedia PDF Downloads 241
9616 Design of a Sliding Controller for Optical Disk Drives

Authors: Yu-Sheng Lu, Chung-Hsin Cheng, Shuen-Shing Jan

Abstract:

This paper presents the design and implementation of a sliding-mode controller for the tracking servo of optical disk drives. The tracking servo is mainly subject to two disturbance sources: radial run-out and shock. The radial run-out disturbance is mostly repeatable, and a model of this disturbance is incorporated into the controller design to compensate for it effectively. Meanwhile, as a shock disturbance is usually non-repeatable and unpredictable, the sliding-mode controller is employed for its robustness to abrupt perturbations. As a result, a sliding-mode controller design based on the internal model principle is tailored for the tracking servo of optical disk drives in order to deal with these two major disturbances. Experimental comparative studies are conducted to investigate the effectiveness of the specially designed controller.
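
A minimal simulation sketch of a sliding-mode tracking controller on a double-integrator actuator model with a repeatable run-out and a short shock; the plant parameters and gains are assumptions, not those of the drive studied here.

```python
import numpy as np

# Double-integrator actuator model x'' = u + d(t); all numbers are illustrative.
dt, T = 5e-5, 0.2
t = np.arange(0.0, T, dt)
w = 2 * np.pi * 30                                     # repeatable run-out at 30 Hz
r, rdot, rddot = 2e-3 * np.sin(w * t), 2e-3 * w * np.cos(w * t), -2e-3 * w**2 * np.sin(w * t)
shock = np.where((t > 0.12) & (t < 0.122), 50.0, 0.0)  # short non-repeatable acceleration disturbance

lam, eta, phi = 300.0, 200.0, 0.02                     # surface slope, switching gain, boundary layer
x, v = np.zeros_like(t), np.zeros_like(t)
for k in range(len(t) - 1):
    e, edot = x[k] - r[k], v[k] - rdot[k]
    s = edot + lam * e                                 # sliding variable
    u = rddot[k] - lam * edot - eta * np.clip(s / phi, -1.0, 1.0)  # equivalent + saturated switching term
    v[k + 1] = v[k] + (u + shock[k]) * dt
    x[k + 1] = x[k] + v[k + 1] * dt
half = len(t) // 2
print(f"max tracking error after settling: {np.abs(x[half:] - r[half:]).max():.1e} mm")
```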

Keywords: mechatronics, optical disk drive, sliding-mode control, servo systems

Procedia PDF Downloads 364
9615 Contribution to Energy Management in Hybrid Energy Systems Based on Agents Coordination

Authors: Djamel Saba, Fatima Zohra Laallam, Brahim Berbaoui

Abstract:

This paper presents a contribution to the design of a multi-agent energy management system for a hybrid energy system (SEH). The multi-agent-based energy-coordination management system (MA-ECMS) is based mainly on coordination between agents. The agents share tasks and exchange information through communication protocols to achieve the main goal. This intelligent system can fully manage consumption and production, or simply make proposals for the actions it considers best. The initial step is to present the system to be modeled in order to understand all the details as much as possible. In our case, this means implementing a system for simulating an energy management control process.

Keywords: communications protocols, control process, energy management, hybrid energy system, modelization, multi-agents system, simulation

Procedia PDF Downloads 311
9614 An Optimized Method for Calculating the Linear and Nonlinear Response of SDOF System Subjected to an Arbitrary Base Excitation

Authors: Hossein Kabir, Mojtaba Sadeghi

Abstract:

Finding the linear and nonlinear responses of a typical single-degree-of-freedom (SDOF) system is generally regarded as a time-consuming process. This study attempts to provide modifications to the well-known Newmark method in order to make it more time-efficient and more accurate by modifying the system in its own nonlinear state. The efficacy of the presented method is demonstrated by applying three base excitations, namely the Tabas 1978, El Centro 1940, and MEXICO CITY/SCT 1985 earthquakes, to an SDOF system to compute the strength reduction factor, yield pseudo-acceleration, and ductility factor.
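
For reference, the textbook average-acceleration Newmark scheme for a linear SDOF under base excitation is the baseline that such modifications start from; the sketch below uses a synthetic excitation standing in for the cited ground-motion records.

```python
import numpy as np

def newmark_sdof(ag, dt, omega_n=2 * np.pi, zeta=0.05, beta=0.25, gamma=0.5):
    """Average-acceleration Newmark scheme for u'' + 2*zeta*wn*u' + wn^2*u = -ag(t), unit mass."""
    m, c, k = 1.0, 2 * zeta * omega_n, omega_n**2
    n = len(ag)
    u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    a[0] = (-m * ag[0] - c * v[0] - k * u[0]) / m
    for i in range(n - 1):
        u_pred = u[i] + dt * v[i] + dt**2 * (0.5 - beta) * a[i]    # displacement predictor
        v_pred = v[i] + dt * (1.0 - gamma) * a[i]                  # velocity predictor
        a[i + 1] = (-m * ag[i + 1] - c * v_pred - k * u_pred) / (m + gamma * dt * c + beta * dt**2 * k)
        u[i + 1] = u_pred + beta * dt**2 * a[i + 1]
        v[i + 1] = v_pred + gamma * dt * a[i + 1]
    return u

dt = 0.01
rng = np.random.default_rng(1)
ag = np.convolve(rng.standard_normal(400), np.ones(8) / 8, mode="same")   # synthetic base acceleration
print(f"peak relative displacement: {np.abs(newmark_sdof(ag, dt)).max():.3f} (consistent units)")
```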

Keywords: single-degree-of-freedom system (SDOF), linear acceleration method, nonlinear excited system, equivalent displacement method, equivalent energy method

Procedia PDF Downloads 310
9613 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning

Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher

Abstract:

Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and Multiple Sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has been recently proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve I) mapping magnetic field into magnetic susceptibility and II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via injection of prior belief. The end result from Process II highly depends on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.) Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain via a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640, 0.4mm³) yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, Cerebral Spinal Fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties and iron concentration. These tissue property values were randomly selected from a Probability Distribution Function derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry created by morphing the boundary regions of different areas within normal physical constraints. This model of the human brain is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volume, tissue properties and noise realizations. Collectively, this constitutes a training-set that is similar to in vivo data, but larger than datasets available from clinical measurements. This 3D convolutional U-Net neural network architecture was used to train data-driven Deep Learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested on both synthetic data not used in training as well as real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to directly learn iron concentrations in areas of interest more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the Deep QSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of important value in clinical studies aiming to understand the role of iron in neurological disease.

Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping

Procedia PDF Downloads 118
9612 Consumer Innovativeness and Shopping Styles: An Empirical Study in Turkey

Authors: Hande Begum Bumin Doyduk, Elif Okan Yolbulan

Abstract:

Innovation is very important for the success and competitiveness of countries, as well as for business sectors and individual firms. In order to have successful and sustainable innovations, the other side of the game, consumers, should be aware of the innovations and appreciate them. In this study, consumer innovativeness is the focus, and the relationship between motivated consumer innovativeness and consumer shopping styles is analyzed. The motivated consumer innovativeness scale (Vandecasteele & Geuens, 2010) and the consumer shopping styles scale (Sproles & Kendall, 1986) are used. Data are analyzed with the SPSS 20 program through reliability, factor, and correlation analyses. According to the findings of the study, there are strong positive relationships between hedonic innovativeness and the recreational shopping style; social innovativeness and brand consciousness; cognitive innovativeness and price consciousness; and functional innovativeness and the perfectionistic, high-quality-conscious shopping style.

Keywords: consumer innovativeness, consumer decision making, shopping styles, innovativeness

Procedia PDF Downloads 410
9611 Research on ARQ Transmission Technique in Mars Detection Telecommunications System

Authors: Zhongfei Cai, Hui He, Changsheng Li

Abstract:

This paper studies the automatic repeat request (ARQ) transmission technique in a Mars detection telecommunications system. An ARQ method applied to the Proximity-1 space link protocol is proposed. In order to ensure the efficiency of reliable data transmission, this ARQ method combines the characteristics of different ARQ schemes. Considering the Mars detection communication environment, this paper analyzes the saturation throughput rate, packet dropping probability, average delay, and energy efficiency of different ARQ algorithms. Combining these results with the theory of ARQ transmission techniques, an ARQ transmission scheme for a Mars detection telecommunications system was established. The simulation results showed that this algorithm has an excellent saturation throughput rate and energy efficiency with low complexity.
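
For context, the classic idealized utilization formulas for stop-and-wait, go-back-N, and selective-repeat ARQ show the trade-offs such a combined scheme balances; the link numbers below are assumptions for a Proximity-1-like link, not values from the paper.

```python
def sw_util(p, a):
    """Stop-and-wait utilization; p = frame error rate, a = propagation delay / frame time."""
    return (1 - p) / (1 + 2 * a)

def gbn_util(p, a, n):
    """Go-back-N utilization for a window of n frames (classic textbook formulas)."""
    if n >= 2 * a + 1:
        return (1 - p) / (1 + 2 * a * p)
    return n * (1 - p) / ((2 * a + 1) * (1 - p + n * p))

def sr_util(p, a, n):
    """Selective-repeat utilization for a window of n frames."""
    if n >= 2 * a + 1:
        return 1 - p
    return n * (1 - p) / (2 * a + 1)

a, n = 1.0 / 0.5, 8          # assumed 1 s one-way delay, 0.5 s frame time, window of 8 frames
for p in (0.01, 0.05, 0.1):
    print(f"p={p:.2f}: stop-and-wait {sw_util(p, a):.2f}, "
          f"go-back-N {gbn_util(p, a, n):.2f}, selective repeat {sr_util(p, a, n):.2f}")
```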

Keywords: ARQ, Mars, CCSDS, Proximity-1, deep space

Procedia PDF Downloads 325
9610 Spectral Coherence Analysis between Grinding Interaction Forces and the Relative Motion of the Workpiece and the Cutting Tool

Authors: Abdulhamit Donder, Erhan Ilhan Konukseven

Abstract:

The grinding operation is performed in order to obtain the desired surfaces precisely in the machining process. The required relative motion between the cutting tool and the workpiece is generally created by the movement of the cutting tool, by the movement of the workpiece, or by the movement of both, as in our case. In all of these cases, the coherence level between the movements and the interaction forces is a key parameter for efficient grinding. Therefore, in this work, spectral coherence analysis has been performed to investigate the coherence level between the grinding interaction forces and the movement of the workpiece on our robotic grinding experimental setup in the METU Mechatronics Laboratory.
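
Magnitude-squared coherence of this kind can be computed from Welch-averaged spectra; a toy example with surrogate force and motion signals sharing a 50 Hz component follows (the sampling rate and signals are assumptions).

```python
import numpy as np
from scipy.signal import coherence

fs = 2000.0                                    # assumed sampling rate (Hz)
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(3)
motion = np.sin(2 * np.pi * 50 * t) + 0.3 * rng.standard_normal(t.size)              # workpiece motion surrogate
force = 2.0 * np.sin(2 * np.pi * 50 * t + 0.8) + 0.5 * rng.standard_normal(t.size)   # interaction force surrogate

f, cxy = coherence(force, motion, fs=fs, window="hann", nperseg=1024)
print(f"coherence peak near {f[np.argmax(cxy)]:.0f} Hz: {cxy.max():.2f}")             # close to 1 at the shared line
```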

Keywords: coherence analysis, correlation, FFT, grinding, hanning window, machining, Piezo actuator, reverse arrangements test, spectral analysis

Procedia PDF Downloads 391
9609 Application of Fuzzy Multiple Criteria Decision Making for Flooded Risk Region Selection in Thailand

Authors: Waraporn Wimuktalop

Abstract:

This research selects regions that are vulnerable to flooding at different levels. Mathematical principles are systematically and rationally utilized as a tool to solve the region-selection problem. Therefore, the method called Multiple Criteria Decision Making (MCDM) has been chosen, using two analysis techniques: TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) and AHP (Analytic Hierarchy Process). Three criteria have been considered in this research. The first criterion is climate, represented by rainfall. The second criterion is geography, represented by the height above mean sea level. The last criterion is land utilization, covering both forest and agricultural use. The study found that the South has the highest risk of flooding, followed by the East, the Centre, the North-East, the West, and the North, respectively.
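
A compact TOPSIS sketch of the ranking step is given below; the decision matrix, weights, and benefit/cost directions are illustrative assumptions, not the study's data.

```python
import numpy as np

regions = ["South", "East", "Centre", "North-East", "West", "North"]
# Columns: rainfall (mm/yr), height above mean sea level (m), forest + agricultural land use (%)
X = np.array([[2300.0,  20.0, 55.0],
              [1900.0,  60.0, 50.0],
              [1400.0,  40.0, 60.0],
              [1300.0, 170.0, 45.0],
              [1500.0, 120.0, 40.0],
              [1200.0, 300.0, 65.0]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, False, True])        # more rain and land use raise risk; higher ground lowers it

V = (X / np.sqrt((X**2).sum(axis=0))) * weights             # vector-normalized, weighted matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_plus = np.linalg.norm(V - ideal, axis=1)
d_minus = np.linalg.norm(V - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)                    # relative closeness to the ideal (risk score)
for name, c in sorted(zip(regions, closeness), key=lambda rc: -rc[1]):
    print(f"{name:11s} flood-risk closeness: {c:.3f}")
```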

Keywords: multiple criteria decision making, TOPSIS, analytic hierarchy process, flooding

Procedia PDF Downloads 216
9608 RV Car Clinic as Cost-Effective Health Care

Authors: Dessy Arumsari, Ais Assana Athqiya, Mulyaminingrum

Abstract:

Healthcare in remote areas is one of the major concerns in Indonesia. Building hospitals in a nation of 18,000 islands, with a larger-than-life bureaucracy, problems with corruption, a critical shortage of qualified medical professionals, and well-heeled patients resigned to traveling abroad for health care, is a hard feat to accomplish. To assure that all populations have access to appropriate and cost-effective care, a new solution to tackle this problem is the RV Car Clinic. The car follows the concept of a mobile hospital that provides health facilities inside it. All of the health professionals who work in the RV Car Clinic will rotate for a year in order to ensure the equitable distribution of health workers. We need to advocate for policy makers to help realize the RV Car Clinic in remote areas. Health services can be disseminated by the presence of the RV Car Clinic. In summary, local communities can receive care cost-effectively because the RV Car Clinic will come to their place and provide health services.

Keywords: health policy, health professional, remote areas, RV Car Clinic

Procedia PDF Downloads 276
9607 Investigation of the Effects of Simple Heating Processes on the Crystallization of Bi₂WO₆

Authors: Cisil Gulumser, Francesc Medina, Sevil Veli

Abstract:

In this study, the synthesis of photocatalytic Bi₂WO₆ was carried out with simple heating processes, and the effects of these treatments on the production of the desired compound were investigated. For this purpose, experiments with Bi(NO₃)₃.5H₂O and H₂WO₄ precursors were carried out to synthesize Bi₂WO₆ in four different combinations. These four combinations were grouped into two main sets, 'treated in a microwave reactor' and 'directly filtrated'; additionally, these main sets were divided into two subsets, 'calcined' and 'not calcined'. Calcination processes were conducted at temperatures of 400ᵒC, 600ᵒC, and 800ᵒC. X-ray diffraction (XRD) and environmental scanning electron microscopy (ESEM) analyses were performed in order to investigate the crystal structure of the powdered product synthesized with each combination. The highest crystallinity of the produced compounds was observed for calcination at 600ᵒC in each main group.

Keywords: bismuth tungstate, crystallization, microwave, photocatalysts

Procedia PDF Downloads 164
9606 Evaluation of River Meander Geometry Using Uniform Excess Energy Theory and Effects of Climate Change on River Meandering

Authors: Youssef I. Hafez

Abstract:

Since ancient times, rivers have been the favored places for people and civilizations to live, settling along river banks. However, due to floods and droughts, especially the severe conditions caused by global warming and climate change, river channels are continually evolving and moving in the lateral direction, changing their planform either by straightening curved reaches (meander cut-off) or by increasing meander curvature. The lateral shift or shrinkage of a river channel severely affects the river banks and the flood plain, with a tremendous impact on the surrounding environment. Therefore, understanding the formation and the continual processes of river channel meandering is of paramount importance. So far, in spite of the huge number of publications about river meandering, there has not been a satisfactory theory or approach that provides a clear explanation of the formation of river meanders and the mechanics of their associated geometries. In particular, two parameters are often needed to describe meander geometry. The first one is a scale parameter such as the meander arc length. The second is a shape parameter such as the maximum angle a meander path makes with the channel's mean down-path direction. These two parameters, if known, can determine the meander path and geometry, for example when they are incorporated in the well-known sine-generated curve. In this study, a uniform excess energy theory is used to illustrate the origin and mechanics of the formation of river meandering. This theory advocates that the longitudinal imbalance between the valley and channel slopes (with the former greater than the latter) leads to the formation of a curved meander channel in order to reduce the excess energy through its expenditure as transverse energy loss. Two relations are developed based on this theory: one for the determination of the river channel radius of curvature at the bend apex (shape parameter) and the other for the determination of river channel sinuosity. The sinuosity equation performed very well when applied to available field data. In addition, existing model data were used to develop a relation between the meander arc length and the Darcy-Weisbach friction factor. The meander wave length was then determined from the equations for the arc length and the sinuosity. The developed equation compared well with available field data. The effects of the transverse bed slope and grain size on river channel sinuosity are addressed. In addition, the concept of maximum channel sinuosity is introduced in order to explain the changes in river channel planform due to changes in flow discharges and sediment loads induced by global warming and climate change.
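
For reference, the sine-generated curve mentioned above links the shape parameter (the maximum deflection angle) to channel sinuosity; a short numerical check with arbitrary example angles follows.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def sinuosity(omega_deg):
    """Sinuosity of a sine-generated meander, theta(s) = omega * sin(2*pi*s/L):
    channel (arc) length divided by straight-line valley length."""
    omega = np.radians(omega_deg)
    downvalley_fraction, _ = quad(lambda u: np.cos(omega * np.sin(2 * np.pi * u)), 0.0, 1.0)
    return 1.0 / downvalley_fraction

for omega in (30.0, 60.0, 90.0, 110.0):
    # the closed form 1 / J0(omega) gives the same value as the quadrature
    print(f"max angle {omega:5.1f} deg -> sinuosity {sinuosity(omega):.2f} "
          f"(1/J0 = {1.0 / j0(np.radians(omega)):.2f})")
```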

Keywords: river channel meandering, sinuosity, radius of curvature, meander arc length, uniform excess energy theory, transverse energy loss, transverse bed slope, flow discharges, sediment loads, grain size, climate change, global warming

Procedia PDF Downloads 211
9605 Sulfur Removal of Hydrocarbon Fuels Using Oxidative Desulfurization Enhanced by Fenton Process

Authors: Mahsa Ja’fari, Mohammad R. Khosravi-Nikou, Mohsen Motavassel

Abstract:

A comprehensive development towards the production of ultra-clean fuels as a feedstock is gaining momentum due to the increasing use of diesel fuels and global air pollution. The production of environmentally friendly fuels can be achieved by a limited number of single methods and, more commonly, by integrated ones. Oxidative desulfurization (ODS) offers a wide range of technologies with characteristics that are well suited to combination with the Fenton process. This study uses toluene as a model fuel feed with dibenzothiophene (DBT) as the sulfur compound under various operating conditions. The results showed that this oxidative process follows pseudo-first-order kinetics. A removal efficiency of 77.43% is attained at a reaction time of 40 minutes with an (Fe+2/H2O2) molar ratio of 0.05 in an acidic pH environment. In this research, the temperature of 50 °C played the most influential role in advancing the reaction.
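
The reported pseudo-first-order behaviour corresponds to ln(C/C0) decreasing linearly with time; the sketch below recovers a rate constant and the removal at 40 minutes from invented DBT concentration data (not the paper's measurements).

```python
import numpy as np

t_min = np.array([0.0, 10.0, 20.0, 30.0, 40.0])        # reaction time (min), invented sampling points
C = np.array([500.0, 345.0, 238.0, 162.0, 113.0])      # invented DBT concentrations (ppm)

k = -np.polyfit(t_min, np.log(C / C[0]), 1)[0]         # pseudo-first order: ln(C/C0) = -k t
removal_40 = 100.0 * (1.0 - np.exp(-k * 40.0))
print(f"k ~ {k:.3f} 1/min, predicted sulfur removal at 40 min ~ {removal_40:.1f} %")
```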

Keywords: design of experiment (DOE), dibenzothiophene (DBT), optimization, oxidative desulfurization (ODS)

Procedia PDF Downloads 207
9604 Application of a SubIval Numerical Solver for Fractional Circuits

Authors: Marcin Sowa

Abstract:

The paper discusses the subinterval-based numerical method for fractional derivative computations, now referred to by its acronym, SubIval. The basis of the method is briefly recalled. The ability of the method to be applied in time-stepping solvers is discussed, and the possibility of implementing a solver with an adaptive time step size is also mentioned. The solver is tested on a transient circuit example. In order to demonstrate the accuracy of the solver, the results have been compared with those obtained by means of a semi-analytical method called gcdAlpha. The time-step-size adaptive solver applying SubIval has proven to be very accurate, as the results are very close to the reference solution. The solver is currently able to solve FDEs (fractional differential equations) with various derivative orders for each equation and any type of source time function.
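
For context, a reference Grunwald-Letnikov stepping scheme (explicitly not the SubIval algorithm itself) for a simple fractional relaxation equation of the kind that arises in fractional circuit elements; the order, time constant, and step size are illustrative.

```python
import numpy as np

def gl_weights(alpha, n):
    """Grunwald-Letnikov binomial weights w_j = (-1)^j * C(alpha, j), built recursively."""
    w = np.ones(n + 1)
    for j in range(1, n + 1):
        w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
    return w

def solve_fractional_relaxation(alpha=0.8, tau=1.0, h=0.01, t_end=5.0, u=lambda t: 1.0):
    """Implicit Grunwald-Letnikov stepping for D^alpha x = (u(t) - x) / tau, x(0) = 0."""
    n_steps = int(t_end / h)
    w = gl_weights(alpha, n_steps)
    x = np.zeros(n_steps + 1)
    for n in range(1, n_steps + 1):
        history = np.dot(w[1:n + 1], x[n - 1::-1])      # sum of w_j * x_{n-j}, j = 1..n
        x[n] = (u(n * h) / tau - history / h**alpha) / (1.0 / h**alpha + 1.0 / tau)
    return x

print(f"fractional step response at t = 5: {solve_fractional_relaxation()[-1]:.3f}")
```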

Keywords: numerical method, SubIval, fractional calculus, numerical solver, circuit analysis

Procedia PDF Downloads 193