Search results for: Taguchi method
589 Removal of Cationic Heavy Metal and HOC from Soil-Washed Water Using Activated Carbon
Authors: Chi Kyu Ahn, Young Mi Kim, Seung Han Woo, Jong Moon Park
Abstract:
Soil washing with a surfactant solution is a potential technology for the rapid removal of hydrophobic organic compounds (HOCs) from soil. However, a large amount of washed water is produced during operation and must be treated effectively by proper methods. Washed water from a site with combined HOC and heavy metal contamination may contain high amounts of both pollutants as well as the used surfactant. The heavy metals in the soil-washed water have toxic effects on microbial activities and should therefore be removed before the water proceeds to a biological wastewater treatment system. Moreover, the used surfactant solution needs to be recovered to reduce the soil washing operation cost. In order to simultaneously remove the heavy metals and HOC from soil-washed water, activated carbon (AC) was used in the present study. In an anionic-nonionic surfactant mixed solution, Cd(II) and phenanthrene (PHE) were effectively removed by adsorption on activated carbon. The removal capacity for Cd(II) increased from 0.027 mmol-Cd/g-AC to 0.142 mmol-Cd/g-AC as the mole ratio of SDS increased in the presence of PHE. The adsorptive capacity for PHE also increased with the SDS mole ratio due to the decrease in the molar solubilization ratio (MSR) for PHE in an anionic-nonionic surfactant mixture. The simultaneous adsorption of HOC and cationic heavy metals using activated carbon could be a useful method for surfactant recovery and the reduction of heavy metal toxicity in a surfactant-enhanced soil washing process.
Keywords: Activated carbon, Anionic-nonionic surfactant mixture, Cationic heavy metal, HOC, Soil washing
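The adsorption capacities quoted in the abstract (mmol-Cd/g-AC) come from the standard batch-adsorption mass balance, q = (C0 − Ce)·V/m. A minimal sketch with hypothetical concentrations (the figures below are illustrative, not the paper's data):

```python
def adsorption_capacity(c0_mmol_l, ce_mmol_l, volume_l, mass_g):
    """Batch adsorption capacity q = (C0 - Ce) * V / m, in mmol per gram."""
    return (c0_mmol_l - ce_mmol_l) * volume_l / mass_g

# Hypothetical run: 1.0 L of 0.5 mmol/L Cd(II) reduced to 0.1 mmol/L by 2 g of AC
q = adsorption_capacity(0.5, 0.1, 1.0, 2.0)  # 0.2 mmol-Cd/g-AC
```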
588 Tele-Diagnosis System for Rural Thailand
Authors: C. Snae Namahoot, M. Brueckner
Abstract:
Thailand's health system is challenged by the rising number of patients and the decreasing ratio of medical practitioners to patients, especially in rural areas. This may tempt inexperienced GPs to rush through the process of anamnesis, with the risk of incorrect diagnosis. Patients have to travel far to the hospital and wait for a long time to present their case. Many patients try to cure themselves with traditional Thai medicine. Many countries are making use of the Internet for medical information gathering, distribution and storage. Telemedicine applications are a relatively new field of study in Thailand; the ICT infrastructure had hampered widespread use of the Internet for medical information. With recent improvements, health and technology professionals can work out novel applications and systems to help advance telemedicine for the benefit of the people. Here we explore the use of telemedicine for people with health problems in rural areas in Thailand and present a Telemedicine Diagnosis System for Rural Thailand (TEDIST) for diagnosing certain conditions, which people with Internet access can use to establish contact with Community Health Centers, e.g. by mobile phone. The system uses a Web-based input method for individual patients' symptoms, which are taken by an expert system for the analysis of conditions and appropriate diseases. The analysis harnesses a knowledge base and a backward chaining component to find out which health professionals should be presented with the case. Doctors have the opportunity to exchange emails or chat with the patients they are responsible for or with other specialists. Patients' data are then stored in a Personal Health Record.
Keywords: Biomedical engineering, data acquisition, expert system, information management system, information retrieval.
587 sEMG Interface Design for Locomotion Identification
Authors: Rohit Gupta, Ravinder Agarwal
Abstract:
The surface electromyographic (sEMG) signal has the potential to identify human activities and intention. This potential is further exploited to control artificial limbs using the sEMG signal from the residual limbs of amputees. The paper deals with the development of a multichannel, cost-efficient sEMG signal interface for research applications, along with the evaluation of a proposed class-dependent statistical approach to feature selection. The sEMG signal acquisition interface was developed using the ADS1298 from Texas Instruments, a front-end interface integrated circuit for ECG applications. The sEMG signal was recorded from two lower limb muscles for three locomotions, namely Plane Walk (PW), Stair Ascending (SA), and Stair Descending (SD). A class-dependent statistical approach is proposed for feature selection, and its performance is compared with 12 pre-existing feature vectors. To make the study more extensive, the performance of five different types of classifiers is compared. The outcome of the current work proves the suitability of the proposed feature selection algorithm for locomotion recognition compared to the other existing feature vectors. The SVM classifier outperformed the compared classifiers with an average recognition accuracy of 97.40%. Feature vector selection emerges as the most dominant factor affecting classification performance, as it holds 51.51% of the total variance in classification accuracy. The results demonstrate the potential of the developed sEMG signal acquisition interface along with the proposed feature selection algorithm.
Keywords: Classifiers, feature selection, locomotion, sEMG.
586 Ear Protectors and Their Action in Protecting Hearing System of Workers against Occupational Noise
Authors: F. Forouharmajd, S. Pourabdian, N. Ziayi Ghahnavieh
Abstract:
For many years, ear protectors have been used to prevent the auditory and non-auditory effects of noise received in occupational environments. Despite hearing protection programs, many people still suffer from noise-induced hearing loss. This study was conducted to determine the response of the human hearing system to received noise and the effectiveness of ear protectors in preventing noise-induced hearing loss. Sound pressure microphones were placed in a simulated ear canal, and the noise level was measured inside and outside the canal. The noise reduction values achieved by installing ear protectors were calculated in octave band frequencies using a LabVIEW program. The measurements inside and outside the ear canal showed a difference in received sound levels. The effectiveness of the ear protectors was considerably reduced at low frequencies, and a change in resonance frequency was also observed after using them. The study indicated that the ear canal structure may affect the received noise and may lead to a difference between the sound received by the hearing system and the sound measured by a sound level meter; that is, the human hearing system probably responds differently from a sound level meter. Hearing protectors' efficiency declines as noise levels increase, and thus they are not suitable to protect workers against industrial noise, particularly low-frequency noise. Hearing protectors may even contribute to damaging the hearing system at a particular frequency by changing its acoustical structure. Subjective methods of hearing protector testing need to be developed, because current evaluations are not designed based on industrial noise or carried out in the field.
Keywords: Ear protector, hearing system, occupational noise, workers.
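The octave-band noise reduction described above is simply the level measured outside the ear canal minus the level measured inside, per band. A sketch with illustrative levels (not the study's measurements), showing the weaker low-frequency attenuation the abstract reports:

```python
# Octave-band centre frequencies and illustrative sound levels (dB) measured
# outside and inside a simulated ear canal with the protector fitted.
bands_hz   = [125, 250, 500, 1000, 2000, 4000, 8000]
outside_db = [92, 94, 95, 96, 97, 95, 93]
inside_db  = [88, 87, 84, 78, 72, 68, 66]

# Noise reduction per band: level outside minus level inside.
noise_reduction = [o - i for o, i in zip(outside_db, inside_db)]
# The lowest band shows the smallest attenuation, as the abstract reports.
```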
585 Bridge Analysis Structure under Human Induced Dynamic Load
Authors: O. Kratochvíl, J. Križan
Abstract:
The paper deals with the analysis of the dynamic response of footbridges under human-induced dynamic loads. This is a frequently occurring and often dominant load for footbridges, as it stems from the very purpose of a footbridge: to convey pedestrians. Due to the emergence of new materials and advanced engineering technology, slender footbridges are increasingly popular, satisfying modern transportation needs and the aesthetic requirements of society. These structures, however, are often lively, with low stiffness, low mass, low damping and low natural frequencies. As a consequence, they are prone to vibration induced by human activities and can suffer severe vibration serviceability problems, particularly in the lateral direction. Pedestrian bridges are designed according to the first and second limit states, which are criteria for the response to the static design load. However, it is also necessary to assess the dynamic response of the bridge to pedestrian loading and its impact on the comfort of users. The load is usually taken as one person or a small group that can be assumed to move in perfect synchronization; even one person or a small group can excite significant vibration of the deck. In order to calculate the dynamic response to the movement of people, the designer needs a suitable computational model and criteria. The calculations were carried out with the program ANSYS, based on the finite element method.
Keywords: Footbridge, dynamic analysis, vibration serviceability of footbridges, lateral vibration, stiffness, dynamic force, walking force, slender suspension footbridges, natural frequencies and vibration modes, rhythm jumping, normal walking.
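Walking forces of the kind analyzed here are commonly modeled as a Fourier series of harmonics of the pacing frequency. A minimal sketch using typical literature coefficients; the paper does not state its load model, so all values below are assumptions:

```python
import math

def walking_force(t, weight_n=700.0, pacing_hz=2.0,
                  alphas=(0.4, 0.1, 0.1), phases=(0.0, math.pi/2, math.pi/2)):
    """Periodic vertical walking force as a Fourier series of the pacing rate.

    F(t) = G * (1 + sum_i alpha_i * sin(2*pi*i*f_p*t - phi_i)).
    Coefficients are typical literature values, not taken from this paper."""
    harmonics = sum(a * math.sin(2 * math.pi * (i + 1) * pacing_hz * t - p)
                    for i, (a, p) in enumerate(zip(alphas, phases)))
    return weight_n * (1.0 + harmonics)

# At t = 0 the first harmonic vanishes and the second and third subtract,
# so the instantaneous force dips below the static weight of 700 N.
f0 = walking_force(0.0)
```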
584 Hash Based Block Matching for Digital Evidence Image Files from Forensic Software Tools
Abstract:
Internet use, intelligent communication tools, and social media have become an integral part of our daily life as a result of rapid developments in information technology. However, this widespread use increases crimes committed in the digital environment. Therefore, digital forensics, dealing with the various crimes committed in the digital environment, has become an important research topic. It is within the scope of digital forensics to investigate digital evidence such as computers, cell phones, hard disks, and DVDs, and to report whether it contains any crime-related elements. Many software and hardware tools have been developed for use in the digital evidence acquisition process. Today, the most widely used digital evidence investigation tools are based on the principle of finding all the data in the digital evidence that match specified criteria and presenting them to the investigator (e.g. text files, files starting with the letter A, etc.). Digital forensics experts then carry out data analysis to figure out whether these data are related to a potential crime. Examination of a 1 TB hard disk may take hours or even days, depending on the expertise and experience of the examiner. Moreover, because the outcome depends on the examiner's experience, relevant data may be overlooked and the overall result may vary from case to case. In this study, a hash-based matching and digital evidence evaluation method is proposed, which aims to automatically classify evidence containing criminal elements, thereby shortening the digital evidence examination process and preventing human error.
Keywords: Block matching, digital evidence, hash list.
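A hash-based block matching scheme of this kind can be sketched as follows: the evidence image is split into fixed-size blocks, each block is hashed, and the hashes are intersected with a known-content hash list. The block size and SHA-256 choice below are assumptions for illustration, not details from the paper:

```python
import hashlib

BLOCK_SIZE = 4096  # bytes per block; the size is an assumption, not from the paper

def block_hashes(data, block_size=BLOCK_SIZE):
    """Split a byte stream into fixed-size blocks and SHA-256 hash each one."""
    return {hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)}

def matches_known_list(evidence, known_hashes):
    """Return the evidence block hashes that appear in a known-content hash list."""
    return block_hashes(evidence) & known_hashes

known = block_hashes(b"A" * 8192)        # hash list built from known content
image = b"A" * 4096 + b"B" * 4096        # evidence image containing one known block
hits = matches_known_list(image, known)  # exactly one matching block hash
```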
583 Multilevel Classifiers in Recognition of Handwritten Kannada Numerals
Authors: Dinesh Acharya U., N. V. Subba Reddy, Krishnamoorthi Makkithaya
Abstract:
The recognition of handwritten numerals is an important area of research because of its applications in post offices, banks and other organizations. This paper presents automatic recognition of handwritten Kannada numerals based on structural features. Five different types of features, namely profile-based 10-segment string, water reservoir, vertical and horizontal strokes, end points, and average boundary length from the minimal bounding box, are used in the recognition of the numerals. The effect of each feature and their combinations on numeral classification is analyzed using nearest neighbor classifiers. It is common to combine multiple categories of features into a single feature vector for classification. Instead, separate classifiers can be used to classify based on each visual feature individually, and the final classification can be obtained from the combination of the separate base classification results. One popular approach is to combine the classifier results into a feature vector and leave the decision to a next-level classifier. This method is extended here to extract richer information, a possibility distribution, from the base classifiers for resolving conflicts among the classification results. We use the fuzzy k-Nearest Neighbor (fuzzy k-NN) as the base classifier for the individual feature sets, whose results together form the feature vector for the final k-Nearest Neighbor (k-NN) classifier. Testing is done, using the different features individually and in combination, on a database containing 1600 samples of different numerals, and the results are compared with those of different existing methods.
Keywords: Fuzzy k Nearest Neighbor, Multiple Classifiers, Numeral Recognition, Structural features.
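The two-level scheme, base classifiers per feature set feeding a final k-NN, can be sketched as below. The data are synthetic stand-ins (not Kannada numeral features), and each "base classifier" is simplified to emitting a class-membership vector:

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Plain k-NN: majority vote among the k nearest training vectors."""
    dists = np.linalg.norm(train_x - query, axis=1)
    votes = train_y[np.argsort(dists)[:k]]
    return int(np.bincount(votes).argmax())

rng = np.random.default_rng(0)

def base_outputs(label):
    """Toy stand-in for two base classifiers: each emits a noisy 3-class
    membership vector; the vectors are concatenated into one feature vector."""
    centre = np.eye(3)[label]
    return np.concatenate([centre + 0.1 * rng.normal(size=3) for _ in range(2)])

# Synthetic training set of concatenated base-classifier outputs.
train_y = np.array([0, 1, 2] * 5)
train_x = np.array([base_outputs(y) for y in train_y])

# Final-level k-NN decides from the combined base outputs.
pred = knn_predict(train_x, train_y, base_outputs(1))
```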
582 Non-Burn Treatment of Health Care Risk Waste
Authors: Jefrey Pilusa, Tumisang Seodigeng
Abstract:
This research discusses a South African case study on the potential of utilizing refuse-derived fuel (RDF) obtained from non-burn treatment of health care risk waste (HCRW) as feedstock for green energy production. This waste stream can be destroyed via non-burn treatment technology involving high-speed mechanical shredding followed by steam or chemical injection to disinfect the final product. The RDF obtained from this process is characterised by low moisture, low ash, and high calorific value, which means it can potentially be used as a high-value solid fuel. Although the raw feed is classified as hazardous, the final RDF has been reported to be non-infectious and can be blended with other combustible wastes, such as rubber and plastic, for waste-to-energy applications. This study evaluated non-burn treatment technology as a possible solution for on-site destruction of HCRW in South African private and public health care centres. Waste generation quantities were estimated based on the number of registered patient beds and theoretical bed occupancy. A time and motion study was conducted to evaluate the logistical viability of on-site treatment. Non-burn treatment technology for HCRW is a promising option for South Africa; successful implementation depends upon the initial capital investment, operational cost and environmental permitting of such technology, along with other influencing factors such as the size of the waste stream, the product off-take price and product demand.
Keywords: Autoclave, disposal, fuel, incineration, medical waste.
581 Modal Analysis of Machine Tool Column Using Finite Element Method
Authors: Migbar Assefa
Abstract:
The performance of a machine tool is ultimately assessed by its ability to produce a component of the required geometry in minimum time and at small operating cost. It is customary to base the structural design of any machine tool primarily upon the requirements of static rigidity and minimum natural frequency of vibration. The operating properties of machines, such as cutting speed, feed and depth of cut, as well as the size of the workpiece, also have to be kept in mind by a machine tool structural designer. This paper presents a novel approach to the design of a machine tool column for static and dynamic rigidity requirements. Model evaluation is done effectively through the general finite element analysis software ANSYS. Studies on the machine tool column are used to illustrate the finite-element-based concept evaluation technique. The paper presents results from computations on thin-walled box-type columns subjected to torsional and bending loads in the static analysis, as well as results from modal analysis. The columns analyzed are square- and rectangle-based tapered open columns, columns with cover plates, and columns with horizontal partitions and apertures. In total, 70 columns were analyzed for bending, torsional and modal behaviour. It is observed that the orientation and aspect ratio of apertures have no significant effect on the static and dynamic rigidity of the machine tool structure.
Keywords: Finite Element Modeling, Modal Analysis, Machine tool structure, Static Analysis.
580 Identifying the Barriers behind the Lack of Six Sigma Use in Libyan Manufacturing Companies
Authors: Osama Elgadi, Martin Birkett, Wai Ming Cheung
Abstract:
This paper investigates the barriers behind the underutilisation of six sigma in Libyan manufacturing companies (LMCs). A mixed-method methodology is proposed, starting with interviews to collect qualitative data, followed by the development of a questionnaire to obtain quantitative data. The focus of this paper is on discussing the findings of the interview stage and how these can be used to further develop the questionnaire stage. The interview results showed that only four key barriers were encountered by LMCs. In descending order of importance, these were: “Lack of top management commitment”, “Lack of training”, “Lack of knowledge about six sigma”, and “Culture effect”. The findings also showed that some barriers found in previous studies of six sigma implementation were not considered barriers by LMCs but can, in fact, be considered success factors or enablers for six sigma adoption. These factors were identified as: “sufficiency of time and financial resources”; “unsatisfied customers”; “good communication between all departments in the company”; and “certainty about its results and benefits to our company and unhappiness with the current quality system”. These results suggest that LMCs face fewer barriers to adopting six sigma than many well-established global companies operating in other countries, and they could take advantage of these success factors by developing and implementing a six sigma framework to improve their product quality and competitiveness.
Keywords: Six sigma, barriers, Libyan manufacturing companies, interview.
579 Simulation Based VLSI Implementation of Fast Efficient Lossless Image Compression System Using Adjusted Binary Code & Golumb Rice Code
Authors: N. Muthukumaran, R. Ravi
Abstract:
A simulation-based VLSI (Very Large Scale Integration) implementation of the FELICS (Fast Efficient Lossless Image Compression System) algorithm is proposed to provide lossless image compression, reducing image size without losing image quality. FELICS uses a simplified adjusted binary code for image compression; the compressed image is processed pixel by pixel and then implemented in the VLSI domain. These choices are used to achieve high processing speed while minimizing area and power. The simplified adjusted binary code reduces the number of arithmetic operations and achieves high processing speed. A color-difference preprocessing step is also proposed to improve coding efficiency with simple arithmetic operations. The VLSI-based FELICS algorithm provides an effective hardware architecture with a regular pipelined data flow exploiting parallelism in four stages. With two-level parallelism, consecutive pixels are classified into even and odd samples, and an individual hardware engine is dedicated to each. This method can be further enhanced by multilevel parallelism.
Keywords: Image compression, Pixel, Compression Ratio, Adjusted Binary code, Golumb Rice code, High Definition display, VLSI Implementation.
578 Cascaded ANN for Evaluation of Frequency and Air-gap Voltage of Self-Excited Induction Generator
Authors: Raja Singh Khela, R. K. Bansal, K. S. Sandhu, A. K. Goel
Abstract:
Self-Excited Induction Generator (SEIG) builds up voltage while it enters in its magnetic saturation region. Due to non-linear magnetic characteristics, the performance analysis of SEIG involves cumbersome mathematical computations. The dependence of air-gap voltage on saturated magnetizing reactance can only be established at rated frequency by conducting a laboratory test commonly known as synchronous run test. But, there is no laboratory method to determine saturated magnetizing reactance and air-gap voltage of SEIG at varying speed, terminal capacitance and other loading conditions. For overall analysis of SEIG, prior information of magnetizing reactance, generated frequency and air-gap voltage is essentially required. Thus, analytical methods are the only alternative to determine these variables. Non-existence of direct mathematical relationship of these variables for different terminal conditions has forced the researchers to evolve new computational techniques. Artificial Neural Networks (ANNs) are very useful for solution of such complex problems, as they do not require any a priori information about the system. In this paper, an attempt is made to use cascaded neural networks to first determine the generated frequency and magnetizing reactance with varying terminal conditions and then air-gap voltage of SEIG. The results obtained from the ANN model are used to evaluate the overall performance of SEIG and are found to be in good agreement with experimental results. Hence, it is concluded that analysis of SEIG can be carried out effectively using ANNs.
Keywords: Self-Excited Induction Generator, Artificial Neural Networks, Exciting Capacitance, Saturated magnetizing reactance.
577 Pilot Trial of Evidence-Based Integrative Group Therapy to Improve Executive Functioning among Adults: Implications for Community Mental Health and Training Clinics
Authors: B. Parchem, M. Watanabe, D. Modrakovic, L. Mathew, A. Franklin, M. Cao, R. E. Broudy
Abstract:
Objective: Executive functioning (EF) deficits underlie several mental health diagnoses, including ADHD, anxiety, and depression. Community mental health clinics face extensive waitlists for services, with many referrals involving EF deficits. A pilot trial of a four-week group therapy was developed using key components of Cognitive-Behavioral Therapy (CBT), Dialectical Behavior Therapy (DBT), and mindfulness, with the aim of improving EF skills and offering low-fee services. Method: Eight adults (M = 34.5) waiting for services at a community clinic were enrolled in a four-week group therapy at an in-house training clinic for doctoral trainees. Baseline EF, pre-/post-intervention ADHD and distress symptoms, group satisfaction, and curriculum helpfulness were assessed. Results: Downward trends in ADHD and distress symptoms pre/post-intervention were not significant. Favorable responses on group satisfaction and helpfulness suggest clinical utility. Conclusion: Preliminary pilot data suggest that a brief group therapy to improve EF may be an efficacious, acceptable, and feasible intervention for adults waiting for services at community mental health and training clinics, where demand is high and services and staff are limited.
Keywords: Executive functioning, cognitive-behavioral therapy, dialectical behavior therapy, mindfulness, adult group therapy.
576 Fermentation of Germinated Native Black Rice Milk Mixture by Probiotic Lactic Acid Bacteria
Authors: N. Mongkontanawat
Abstract:
This research aimed to demonstrate fermentation of germinated native black rice juice by probiotic lactic acid bacteria (Lactobacillus casei TISTR 390). Germinated native black rice juice was inoculated with a 24-h-old lactic culture and incubated at 30 °C for 72 hours. Changes in pH, acidity, total soluble solids, and viable cell counts during fermentation under controlled conditions were evaluated at 0, 24, 48, and 72 hours. The study found that the pH and total soluble solids of the probiotic germinated black rice juice significantly (p ≤ 0.05) decreased by 72 h of fermentation (from 5.67±0.12 to 2.86±0.04, and from 7.00±0.00 to 6.40±0.00 °brix, respectively). On the other hand, the titratable acidity, expressed as lactic acid, and the viable cell count significantly (p ≤ 0.05) increased by 72 h of fermentation (from 0.11±0.06 to 0.43±0.06% lactic acid, and from 3.60 × 10^6 to 2.75 × 10^8 CFU/ml, respectively). Interestingly, the amount of γ-aminobutyric acid (GABA) was significantly (p ≤ 0.05) higher than in the control group, by a factor of two (0.25±0.01 vs. 0.13±0.01 mg/100 g). In addition, the free radical scavenging capacity assayed by the DPPH method showed IC50 values significantly (p ≤ 0.05) different from the control (147.71±0.96 vs. 202.55±1.24 mg/ml). After 4 weeks of cold storage at 4 °C, the viable cell count of lactic acid bacteria decreased to 1.37 × 10^6 CFU/ml. In conclusion, fermented germinated native black rice juice could serve as a healthy beverage for vegans and people who are allergic to cow milk products.
Keywords: Germinated native black rice, probiotic, lactic acid bacteria, Lactobacillus casei.
575 Designing Social Care Plans Considering Cause-Effect Relationships: A Study in Scotland
Authors: Sotirios N. Raptis
Abstract:
The paper links social needs to social classes through the creation of cohorts of public services, matching some services as causes to others as effects using cause-effect (CE) models. It then compares these associations with those obtained from typical regression methods (LR, ARMA). The paper discusses such public service groupings offered in Scotland over the long term, to estimate the risk of multiple causes or effects and ultimately reduce healthcare cost by linking subsequent services to their likely causes. The same generic goal can be achieved using LR or ARMA, and the differences are discussed. The work uses Health and Social Care (H&Sc) public services data from 11 service packs offered by Public Health Scotland (PHS) that boil down to 110 single-attribute year series, called 'factors'. The study took place at Macmillan Cancer Support, UK and Abertay University, Dundee, from 2020 to 2023. The paper discusses CE relationships as the main method and compares sample findings with Linear Regression (LR) and ARMA to see how the services are linked. Relationships were found, as CE models, between smoking-related healthcare provision, mental-health-related services, and epidemiological weight in Primary-1-Education Body-Mass-Index (BMI) in children. Insurance companies and public policymakers can pack CE-linked services into long-term plans, such as those for the elderly or low-income people. The linkage of services was confirmed, allowing more accurate resource planning.
Keywords: Probability, regression, cause-effect cohorts, data frames, services, prediction.
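The regression baseline (LR) used for comparison can be illustrated with a toy pair of year series: a "cause" factor and an "effect" factor that follows it with a one-year lag, with ordinary least squares recovering the lag coefficient. The series below are synthetic, not PHS data:

```python
import numpy as np

# Illustrative year series: a "cause" factor and an "effect" factor that
# follows it with a one-year lag (effect[t] = 2 * cause[t-1] + 5).
rng = np.random.default_rng(1)
cause = np.linspace(10, 30, 12) + rng.normal(0, 0.5, 12)
effect = 2.0 * np.roll(cause, 1) + 5.0
effect[0] = effect[1]  # no lagged value exists for the first year

# Fit effect[t] ~ cause[t-1] by ordinary least squares, as an LR baseline
# against which a cause-effect model could be compared.
X = np.column_stack([np.ones(11), cause[:-1]])
beta, *_ = np.linalg.lstsq(X, effect[1:], rcond=None)
intercept, slope = beta  # recovers the lag relationship exactly here
```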
574 RV-YOLOX: Object Detection on Inland Waterways Based on Optimized YOLOX through Fusion of Vision and 3+1D Millimeter Wave Radar
Authors: Zixian Zhang, Shanliang Yao, Zile Huang, Zhaodong Wu, Xiaohui Zhu, Yong Yue, Jieming Ma
Abstract:
Unmanned Surface Vehicles (USVs) hold significant value for their capacity to undertake hazardous and labor-intensive operations over aquatic environments. Object detection is a significant task in these applications. Nonetheless, the efficacy of USVs in object detection is impeded by several intrinsic challenges, including the intricate dispersal of obstacles, reflections from coastal structures, and fog over water surfaces, among others. To address these problems, this paper provides a fusion method for USVs to effectively detect objects in inland surface environments, utilizing vision sensors and 3+1D millimeter-wave (MMW) radar. The MMW radar is a complementary tool to vision sensors, offering reliable environmental data. The approach converts the radar's 3D point cloud into a 2D radar pseudo-image, thereby standardizing the format of the radar and vision data by leveraging a point transformer. Furthermore, the paper proposes a multi-source object detection network, named RV-YOLOX, which leverages radar-vision integration specifically tailored for inland waterway environments. The performance is evaluated on our self-recorded waterways dataset. Compared with the YOLOX network, our fusion network significantly improves detection accuracy, especially for objects under poor lighting conditions.
Keywords: Inland waterways, object detection, YOLO, sensor fusion, self-attention, deep learning.
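The conversion of a 3D radar point cloud into a 2D pseudo-image can be approximated, in its simplest form, by dropping the height axis and binning returns into a bird's-eye-view grid. The paper uses a point transformer; the histogram below is only a simplified stand-in with assumed grid dimensions:

```python
import numpy as np

def radar_pseudo_image(points_xyz, grid=(32, 32), extent=20.0):
    """Flatten a 3D radar point cloud into a 2D occupancy grid (bird's-eye view).

    Simplified stand-in for a learned point-cloud-to-image conversion:
    x/y coordinates are binned into a fixed grid and the z axis is dropped."""
    img, _, _ = np.histogram2d(points_xyz[:, 0], points_xyz[:, 1],
                               bins=grid, range=[[0, extent], [0, extent]])
    return img

# Synthetic cloud: a cluster of 50 returns from one obstacle at (5 m, 5 m).
pts = np.column_stack([np.full(50, 5.0), np.full(50, 5.0), np.zeros(50)])
img = radar_pseudo_image(pts)  # all returns land in a single cell
```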
573 Optimal Efficiency Control of Pulse Width Modulation-Inverter Fed Motor Pump Drive Using Neural Network
Authors: O. S. Ebrahim, M. A. Badr, A. S. Elgendy, K. O. Shawky, P. K. Jain
Abstract:
This paper demonstrates an improved Loss Model Control (LMC) for a three-phase induction motor (IM) driving a pump load. Compared with other power loss reduction algorithms for IMs, the presented one has the advantages of fast and smooth flux adaptation, high accuracy, and versatile implementation. The performance of LMC depends mainly on the accuracy of modeling the motor drive and its losses. A loss model for the IM drive has been developed that considers the surplus power loss caused by inverter voltage harmonics using closed-form equations and also includes magnetic saturation. Further, an Artificial Neural Network (ANN) controller is synthesized and trained offline to determine the optimal flux level that achieves maximum drive efficiency. The drive's voltage and speed control loops are connected via the stator frequency to avoid the possibility of excessive magnetization. In addition, the resistance change due to temperature is considered by a first-order thermal model; the resulting thermal information enhances motor protection and control. Together, these features have the potential to make the proposed algorithm reliable. Simulation and experimental studies are performed on a 5.5 kW test motor using the proposed control method. The test results are provided and compared with fixed-flux operation to validate the effectiveness.
Keywords: Artificial neural network, ANN, efficiency optimization, induction motor, IM, Pulse Width Modulated, PWM, harmonic losses.
572 An Evaluation of Carbon Dioxide Emissions Trading among Enterprises: The Tokyo Cap and Trade Program
Authors: Hiroki Satou, Kayoko Yamamoto
Abstract:
This study aims to propose three evaluation methods for the Tokyo Cap and Trade Program when emissions trading is performed virtually among enterprises, focusing on carbon dioxide (CO2), the only emitted greenhouse gas that tends to increase. The first method clarifies the optimum reduction rate for the highest cost benefit, the second discusses emissions trading among enterprises through market trading, and the third verifies long-term emissions trading during the term of the plan (2010-2019), checking the validity of emissions trading partly using Geographic Information Systems (GIS). The findings of this study can be summarized in the following three points. 1. Since the total cost benefit is greatest at a 44% reduction rate, the rate can be set higher than that of the Tokyo Cap and Trade Program to obtain a greater total cost benefit. 2. At a 44% reduction rate, among 320 enterprises, 8 purchasing enterprises and 245 sales enterprises gain profits from emissions trading, and 67 enterprises perform voluntary reduction without conducting emissions trading. Therefore, to further promote emissions trading, it is necessary to increase the sales volume of emissions trading by increasing the number of purchasing enterprises in addition to sales enterprises. 3. Compared to short-term emissions trading, few enterprises benefit in each year under the long-term emissions trading of the Tokyo Cap and Trade Program; at most, only 81 enterprises can gain profits from emissions trading in FY 2019. Therefore, by setting the reduction rate higher, it is necessary to increase the number of enterprises that participate in emissions trading and benefit from the restraint of CO2 emissions.
Keywords: Emissions trading, Tokyo Cap and Trade Program, carbon dioxide (CO2), global warming, Geographic Information Systems (GIS)
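The first evaluation method — locating the reduction rate with the highest cost benefit — can be sketched with a toy net-benefit curve. Both coefficients below are arbitrary choices made so the toy optimum lands near the paper's 44%; the study's real computation uses the enterprises' actual abatement data.

```python
import numpy as np

def net_benefit(rate, benefit_coeff=0.88, cost_coeff=1.0):
    """Toy model: benefit grows linearly with the reduction rate,
    abatement cost grows quadratically."""
    return benefit_coeff * rate - cost_coeff * rate**2

def best_reduction_rate(benefit_coeff=0.88, cost_coeff=1.0):
    """Scan reduction rates 0%..100% and return the net-benefit maximiser."""
    rates = np.linspace(0.0, 1.0, 1001)   # 0.1% steps
    return rates[np.argmax(net_benefit(rates, benefit_coeff, cost_coeff))]
```

With these assumed coefficients the analytic optimum is benefit_coeff / (2 * cost_coeff) = 0.44, i.e. a 44% reduction rate.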
571 Study on Compressive Strength and Setting Times of Fly Ash Concrete after Slump Recovery Using Superplasticizer
Authors: Chaiyakrit Raoupatham, Ram Hari Dhakal, Chalermchai Wanichlamlert
Abstract:
Slump is one of the dynamic properties of fresh concrete and is designed to be compatible with the placing method. Due to the hydration reaction of cement, the slump of concrete is lost over time; delayed concrete may therefore be rejected because its slump has become unacceptable. To recover the slump of delayed concrete, a second dose of superplasticizer (naphthalene-based, type F) is added into the system; slump recovery is possible as long as the concrete has not set. However, adding a second dose of superplasticizer to recover otherwise unusable, slump-lost concrete may affect other concrete properties. This paper therefore examines the setting times and compressive strength of concrete after re-dosing with a type F chemical admixture (naphthalene-based superplasticizer) for slump recovery. The concrete used in this study was fly ash concrete with fly ash replacement of 0%, 30% and 50%, respectively. The concrete mix designed for the test specimens was prepared with a paste content (ratio of volume of cement to volume of void in the aggregate) of 1.2 and 1.3, a water-to-binder ratio (w/b) in the range of 0.3 to 0.58, and an initial superplasticizer (SP) dose ranging from 0.5 to 1.6%. The setting times of the concrete were tested both before and after re-dosing, with different amounts and timings of the second dose. The research concluded that the addition of a second dose of superplasticizer increases both the initial and final setting times according to the dosage added. For fly ash concrete, the prolongation effect grew as the fly ash replacement increased, reaching a maximum of about 4 hours. In the case of compressive strength, the re-dosed concrete showed strength fluctuation within the acceptable range of ±10%.
Keywords: Compressive strength, fly ash concrete, second dose of superplasticizer, slump recovery, setting times
570 Packet Forwarding with Multiprotocol Label Switching
Authors: R. N. Pise, S. A. Kulkarni, R. V. Pawar
Abstract:
MultiProtocol Label Switching (MPLS) is an emerging technology that aims to address many of the existing issues associated with packet forwarding in today's internetworking environment. It provides a method of forwarding packets at a high rate of speed by combining the speed and performance of Layer 2 with the scalability and IP intelligence of Layer 3. In a traditional IP (Internet Protocol) routing network, a router analyzes the destination IP address contained in the packet header. The router independently determines the next hop for the packet using the destination IP address and the interior gateway protocol. This process is repeated at each hop to deliver the packet to its final destination. In contrast, in the MPLS forwarding paradigm, routers on the edge of the network (label edge routers) attach labels to packets based on the Forwarding Equivalence Class (FEC). Packets are then forwarded through the MPLS domain based on their associated FECs, through swapping of the labels by routers in the core of the network called label switch routers. Simply swapping the label, instead of referencing the IP header of the packet against the routing table at each hop, provides a more efficient manner of forwarding packets, which in turn allows traffic to be forwarded at tremendous speeds and gives granular control over the path taken by a packet. This paper deals with the MPLS forwarding mechanism, the implementation of the MPLS datapath, and test results showing the performance comparison of MPLS and IP routing. The discussion focuses primarily on MPLS IP packet networks – by far the most common application of MPLS today.
Keywords: Forwarding equivalence class, incoming label map, label, next hop label forwarding entry
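The push/swap/pop forwarding path described above can be sketched as a toy lookup simulation; the router names, label values, and FEC prefix are all invented for illustration:

```python
# Minimal MPLS forwarding sketch: the ingress LER classifies a packet into a
# FEC and pushes a label; each core LSR swaps the label via its incoming
# label map (ILM); the packet leaves the domain at the egress LER.

FEC_TO_LABEL = {"10.1.0.0/16": 100}            # ingress FEC -> label binding

ILM = {                                        # per-router: in-label -> (out-label, next hop)
    "LSR-A": {100: (200, "LSR-B")},
    "LSR-B": {200: (300, "LER-egress")},
}

def forward(fec):
    """Return the (router, label) hops a packet of this FEC traverses."""
    label = FEC_TO_LABEL[fec]                  # label pushed at ingress
    router, path = "LSR-A", []
    while router in ILM and label in ILM[router]:
        path.append((router, label))
        label, router = ILM[router][label]     # swap label, move to next hop
    path.append((router, label))               # egress pops the label here
    return path
```

Note that no IP longest-prefix match happens after the ingress classification: each core hop is a single exact-match table lookup, which is the efficiency argument the abstract makes.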
569 Simulating Dynamics of Thoracolumbar Spine Derived from LifeMOD under Haptic Forces
Authors: K. T. Huynh, I. Gibson, W. F. Lu, B. N. Jagdish
Abstract:
In this paper, the construction of a detailed spine model is presented using the LifeMOD Biomechanics Modeler. The detailed spine model is obtained by refining the spine segments in the cervical, thoracic and lumbar regions into individual vertebra segments, using bushing elements to represent the intervertebral discs, and building various ligamentous soft tissues between vertebrae. In the sagittal plane of the spine, a constant force is applied from posterior to anterior during simulation to determine the dynamic characteristics of the spine, and the force magnitude is gradually increased in subsequent simulations. Based on these recorded dynamic properties, graphs of displacement-force relationships are established as polynomial functions using the least-squares method and imported into a haptic integrated graphic environment. A thoracolumbar spine model with the complex geometry of the vertebrae, digitized from a resin spine prototype, is utilized in this environment. Using the haptic technique, surgeons can touch as well as apply forces to the spine model through haptic devices to observe the locomotion of the spine, which is computed from the displacement-force relationship graphs. This study provides a preliminary picture of our ongoing work towards building and simulating bio-fidelic scoliotic spine models in a haptic integrated graphic environment whose dynamic properties are obtained from LifeMOD. These models can help surgeons examine the kinematic behavior of scoliotic spines and propose possible surgical plans before spine correction operations.
Keywords: Haptic interface, LifeMOD, spine modeling
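The condensation of recorded simulation data into displacement-force polynomials can be sketched with NumPy's least-squares fit; the data points below are synthetic placeholders, not LifeMOD output:

```python
import numpy as np

# Synthetic posterior-to-anterior force levels and recorded displacements.
force = np.array([0.0, 10.0, 20.0, 30.0, 40.0])        # N (placeholder)
displacement = np.array([0.0, 1.1, 2.5, 4.2, 6.3])     # mm (placeholder)

# Least-squares quadratic fit: the cheap polynomial the haptic loop evaluates
# instead of re-running the multibody simulation.
coeffs = np.polyfit(force, displacement, deg=2)
predict = np.poly1d(coeffs)                            # displacement(force)
```

At haptic update rates (commonly ~1 kHz), evaluating `predict(f)` is trivially fast, which is the point of precomputing the relationship offline.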
568 User Pattern Learning Algorithm Based MDSS (Medical Decision Support System) Framework under Ubiquitous
Authors: Insung Jung, Gi-Nam Wang
Abstract:
In this paper, we present a user pattern learning algorithm based MDSS (Medical Decision Support System) for a ubiquitous environment. Most research focuses on hardware systems, hospital management, and the overall concept of the ubiquitous environment, even though these are hard to implement. The objective of this paper is to design an MDSS framework that helps patients with medical treatment and with prevention for high-risk patients (COPD, heart disease, diabetes). The framework consists of a database, a CAD (computer-aided diagnosis support system) and a CAP (computer-aided user vital sign prediction system). It can be applied to develop user pattern learning algorithm based MDSS for homecare and silver town services. In particular, the CAD has wise decision-making competency: it compares the current vital sign with the user's normal-condition pattern data. In addition, the CAP computes a user vital sign prediction using the patient's past data. The novel approach uses a neural network method, wireless vital sign acquisition devices, and a personal computer DB system. An intelligent agent based MDSS will help elderly people and high-risk patients to prevent sudden death and disease, give the physician online access to patients' data, and plan medication service priority (e.g., emergency cases).
Keywords: Neural network, U-healthcare, MDSS, CAP, DSS
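The CAD's comparison of a current vital sign against the user's own normal-condition pattern can be reduced to a simple deviation check. The z-score rule and the threshold below are assumptions for illustration; the paper's CAD uses a neural network rather than this statistic.

```python
import statistics

def deviation_alert(current, normal_history, z_threshold=2.5):
    """Flag a vital sign that strays too far from the patient's own
    normal-condition pattern (z-score against personal history; the
    2.5-sigma threshold is an assumed value)."""
    mean = statistics.mean(normal_history)
    stdev = statistics.stdev(normal_history)
    z = (current - mean) / stdev
    return abs(z) > z_threshold
```

Because the baseline is the patient's own history rather than a population norm, the same reading can be normal for one user and alarming for another — the personalization the framework is built around.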
567 Analysis of the Fire Hazard Posed by Petrol Stations in Stellenbosch and the Degree of Risk Acknowledgement in Land-Use Planning
Authors: K. Qonono
Abstract:
Despite the significance and economic benefits of petrol stations in South Africa, they still pose a substantial risk of fire and explosion that threatens public safety. This research paper examines the extent to which land-use planning in Stellenbosch, South Africa, considers the fire risk posed by petrol stations, and the implications for public safety as well as preparedness for large fires or explosions. To achieve this, the research identified the land-use types around petrol stations in Stellenbosch and determined the extent to which their locations comply with local, national, and international land-use planning regulations. A mixed research method consisting of the collection and analysis of geospatial data and qualitative data was applied, with petrol stations within a six-kilometre radius of Stellenbosch's town centre utilised as study sites. The research examined the risk of fires/explosions at these petrol stations and investigated Stellenbosch Municipality's institutional preparedness to respond in the event of a fire/explosion at them. The research observed that the siting of petrol stations does not comply with local, national, and international good practice, thus exposing the surrounding developments to fires and explosions; land-use planning practice does not consider the hazards created by petrol stations. Despite the potential for major fires at petrol stations, Stellenbosch Municipality's level of preparedness to respond to petrol station fires appears low due to the prioritisation of more frequent events.
Keywords: Petrol stations, technological hazard, DRR, land-use planning, risk analysis.
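A basic building block of the geospatial compliance check — whether a land use sits closer to a petrol station than a regulated separation distance — can be sketched as follows. The 50 m figure is an assumed placeholder, not a value taken from any specific regulation cited in the study.

```python
import math

MIN_SEPARATION_M = 50.0   # assumed placeholder; real values come from
                          # local/national siting regulations

def violates_separation(station_xy, landuse_xy, min_sep=MIN_SEPARATION_M):
    """Flag a sensitive land use sited closer to a petrol station than the
    separation distance (planar projected coordinates, metres)."""
    return math.dist(station_xy, landuse_xy) < min_sep
```

In a GIS workflow this is normally done with buffer-and-intersect operations over projected layers; the point check above is the underlying geometry.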
566 Modified Energy and Link Failure Recovery Routing Algorithm for Wireless Sensor Network
Authors: M. Jayekumar, V. Nagarajan
Abstract:
Wireless sensor networks find roles in environmental monitoring, industrial applications, surveillance, health monitoring and other supervisory applications. Sensing devices form the basic operational unit of the network; each is self-powered by a battery with a limited lifetime. A sensor node spends its limited energy on transmission, reception, routing and sensing. Frequent energy use for these processes degrades network lifetime. To enhance energy efficiency and network lifetime, we propose a modified energy optimization and post-failure node recovery method, the Energy-Link Failure Recovery Routing (E-LFRR) algorithm. In E-LFRR, two phases, a Monitored Transmission phase and a Replaced Transmission phase, are devised to combat worst-case link failure conditions. In the Monitored Transmission phase, the Actuator Node monitors and identifies suitable nodes for shortest-path transmission. The Replaced Transmission phase dispatches the energy-draining node from the active link at an early stage and replaces it with a new node that has sufficient energy. Simulation results illustrate that this combined methodology reduces overhead, energy consumption and delay, and maintains a considerable number of alive nodes, thereby enhancing network performance.
Keywords: Actuator node, energy efficient routing, energy hole, link failure recovery, link utilization, wireless sensor network.
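The Replaced Transmission phase — retiring an energy-draining node from the active link before it fails — can be sketched as follows. Energy units, the threshold, and the node names are assumptions for illustration; standby nodes are assumed to already have sufficient energy.

```python
def replace_draining_nodes(path, energy, threshold=0.2, standby=None):
    """Swap any node on the active path whose residual energy falls below a
    threshold for the next standby node (pre-failure replacement, as in the
    Replaced Transmission phase; units and threshold are assumed)."""
    standby = list(standby or [])
    new_path = []
    for node in path:
        if energy[node] < threshold and standby:
            node = standby.pop(0)   # bring in a fresh node before failure
        new_path.append(node)
    return new_path
```

Replacing the node early, rather than rerouting after a link break, is what lets the scheme avoid the delay and overhead of reactive recovery.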
565 Localizing and Recognizing Integral Pitches of Cheque Document Images
Authors: Bremananth R., Veerabadran C. S., Andy W. H. Khong
Abstract:
Automatic reading of handwritten cheques is a computationally complex process, and it plays an important role in financial risk management. Machine vision and learning provide a viable solution to this problem. Research effort has mostly been focused on recognizing diverse pitches of cheques and demand drafts with an identical outline. However, most of these methods employ template matching to localize the pitches, and such schemes could potentially fail when applied to the different types of outline maintained by each bank. In this paper, the so-called outline problem is resolved by a cheque information tree (CIT), which generalizes the localizing method to extract active regions of entities. In addition, a weight based density plot (WBDP) is performed to isolate text entities and read complete pitches. Recognition is based on texture features using neural classifiers. The legal amount is subsequently recognized by both texture and perceptual features. A post-processing phase is invoked to detect incorrect readings by Type-2 grammar using the Turing machine. The performance of the proposed system was evaluated using cheques and demand drafts of 22 different banks. The test data consist of a collection of 1540 leaves obtained from 10 different account holders from each bank. Results show that this approach can easily be deployed without significant design amendments.
Keywords: Cheque reading, connectivity checking, text localization, texture analysis, Turing machine, signature verification
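The text-isolation step can be illustrated with a horizontal projection profile, a simplified stand-in for the weight based density plot (the density threshold and the toy image are assumptions; the paper's WBDP weights pixels rather than simply counting them):

```python
import numpy as np

def text_row_bands(binary_image, min_density=1):
    """Group image rows whose ink-pixel count reaches a threshold into text
    bands (start_row, end_row) - a projection-profile simplification of the
    weight based density plot idea."""
    profile = binary_image.sum(axis=1)   # ink pixels per row
    bands, start = [], None
    for r, density in enumerate(profile):
        if density >= min_density and start is None:
            start = r                    # band opens
        elif density < min_density and start is not None:
            bands.append((start, r - 1))  # band closes
            start = None
    if start is not None:
        bands.append((start, len(profile) - 1))
    return bands
```

Each band is then a candidate text entity handed to the recognizer; a vertical profile within a band would split it into words or fields.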
564 Waste-Based Surface Modification to Enhance Corrosion Resistance of Aluminium Bronze Alloy
Authors: Wilson Handoko, Farshid Pahlevani, Isha Singla, Himanish Kumar, Veena Sahajwalla
Abstract:
Aluminium bronze alloys are well known for their superior abrasion resistance, tensile strength and non-magnetic properties, owing to the co-presence of iron (Fe) and aluminium (Al) as alloying elements, and have been commonly used in many industrial applications. However, continuous exposure to the marine environment accelerates the risk of failure of Al bronze alloy parts. Although a higher level of corrosion resistance can be achieved by modifying the elemental composition, this comes at a price: the manufacturing process becomes more complex, and the ductility of the Al bronze alloy may be reduced. In this research, ironmaking slag and waste plastic were used as the input source for surface modification of an Al bronze alloy. Microstructural analysis was conducted using polarised light microscopy and scanning electron microscopy (SEM) equipped with energy dispersive spectroscopy (EDS). An electrochemical corrosion test was carried out through the Tafel polarisation method, and the protection efficiency relative to the base material was calculated. Results indicate that the uniform modified surface, the result of a selective diffusion process, enhanced corrosion resistance by up to 12.67%. This approach opens a new opportunity for various industrial utilisations at commercial scale by minimising the dependency on natural resources, transforming waste sources into a protective coating in environmentally friendly and cost-effective ways.
Keywords: Aluminium bronze, waste-based surface modification, Tafel polarisation, corrosion resistance.
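The protection efficiency against the base material is conventionally computed from the corrosion current densities extracted from the Tafel plots; a one-line sketch (the current values in the usage note are hypothetical, chosen only to land in the range the abstract reports):

```python
def protection_efficiency(i_corr_base, i_corr_modified):
    """Protection efficiency from Tafel corrosion current densities:
    PE% = (i_base - i_modified) / i_base * 100."""
    return (i_corr_base - i_corr_modified) / i_corr_base * 100.0
```

For example, hypothetical currents of 10.0 uA/cm2 (base) and 8.733 uA/cm2 (modified) give a protection efficiency of 12.67%.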
563 Molecular Identification of ESBL Genes blaGES-1, blaVEB-1, blaCTX-M, blaOXA-1, blaOXA-4, blaOXA-10 and blaPER-1 in Pseudomonas aeruginosa Strains Isolated from Burn Patients by PCR, RFLP and Sequencing Techniques
Authors: Fereshteh Shacheraghi, Mohammad Reza Shakibaie, Hanieh Noveiri
Abstract:
Forty-one strains of ESBL-producing P. aeruginosa, previously isolated from burn patients in the Kerman University general hospital, Iran, were subjected to PCR, RFLP and sequencing in order to determine the type of extended spectrum β-lactamases (ESBL), the restriction digestion pattern, and the possibility of mutation among the detected genes. DNA extraction was carried out by the phenol-chloroform method. PCR for detection of bla genes was performed using specific primers for each gene. Restriction Fragment Length Polymorphism (RFLP) for the ESBL genes was carried out using EcoRI, NheI, PvuII, EcoRV, DdeI and PstI restriction enzymes. The PCR products were subjected to direct sequencing of both strands for identification of the ESBL genes. The blaCTX-M, blaVEB-1, blaPER-1, blaGES-1, blaOXA-1, blaOXA-4 and blaOXA-10 genes were detected in 2.43% (n=1), 100% (n=41), 68.3% (n=28), 24.4% (n=10), 70.7% (n=29), 17.1% (n=7) and 92.7% (n=38) of the ESBL-producing isolates, respectively. The RFLP analysis showed that each ESBL gene has an identical digestion pattern among the isolated strains. Sequencing of the ESBL genes confirmed the genuineness of the PCR products and revealed no mutation in the restriction sites of the above genes. From the results of the present investigation it can be concluded that blaVEB-1 and blaCTX-M were the most and the least frequently isolated ESBL genes among the P. aeruginosa strains isolated from burn patients. The RFLP and sequencing analysis revealed that the same clones of the bla genes indeed existed among the antibiotic-resistant strains.
Keywords: ESBL genes, PCR, RFLP, sequencing, P. aeruginosa
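The digestion-pattern idea behind RFLP can be illustrated by locating an enzyme's recognition sites in a sequence. The toy sequence below is invented; the two recognition motifs listed are the standard published ones for EcoRI and PstI, two of the six enzymes used in the study.

```python
RESTRICTION_SITES = {       # standard recognition sequences
    "EcoRI": "GAATTC",
    "PstI": "CTGCAG",
}

def cut_positions(sequence, enzyme):
    """Return 0-based start positions of every recognition site - the site
    map that determines an RFLP digestion pattern (toy sequence below)."""
    site = RESTRICTION_SITES[enzyme]
    return [i for i in range(len(sequence) - len(site) + 1)
            if sequence[i:i + len(site)] == site]
```

Two amplicons with identical site maps yield identical fragment lengths on a gel, which is why an unchanged pattern across isolates suggests the same clone, as the abstract reports.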
562 Using Artificial Neural Network to Forecast Groundwater Depth in Union County Well
Authors: Zahra Ghadampour, Gholamreza Rakhshandehroo
Abstract:
A concern that researchers usually face in different applications of Artificial Neural Networks (ANN) is determination of the size of the effective domain in a time series. In this paper, a trial-and-error method was used on a groundwater depth time series to determine the size of the effective domain for an observation well in Union County, New Jersey, U.S. Different domains of 20, 40, 60, 80, 100 and 120 preceding days were examined, and 80 days was considered the effective length of the domain. Data sets for the different domains were fed to a feed-forward back-propagation ANN with one hidden layer, and the groundwater depths were forecasted. The Root Mean Square Error (RMSE) and the correlation factor (R2) of estimated and observed groundwater depths were determined for all domains. In general, the groundwater depth forecast improved, as evidenced by lower RMSEs and higher R2s, when the domain length increased from 20 to 120 days. However, 80 days was selected as the effective domain because the improvement was less than 1% beyond that. Forecasted groundwater depths utilizing measured daily data (set #1) and data averaged over the effective domain (set #2) were compared. It was postulated that the more accurate nature of the measured daily data was the reason for the better forecast, with a lower RMSE (0.1027 m compared to 0.255 m), in set #1. However, the size of the input data in this set was 80 times that of set #2, a factor that may increase the computational effort unpredictably. It was concluded that averaging over 80 daily data points may be successfully utilized to lower the size of the input data sets considerably, while maintaining the effective information in the data set.
Keywords: Neural networks, groundwater depth, forecast
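The trial-and-error stopping rule — grow the input window until RMSE improves by less than 1% — can be sketched as follows; the RMSE values in the test are placeholders shaped like the study's outcome (80 days selected), not the study's actual errors.

```python
def pick_effective_domain(rmse_by_domain, rel_improvement=0.01):
    """Grow the input window; stop once the relative RMSE gain over the
    previous window drops below rel_improvement (1% here, as in the study)."""
    domains = sorted(rmse_by_domain)
    chosen = domains[0]
    for prev, cur in zip(domains, domains[1:]):
        gain = (rmse_by_domain[prev] - rmse_by_domain[cur]) / rmse_by_domain[prev]
        if gain < rel_improvement:
            break                    # marginal gain - keep the smaller window
        chosen = cur
    return chosen
```

Each RMSE here would come from training the ANN once per candidate window, which is why keeping the window as small as possible matters for computational cost.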
561 Rheological and Computational Analysis of Crude Oil Transportation
Authors: Praveen Kumar, Satish Kumar, Jashanpreet Singh
Abstract:
Transportation of unrefined crude oil from the production unit to a refinery or large storage area by pipeline is difficult due to the different properties of crude in various areas. Thus, the design of a crude oil pipeline is a very complex and time-consuming process when all the various parameters are considered. Three very important parameters play a significant role in transportation and processing pipeline design: the viscosity profile, the temperature profile, and the velocity profile of waxy crude oil through the pipeline. Knowledge of rheological computational techniques is required for better understanding the flow behavior and predicting the flow profile in a crude oil pipeline. From these profile parameters, the material and the emulsion best suited for crude oil transportation can be predicted. The rheological computational fluid dynamic technique is a fast method for designing the flow profile in a crude oil pipeline with the help of computational fluid dynamics and rheological modeling. With this technique, the effect of fluid properties, including the shear rate range with temperature variation, degree of viscosity, elastic modulus and viscous modulus, was evaluated under different conditions in a transport pipeline. In this paper, two crude oil samples were used, as well as an emulsion prepared with natural and synthetic additives at concentrations ranging from 1,000 ppm to 3,000 ppm. The rheological properties were then evaluated over a temperature range of 25 to 60 °C to determine which additive was best suited for the transportation of crude oil. Commercial computational fluid dynamics (CFD) software was used to generate the flow, velocity and viscosity profiles of the emulsions for flow behavior analysis in a crude oil transportation pipeline. This rheological CFD design can be further applied in developing pipeline designs in the future.
Keywords: Natural surfactant, crude oil, rheology, CFD, viscosity.
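One common way to summarise measured shear-thinning rheology for use in a CFD model is the power-law (Ostwald-de Waele) fit, viscosity = K * shear_rate**(n - 1), performed in log space. The sample points below are synthetic, generated from assumed K = 2 and n = 0.8 rather than measured crude data; the study itself does not state which constitutive model was fitted.

```python
import numpy as np

shear_rate = np.array([1.0, 10.0, 100.0, 1000.0])   # 1/s (synthetic)
viscosity = 2.0 * shear_rate**(0.8 - 1.0)           # Pa.s, from K=2, n=0.8

# log(viscosity) = log(K) + (n - 1) * log(shear_rate): a straight line,
# so an ordinary linear least-squares fit recovers the two parameters.
slope, intercept = np.polyfit(np.log(shear_rate), np.log(viscosity), 1)
n = slope + 1.0          # flow behaviour index (n < 1: shear-thinning)
K = np.exp(intercept)    # consistency index
```

The fitted (K, n) pair is what a CFD solver's non-Newtonian viscosity model would then consume to generate the velocity and viscosity profiles.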
560 Modeling Non-Darcy Natural Convection Flow of a Micropolar Dusty Fluid with Convective Boundary Condition
Authors: F. M. Hady, A. Mahdy, R. A. Mohamed, Omima A. Abo Zaid
Abstract:
A numerical study of the effect of numerous parameters on the magnetohydrodynamic (MHD) natural convection heat and mass transfer problem of a dusty micropolar fluid in a non-Darcy porous regime is presented in the current paper. In addition, a convective boundary condition is incorporated into the micropolar dusty fluid model. The governing boundary layer equations are converted, utilizing similarity transformations, into a system of dimensionless equations convenient for numerical treatment. The resulting equations for the momentum, angular momentum, energy, and concentration of the fluid phase and dust phases, with the appropriate boundary conditions, are solved numerically by applying the fourth-order Runge-Kutta method. According to the numerical study, the magnitude of the velocity of both the fluid phase and the particle phase reduces with an increasing magnetic parameter, mass concentration of the dust particles, and Forchheimer number, while it rises with an increment in the convective parameter and Darcy number. The results also indicate that high values of the magnetic parameter, convective parameter, and Forchheimer number raise the temperature distributions, whereas deterioration occurs as the mass concentration of the dust particles and the Darcy number increase. The angular velocity shows progress when studying the effect of the magnetic parameter and microrotation parameter.
Keywords: Micropolar dusty fluid, convective heating, natural convection, MHD, porous media
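The fourth-order Runge-Kutta march used on the transformed system can be sketched generically; the boundary-layer equations themselves are not reproduced here, so a scalar test problem (y' = y) stands in for the coupled system.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y),
    the integrator used to march a first-order ODE system."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t_end, n_steps):
    """March y from t0 to t_end in n_steps equal RK4 steps."""
    t, y = t0, np.asarray(y0, dtype=float)
    h = (t_end - t0) / n_steps
    for _ in range(n_steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y
```

For the boundary-layer problem, the similarity equations are first reduced to a first-order system in the state vector, and the unknown wall gradients are typically found by a shooting iteration wrapped around an integrator like this one.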