Search results for: reduce
452 GCM Based Fuzzy Clustering to Identify Homogeneous Climatic Regions of North-East India
Authors: Arup K. Sarma, Jayshree Hazarika
Abstract:
The north-eastern part of India, which receives heavier rainfall than other parts of the subcontinent, is of particular concern with regard to climate change. High-intensity rainfall of short duration and longer dry spells, both consequences of climate change, also affect river morphology. In the present study, an attempt is made to delineate the north-eastern region of India into homogeneous clusters based on the fuzzy clustering concept and to compare the clusters obtained by conventional and nonconventional clustering methods. Clustering is adopted because the impact of climate change can be studied within a homogeneous region without much internal variation, which is helpful in studies related to water resources planning and management. Ten IMD (Indian Meteorological Department) stations, situated in various parts of the North-east, were selected for forming the clusters. The results of the Fuzzy C-Means (FCM) analysis show different clustering patterns for different conditions. From the analysis and comparison, it can be concluded that the nonconventional method using GCM data gives better results than the others. Further analysis could be carried out using daily data instead of monthly means to reduce the effect of standardization.
Keywords: Climate change, conventional and nonconventional methods of clustering, FCM analysis, homogeneous regions.
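A minimal sketch of the Fuzzy C-Means update loop described above, assuming the station records have already been standardized into a NumPy array; the number of clusters, the fuzzifier value and the random station data are illustrative assumptions, not values from the paper:
```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Basic Fuzzy C-Means. X: (n_samples, n_features); m > 1 is the fuzzifier."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)          # memberships of each sample sum to 1
    for _ in range(max_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]       # weighted cluster centres
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                  # avoid division by zero
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
        U_new = 1.0 / ratio.sum(axis=2)        # standard FCM membership update
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centres, U

# Hypothetical standardized monthly-rainfall features for 10 stations (12 columns)
rng = np.random.default_rng(1)
stations = rng.standard_normal((10, 12))
centres, U = fuzzy_c_means(stations, n_clusters=3)
print("cluster of each station:", U.argmax(axis=1))
```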
451 Can Exams Be Shortened? Using a New Empirical Approach to Test in Finance Courses
Authors: Eric S. Lee, Connie Bygrave, Jordan Mahar, Naina Garg, Suzanne Cottreau
Abstract:
Marking exams is universally detested by lecturers. Final exams in many higher education courses often last 3.0 hours. Do exams really need to be so long, and can the number of questions on them justifiably be reduced? Surprisingly few researchers have examined these questions, arguably because of the complexity and difficulty of using traditional methods. To answer these questions empirically, we used a new approach based on three key elements: an unusual variation of a true experimental design, equivalence hypothesis testing, and an expanded set of six psychometric criteria to be met by any shortened exam if it is to replace a current 3.0-hr exam (reliability, validity, justifiability, number of exam questions, correspondence, and equivalence). We compared student performance on each official 3.0-hr exam with that on five shortened exams having proportionately fewer questions (2.5, 2.0, 1.5, 1.0, and 0.5 hours) in a series of four experiments conducted in two classes in each of two finance courses (224 students in total). We found strong evidence that, in these courses, shortening final exams to 2.0 hrs was warranted on all six psychometric criteria. Shortening these exams by one hour should result in a substantial one-third reduction in the lecturer time and effort spent marking, lower student stress, and more time for students to prepare for other exams. Our approach provides a relatively simple, easy-to-use methodology that lecturers can use to examine the effect of shortening their own exams.
Keywords: Exam length, psychometric criteria, synthetic experimental designs, test length.
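Equivalence hypothesis testing of the kind mentioned above is commonly run as two one-sided tests (TOST). A minimal sketch, assuming independent groups and a pooled-variance t statistic; the score data and the equivalence margin are invented for illustration and are not the paper's values:
```python
import numpy as np
from scipy import stats

def tost_ind(x, y, low, high):
    """Two one-sided t-tests: is the mean difference (x - y) inside (low, high)?"""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    se = np.sqrt(sp2 * (1.0 / nx + 1.0 / ny))
    df = nx + ny - 2
    diff = x.mean() - y.mean()
    p_lower = stats.t.sf((diff - low) / se, df)    # H0: diff <= low
    p_upper = stats.t.cdf((diff - high) / se, df)  # H0: diff >= high
    return max(p_lower, p_upper)                   # small value -> means are equivalent

# Hypothetical scores (%) on the full 3.0-hr exam and a shortened 2.0-hr exam,
# tested against an equivalence margin of +/- 5 percentage points
rng = np.random.default_rng(42)
full = rng.normal(68, 12, 60)
short = rng.normal(67, 12, 60)
print("TOST p-value:", round(tost_ind(full, short, -5.0, 5.0), 4))
```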
450 A Comparison of Experimental Data with Monte Carlo Calculations for Optimisation of the Source-to-Detector Distance in Determining the Efficiency of a LaBr3:Ce (5%) Detector
Authors: H. Aldousari, T. Buchacher, N. M. Spyrou
Abstract:
Cerium-doped lanthanum bromide LaBr3:Ce(5%) crystals are considered to be among the most advanced scintillator materials used in PET scanning, combining a high light yield, a fast decay time and excellent energy resolution. Apart from the correct choice of scintillator, it is also important to optimise the detector geometry, not least the source-to-detector distance, in order to obtain reliable measurements and efficiency. In this study a commercially available 25 mm x 25 mm BrilLanCe™ 380 LaBr3:Ce (5%) detector was characterised in terms of its efficiency at varying source-to-detector distances. Gamma-ray spectra of 22Na, 60Co, and 137Cs were separately acquired at distances of 5, 10, 15, and 20 cm. As a result of the change in the solid angle subtended by the detector, the geometric efficiency decreased with increasing distance. High efficiencies at short distances can cause pulse pile-up, when subsequent photons are detected before previously detected events have decayed. To reduce this systematic error, the source-to-detector distance should balance efficiency against pulse pile-up suppression; otherwise, pile-up corrections become necessary at short distances. In addition to the experimental measurements, Monte Carlo simulations have been carried out for the same setup, allowing a comparison of results. The advantages and disadvantages of each approach are highlighted.
Keywords: BrilLanCe™ 380 LaBr3:Ce(5%), coincidence summing, GATE simulation, geometric efficiency.
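A short numerical sketch of the solid-angle argument made above, assuming an on-axis point source and treating the 25 mm crystal face as an ideal flat disk (that idealisation, and the use of the face diameter from the abstract, are assumptions of this sketch, not part of the measurement):
```python
import numpy as np

# Geometric efficiency of a circular detector face for an on-axis point source:
# eps_geo = Omega / (4*pi), with Omega = 2*pi*(1 - d / sqrt(d**2 + r**2))
r = 0.0125                      # detector face radius in m (25 mm diameter crystal)
for d_cm in (5, 10, 15, 20):    # source-to-detector distances used in the study
    d = d_cm / 100.0
    omega = 2.0 * np.pi * (1.0 - d / np.sqrt(d**2 + r**2))
    print(f"{d_cm:2d} cm: geometric efficiency = {omega / (4.0 * np.pi):.5f}")
```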
449 Broadband PowerLine Communications: Performance Analysis
Authors: Justinian Anatory, Nelson Theethayi, M. M. Kissaka, N. H. Mvungi
Abstract:
The power line channel has been proposed as an alternative for broadband data transmission, especially in developing countries such as Tanzania [1]. However, the channel is affected by stochastic attenuation and deep notches, which limit the channel capacity and the achievable data rate. Various studies have characterized the channel without stating its maximum performance and the limitation on data transfer rate, possibly because of the complexity of the channel models used. In this paper, the performance of medium-voltage, low-voltage and indoor power line channels is presented. In the investigations, orthogonal frequency division multiplexing (OFDM) with phase shift keying (PSK) as the carrier modulation scheme is considered for indoor, medium-voltage and low-voltage channels with a typical ten-branch configuration, and Golay coding is additionally applied to the medium-voltage channel. Deep notches are observed at various frequencies in the channel frequency responses, which can reduce the achievable data rate. Nevertheless, it is observed that a data rate of up to 240 Mbps is realized at a signal-to-noise ratio of about 50 dB for the indoor and low-voltage channels, whereas a typical medium-voltage link with ten branches is affected by strong multipath and requires coding for feasible broadband data transfer.
Keywords: Powerline Communications, branched network, channel model, modulation, channel performance, OFDM.
448 Seamless Multicast Handover in FMIPv6-Based Networks
Authors: Moneeb Gohar, Seok Joo Koh, Tae-Won Um, Hyun-Woo Lee
Abstract:
This paper proposes a fast tree join scheme to provide seamless multicast handover in mobile networks based on Fast Mobile IPv6 (FMIPv6). In the existing FMIPv6-based multicast handover schemes, either bi-directional tunnelling or remote subscription is employed, with packet forwarding from the previous access router (AR) to the new AR. In general, the remote subscription approach is preferred to bi-directional tunnelling, since it can exploit an optimized multicast path from a multicast source to many mobile receivers. However, in the remote subscription scheme, if the tree joining operation takes a long time, the amount of data packets to be forwarded and buffered for multicast handover increases, and the corresponding buffer may overflow, resulting in severe packet losses. To reduce the costs associated with packet forwarding and buffering, this paper proposes a fast join to the multicast tree, in which the new AR joins the multicast tree as quickly as possible so that new multicast data packets arrive directly at the new AR, thereby reducing the packet forwarding and buffering costs. Numerical analysis shows that the proposed scheme gives better performance than the existing FMIPv6-based multicast handover schemes in terms of multicast packet delivery costs.
Keywords: Mobile multicast, FMIPv6, seamless handover, fast tree join.
447 Image Classification and Accuracy Assessment Using the Confusion Matrix, Contingency Matrix, and Kappa Coefficient
Authors: F. F. Howard, C. B. Boye, I. Yakubu, J. S. Y. Kuma
Abstract:
Remote sensing can be used to produce land use and land cover maps through a procedure known as image classification. Numerous elements ought to be taken into consideration, including the availability of highly satisfactory Landsat imagery, secondary data and a precise classification process. The goal of this study was to classify and map the land use and land cover of the study area using remote sensing and Geospatial Information System (GIS) analysis. The classification was done using a Landsat 8 satellite image acquired in December 2020 covering the study area and downloaded from the USGS. The Landsat image, with 30 m resolution, was geo-referenced to the WGS_84 datum and the Universal Transverse Mercator (UTM) Zone 30N coordinate projection system. A radiometric correction was applied to reduce noise in the image. This study consists of two parts: the Land Use/Land Cover (LULC) classification and the accuracy assessment using the confusion matrix, the contingency matrix and the Kappa coefficient. The LULC classes were vegetation (agriculture) (67.87%), water bodies (0.01%), mining areas (5.24%), forest (26.02%), and settlement (0.88%). An overall accuracy of 97.87% and a Kappa coefficient (K) of 0.973 were obtained for the confusion matrix, while an overall accuracy of 95.7% and a Kappa coefficient of 0.947 were obtained for the contingency matrix. The Kappa coefficients were rated as substantial; hence, the classified image is fit for further research.
Keywords: Confusion matrix, contingency matrix, Kappa coefficient, land use/land cover, accuracy assessment.
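As an illustration of the accuracy measures reported above, a minimal sketch that computes overall accuracy and the Kappa coefficient from an error (confusion) matrix; the 3x3 class counts used here are made-up numbers, not the study's data:
```python
import numpy as np

def accuracy_and_kappa(cm):
    """cm: square confusion matrix, rows = reference classes, cols = mapped classes."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    observed = np.trace(cm) / n                                 # overall accuracy
    expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2   # chance agreement
    kappa = (observed - expected) / (1.0 - expected)
    return observed, kappa

# Hypothetical error matrix for three classes (e.g. vegetation, forest, settlement)
cm = [[50, 2, 1],
      [3, 40, 2],
      [1, 1, 30]]
oa, k = accuracy_and_kappa(cm)
print(f"overall accuracy = {oa:.3f}, kappa = {k:.3f}")
```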
446 LOD Exploitation and Fast Silhouette Detection for Shadow Volumes
Authors: Mustafa S. Fawad, Wang Wencheng, Wu Enhua
Abstract:
Shadows add a great amount of realism to a scene, and many algorithms exist to generate them. Recently, shadow volumes (SVs) have earned a valuable position in the gaming industry. In this light, we concentrate on simple but valuable initial steps for further optimization in SV generation, namely model simplification and silhouette edge detection and tracking. Generating the boundary silhouettes of an object is time-consuming, and if the object is complex, edge generation becomes much harder and slower. The challenge gets stiffer when real-time shadow generation and rendering are demanded. We investigate a real-time silhouette edge detection method, which takes advantage of spatial and temporal coherence, and exploit the level-of-detail (LOD) technique to reduce the silhouette edges of the model, so that a simplified version of the model is used for shadow generation, speeding up the running time. These steps greatly reduce the execution time of shadow volume generation in real time and are easily applicable to any of the recently proposed SV techniques. Our main focus is to exploit the LOD and silhouette edge detection techniques, adapting them to further enhance shadow volume generation for real-time rendering.
Keywords: LOD, perception, shadow volumes, silhouette edge, spatial and temporal coherence.
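A minimal sketch of the silhouette-edge test that shadow-volume methods rely on: an edge is a silhouette edge when one of its two adjacent triangles faces the light and the other does not. The tetrahedron mesh and light position below are illustrative assumptions, not data from the paper:
```python
import numpy as np

def silhouette_edges(vertices, triangles, light_pos):
    """Return mesh edges shared by one light-facing and one back-facing triangle."""
    vertices = np.asarray(vertices, dtype=float)
    light_pos = np.asarray(light_pos, dtype=float)
    facing = {}
    for t, (a, b, c) in enumerate(triangles):
        normal = np.cross(vertices[b] - vertices[a], vertices[c] - vertices[a])
        centre = (vertices[a] + vertices[b] + vertices[c]) / 3.0
        facing[t] = np.dot(normal, light_pos - centre) > 0.0
    # Map each undirected edge to the triangles that share it
    edge_tris = {}
    for t, (a, b, c) in enumerate(triangles):
        for e in ((a, b), (b, c), (c, a)):
            edge_tris.setdefault(tuple(sorted(e)), []).append(t)
    return [e for e, ts in edge_tris.items()
            if len(ts) == 2 and facing[ts[0]] != facing[ts[1]]]

# Tiny example: a tetrahedron lit from one side (hypothetical data)
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
print(silhouette_edges(verts, tris, light_pos=[5.0, 1.0, 1.0]))
```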
445 Investigation of Different Stimulation Patterns to Reduce Muscle Fatigue during Functional Electrical Stimulation
Abstract:
Functional electrical stimulation (FES) is a commonly used technique in rehabilitation that is often associated with rapid muscle fatigue, which becomes the limiting factor in its applications. The objective of this study is to investigate the effects on the onset of fatigue of conventional synchronous stimulation, as well as of asynchronous stimulation that mimics voluntary muscle activation by targeting different motor units, which are activated sequentially or randomly via multiple pairs of stimulation electrodes. We investigate three approaches with different electrode configurations and stimulation patterns applied to the gastrocnemius muscle: Conventional Synchronous Stimulation (CSS), Asynchronous Sequential Stimulation (ASS) and Asynchronous Random Stimulation (ARS). Stimulation was applied repeatedly for 300 ms followed by 700 ms without stimulation, at an effective frequency of 40 Hz, for all protocols. Ten able-bodied volunteers (28±3 years old) participated in this study. As fatigue indicators, we focused on the analysis of the Normalized Fatigue Index (NFI), the Fatigue Time Interval (FTI) and the pre-post Twitch-Tetanus Ratio (ΔTTR). The results demonstrated that ASS and ARS give a higher NFI and a longer FTI, confirming less fatigue for asynchronous stimulation. In addition, ASS and ARS resulted in a higher ΔTTR than conventional CSS. In this study, we proposed a randomly distributed stimulation method for the application of FES and investigated its suitability for reducing muscle fatigue compared to previously applied methods. The results validate that asynchronous stimulation reduces fatigue and indicate that random stimulation may improve fatigue resistance in some conditions.
Keywords: Asynchronous stimulation, electrode configuration, functional electrical stimulation, muscle fatigue, pattern stimulation, random stimulation, sequential stimulation, synchronous stimulation.
444 Fuzzy Wavelet Packet based Feature Extraction Method for Multifunction Myoelectric Control
Authors: Rami N. Khushaba, Adel Al-Jumaily
Abstract:
The myoelectric signal (MES) is one of the biosignals utilized to help humans control equipment. Recent approaches to MES classification for controlling prosthetic devices using pattern recognition techniques revealed two problems: first, the classification performance of the system starts degrading when the number of motion classes to be classified increases; second, solving the first problem has required additional complicated methods that increase the computational cost of a multifunction myoelectric control system. In an effort to solve these problems and to achieve a feasible design for real-time implementation with high overall accuracy, this paper presents a new method for feature extraction in MES recognition systems. The method extracts features by applying the Wavelet Packet Transform (WPT) to the MES from multiple channels, and then employs the Fuzzy C-Means (FCM) algorithm to generate a measure that judges the suitability of features for classification. Finally, Principal Component Analysis (PCA) is utilized to reduce the size of the data before computing the classification accuracy with a multilayer perceptron neural network. The proposed system produces powerful classification results (99% accuracy) using only a small portion of the original feature set.
Keywords: Biomedical signal processing, data mining and information extraction, machine learning, rehabilitation.
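A rough sketch of the feature-extraction pipeline described above (wavelet packet energies per channel followed by PCA). The fuzzy suitability measure is not reproduced here, and the wavelet family, decomposition level and signal shapes are assumptions for illustration:
```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def wpt_energy_features(signals, wavelet="db4", level=3):
    """signals: (n_windows, n_channels, n_samples) -> energy of each terminal WPT node."""
    feats = []
    for window in signals:
        row = []
        for channel in window:
            wp = pywt.WaveletPacket(channel, wavelet=wavelet, maxlevel=level)
            # Energy of every leaf node at the chosen decomposition level
            row.extend(np.sum(np.square(node.data)) for node in wp.get_level(level))
        feats.append(row)
    return np.array(feats)

# Hypothetical multi-channel MES windows: 40 windows, 4 channels, 256 samples each
rng = np.random.default_rng(0)
windows = rng.standard_normal((40, 4, 256))
features = wpt_energy_features(windows)
reduced = PCA(n_components=8).fit_transform(features)   # dimensionality-reduction step
print(features.shape, "->", reduced.shape)
```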
443 Multivariable Control of Smart Timoshenko Beam Structures Using POF Technique
Authors: T.C. Manjunath, B. Bandyopadhyay
Abstract:
Active Vibration Control (AVC) is an important problem in structures. One way to tackle this problem is to make the structure smart, adaptive and self-controlling. The objective of active vibration control is to reduce the vibration of a system by automatic modification of the system's structural response. This paper features the modeling and design of a Periodic Output Feedback (POF) control technique for the active vibration control of a flexible Timoshenko cantilever beam for a multivariable case with 2 inputs and 2 outputs, retaining the first 2 dominant vibratory modes using the smart structure concept. The entire structure is modeled in state space form using piezoelectric theory, Timoshenko beam theory, the Finite Element Method (FEM) and state space techniques. Simulations are performed in MATLAB. The effect of placing the sensor/actuator at 2 finite element locations along the length of the beam is observed. The open-loop responses, closed-loop responses and the tip displacements with and without the controller are obtained, and the performance of the smart system is evaluated for active vibration control.
Keywords: Smart structure, Timoshenko theory, Euler-Bernoulli theory, periodic output feedback control, Finite Element Method, state space model, vibration control, multivariable system, Linear Matrix Inequality.
442 Compressed Sensing of Fetal Electrocardiogram Signals Based on Joint Block Multi-Orthogonal Least Squares Algorithm
Authors: Xiang Jianhong, Wang Cong, Wang Linyu
Abstract:
With the rise of medical IoT technologies, wireless body area networks (WBANs) can collect fetal electrocardiogram (FECG) signals to support telemedicine analysis. A CS-based WBAN system using compressed sensing (CS) can avoid sampling a large amount of redundant information and reduce the complexity and computing time of data processing, but existing algorithms have poor signal compression and reconstruction performance. In this paper, a Joint Block Multi-Orthogonal Least Squares (JBMOLS) algorithm is proposed. We apply the FECG signal to the Joint Block Sparse Model (JBSM), and a comparative study of sparse transforms and measurement matrices is carried out. An FECG signal compression and transmission mode based on the Rbio5.5 wavelet, a Bernoulli measurement matrix, and the JBMOLS algorithm is proposed to improve the compression and reconstruction performance of FECG signals in CS-based WBANs. Experimental results show that the compression ratio (CR) required for accurate reconstruction in this transmission mode is increased by nearly 10%, and the runtime is reduced by about 30%.
Keywords: telemedicine, fetal electrocardiogram, compressed sensing, joint sparse reconstruction, block sparse signal
441 A New Fast Skin Color Detection Technique
Authors: Tarek M. Mahmoud
Abstract:
Skin color can provide a useful and robust cue for human-related image analysis, such as face detection, pornographic image filtering, hand detection and tracking, people retrieval in databases and on the Internet, etc. The major problem with such skin color detection algorithms is that they are time-consuming and hence cannot be applied in a real-time system. To overcome this problem, we introduce a new fast technique for skin detection that can be applied in a real-time system. In this technique, instead of testing each image pixel to label it as skin or non-skin (as in classic techniques), we skip a set of pixels. The reason for the skipping process is the high probability that the neighbors of skin color pixels are also skin pixels, especially in adult images, and vice versa. The proposed method can rapidly detect skin and non-skin color pixels, which in turn dramatically reduces the CPU time required for the protection process. Since many fast detection techniques are based on image resizing, we apply our proposed pixel-skipping technique together with image resizing to obtain better results. The performance evaluation of the proposed skipping and hybrid techniques in terms of measured CPU time is presented. Experimental results demonstrate that the proposed methods achieve better results than the relevant classic method.
Keywords: Adult images filtering, image resizing, skin color detection, YCbCr color space.
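A compact sketch of the pixel-skipping idea in the YCbCr color space: classify only every Nth pixel and propagate the label to the skipped neighbours. The Cb/Cr thresholds and the skip step are commonly used illustrative values, not the paper's tuned parameters, and the image path is hypothetical:
```python
import cv2
import numpy as np

def fast_skin_mask(bgr_image, step=4, cb_range=(77, 127), cr_range=(133, 173)):
    """Label skin pixels on a sparse grid, then fill the skipped pixels with the
    label of the nearest sampled pixel (nearest-neighbour upscaling)."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)      # OpenCV stores Y, Cr, Cb
    sampled = ycrcb[::step, ::step]                           # skip pixels in both axes
    cr, cb = sampled[:, :, 1], sampled[:, :, 2]
    small_mask = ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
                  (cr >= cr_range[0]) & (cr <= cr_range[1])).astype(np.uint8) * 255
    h, w = bgr_image.shape[:2]
    return cv2.resize(small_mask, (w, h), interpolation=cv2.INTER_NEAREST)

# Usage on a hypothetical file path
img = cv2.imread("example.jpg")
if img is not None:
    mask = fast_skin_mask(img)
    print("skin pixels:", int(np.count_nonzero(mask)))
```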
440 A Study on Improving the Flow Capacity of the Valves
Authors: A. G. Pradeep, Gorantla Giridhar Kumar, Vijay Turaga, Vinod Srinivasa
Abstract:
The major problem in flow control valves is a low flow capacity (Cv), which reduces the overall efficiency of the flow circuit. Designers continuously work to improve the Cv of a valve, but they need to validate their design ideas for improving it. The traditional prototype-and-test method takes a lot of time; this is where CFD comes into the picture, offering quick and accurate validation along with flow visualization that is not possible with traditional testing. We have developed a method to predict the Cv value using CFD analysis by iterating on various boundary conditions and solver settings and by carrying out grid convergence studies to establish a correlation between the CFD model and test data. The present study investigates three design ideas put forward by the designers for improving the flow capacity of the valves: reducing the cage thickness, changing the port position, and using a parabolic plug to guide the flow. Using CFD, we analyzed all design changes with the established methodology and evaluated their effect on the valve Cv. We further optimized the wetted surface of the valve by suggesting a design modification to its lower part to make the flow more streamlined. We found that changing the cage thickness and the port position has little impact on the valve Cv, whereas the combination of the optimized wetted surface and the parabolic plug improved the Cv of the valve significantly.
Keywords: Flow control valves, flow capacity, CFD simulations, design validation.
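For reference, a small sketch of how a flow coefficient can be estimated from a CFD-predicted operating point using the standard US-customary definition Cv = Q * sqrt(SG / dP); the flow rate and pressure drop below are made-up values, not results from the study:
```python
import math

def flow_coefficient(q_gpm, dp_psi, specific_gravity=1.0):
    """Cv = Q * sqrt(SG / dP) for incompressible flow, with Q in US gpm and dP in psi."""
    return q_gpm * math.sqrt(specific_gravity / dp_psi)

# Hypothetical CFD operating point: 120 gpm of water across a 4 psi pressure drop
print(f"Cv = {flow_coefficient(120.0, 4.0):.1f}")
```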
439 Locating Cultural Centers in Shiraz (Iran) Applying Geographic Information System (GIS)
Authors: R. Mokhtari Malekabadi, S. Ghaed Rahmati, S. Aram
Abstract:
Optimal cultural site selection is one way to promote citizenship culture while also ensuring the health and leisure of city residents. This study examines the social and cultural needs of the community and optimal cultural site allocation and, after identifying the problems and shortcomings, provides a suitable model for finding the locations for these centers that have the greatest impact on the promotion of citizenship culture. Non-scientific methods cause irreversible impacts on the urban environment and citizens, whereas modern, efficient methods can reduce these impacts; one such method is the use of geographical information systems (GIS). In this study, the Analytical Hierarchy Process (AHP) was used to locate optimal cultural sites. AHP rests on three principles: decomposition, comparative analysis, and combining preferences. The objectives of this research include providing suitable places for Shiraz residents to spend time and take part in cultural activities, and proposing the construction of cultural sites in different areas of the city. The results show the correct positioning of cultural sites based on the social needs of citizens. Thus, by considering population parameters and access radii, GIS combined with the AHP model for locating cultural centers can meet the social needs of citizens.
Keywords: Analytical Hierarchy Process (AHP), geographical information systems (GIS), cultural site, locating, Shiraz.
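A brief sketch of how AHP criterion weights can be derived from a pairwise comparison matrix via its principal eigenvector, with a consistency-ratio check; the three criteria and the judgment values are hypothetical, not taken from the study:
```python
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}   # Saaty's random consistency index

def ahp_weights(A):
    """Priority weights and consistency ratio from a reciprocal pairwise matrix A."""
    A = np.asarray(A, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                     # normalised priority weights
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)             # consistency index
    cr = ci / RI[n] if RI[n] else 0.0                # consistency ratio (< 0.1 is acceptable)
    return w, cr

# Hypothetical judgments: population density vs. access radius vs. land availability
A = [[1.0, 3.0, 5.0],
     [1/3., 1.0, 2.0],
     [1/5., 1/2., 1.0]]
weights, cr = ahp_weights(A)
print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
```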
438 Effect of Ambient Oxygen Content and Lifting Frequency on the Participant’s Lifting Capabilities, Muscle Activities, and Perceived Exertion
Authors: Atef M. Ghaleb, Mohamed Z. Ramadan, Khalid Saad Aljaloud
Abstract:
The aim of this study is to assess the lifting capabilities of persons experiencing hypoxia and to examine the physiological responses induced by lifting as the hypoxia level and lifting frequency change. For this purpose, the study was performed in two consecutive stages: (1) training and acclimatization and (2) actual data collection. A total of 10 male students from King Saud University, Kingdom of Saudi Arabia, were recruited for the study. A two-way repeated measures design was used, with two independent variables, ambient oxygen (15%, 18% and 21%) and lifting frequency (1 lift/min and 4 lifts/min), and four dependent variables: the maximum acceptable weight of lift (MAWL), electromyography (EMG) of four muscle groups (anterior deltoid, trapezius, biceps brachii, and erector spinae), rating of perceived exertion (RPE), and rating of oxygen feeling (ROF). The results show that lifting frequency significantly affected the MAWL and muscle activities, whereas the oxygen content had a significant effect on the RPE and ROF. The study revealed that acclimatization and training sessions significantly reduce the effect of hypoxia on human physiological parameters during manual materials handling tasks.
Keywords: Lifting capabilities, muscle activities (sEMG), oxygen content, perceived exertion.
437 A Multiple-Objective Environmental Rationalization and Optimization for Material Substitution in the Production of Stone-Washed Jeans-Garments
Authors: Nabil A. Ibrahim, Nabil M. Abdel Moneim, Mohamed A. Ramadan, Marwa M. Hosni
Abstract:
As the textile industry is the second largest industry in Egypt, and as small and medium-sized enterprises (SMEs) make up a great portion of it, it is essential to apply the concept of Cleaner Production to reduce pollution. To achieve this goal, a case study concerned with eco-friendly stone-washing of jeans garments was investigated. A raw-material-substitution option was adopted whereby the toxic potassium permanganate and sodium sulfide were replaced by the environmentally compatible hydrogen peroxide and glucose, respectively, and the concentrations of both replacement chemicals, together with the operating time, were optimized. In addition, a process-rationalization option involving four additional processes was investigated. Using criteria such as product quality, effluent analysis, mass and heat balance, and cost analysis with the aid of a statistical model, the process optimization revealed that the process optima were 50%, 0.15% and 50 min for H2O2 concentration, glucose concentration and time, respectively. With these values the improved process ought to reduce the annual cost by about EGP 105 relative to the currently used conventional method.
Keywords: Cleaner Production, eco-friendly jeans garments, stone washing, textile industry, textile wet processing.
436 Potential of Safflower (Carthamus tinctorius L.) for Phytoremediation of Soils Contaminated with Heavy Metals
Authors: Violina R. Angelova, Vanja I. Akova, Stefan V. Krustev, Krasimir I. Ivanov
Abstract:
A field study was conducted to evaluate the efficacy of the safflower plant for phytoremediation of contaminated soils. The experiment was performed on agricultural fields contaminated by the Non-Ferrous-Metal Works near Plovdiv, Bulgaria. Field experiments with a randomized complete block design and five treatments (control, compost amendment added at 20 and 40 t/daa, and vermicompost amendment added at 20 and 40 t/daa) were carried out. The quality of the safflower seeds and oil (heavy metal content and fatty acid composition) was determined. The tested organic amendments significantly influenced the chemical composition of safflower seeds and oil. The compost and vermicompost treatments significantly reduced the heavy metal concentrations in safflower seeds and oil, but the effect differed between them. Addition of vermicompost and compost led to an increase in the content of palmitic acid and linoleic acid and a decrease in stearic and oleic acids compared with the control. A significant increase in the quantity of saturated acids was observed in the variants with 20 t/daa of compost and 20 t/daa of vermicompost (9.1% and 8.9% relative to the control). Safflower is a plant that is tolerant to heavy metals and can be successfully used in the phytoremediation of heavy-metal-contaminated soils. Processing the seeds to oil and using the obtained oil for nutritional purposes will greatly reduce the cost of phytoremediation.
Keywords: Heavy metals, organic amendments, phytoremediation, safflower.
435 Testing Object-Oriented Framework Applications Using FIST2 Tool: A Case Study
Authors: Jehad Al Dallal
Abstract:
An application framework provides a reusable design and implementation for a family of software systems. Frameworks are introduced to reduce the cost of a product line (i.e., a family of products that share common features). Software testing is a time-consuming and costly ongoing activity during the application software development process. Generating reusable test cases for framework applications during the framework development stage, and then providing and using these test cases to test part of a framework application whenever the framework is used, reduces the application development time and cost considerably. This paper introduces the Framework Interface State Transition Tester (FIST2), a tool for automated unit testing of Java framework applications. During the framework development stage, given the formal descriptions of the framework hooks, the specifications of the methods of the framework's extensible classes, and the illegal-behavior descriptions of the Framework Interface Classes (FICs), FIST2 generates unit-level test cases for the classes. At the framework application development stage, given the customized method specifications of the implemented FICs, FIST2 automates the use, execution, and evaluation of the already generated test cases to test the implemented FICs. The paper illustrates the use of the FIST2 tool for testing several applications that use the SalesPoint framework.
Keywords: Automated testing, class testing, FICs, FIST2, object-oriented framework, object-oriented testing.
434 Taguchi-Based Optimization of Surface Roughness and Dimensional Accuracy in Wire EDM Process with S7 Heat Treated Steel
Authors: Joseph C. Chen, Joshua Cox
Abstract:
This research focuses on the use of the Taguchi method to reduce the surface roughness and improve the dimensional accuracy of parts machined by Wire Electrical Discharge Machining (EDM) from S7 heat treated steel. Due to its high impact toughness, the material is a candidate for a wide variety of tooling applications which require high dimensional precision and a desired surface roughness. This paper demonstrates that the Taguchi Parameter Design methodology is able to optimize both dimensional accuracy and surface roughness by investigating seven wire-EDM controllable parameters: pulse on time (ON), pulse off time (OFF), servo voltage (SV), voltage (V), servo feed (SF), wire tension (WT), and wire speed (WS). The temperature of the water in the wire EDM process is investigated as the noise factor. Experimental design and analysis based on an L18 Taguchi orthogonal array are conducted. The Taguchi-based system enables the wire EDM process to produce (1) high-precision parts with an average dimension of 0.6601 inches against a desired dimension of 0.6600 inches, and (2) a surface roughness of 1.7322 microns, significantly improved from 2.8160 microns.
Keywords: Taguchi parameter design, surface roughness, dimensional accuracy, Wire EDM.
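As a brief illustration of Taguchi-style analysis like that used above, a sketch computing signal-to-noise ratios for a smaller-the-better response (surface roughness) and a nominal-the-best response (the 0.6600-inch dimension); the replicate measurements are invented for illustration, not taken from the experiments:
```python
import numpy as np

def sn_smaller_the_better(y):
    """S/N = -10*log10(mean(y^2)); larger is better for responses to minimise."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

def sn_nominal_the_best(y):
    """S/N = 10*log10(mean^2 / variance); rewards hitting the target consistently."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean()**2 / y.var(ddof=1))

# Hypothetical replicate measurements from one row of an L18 array
roughness = [1.74, 1.71, 1.76]            # microns
dimension = [0.6601, 0.6600, 0.6602]      # inches, target 0.6600
print("roughness S/N:", round(sn_smaller_the_better(roughness), 2), "dB")
print("dimension S/N:", round(sn_nominal_the_best(dimension), 2), "dB")
```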
433 Investigating the Potential for Introduction of Warm Mix Asphalt in Kuwait Using the Volcanic Ash
Authors: H. Al-Baghli, F. Al-Asfour
Abstract:
The asphalt technology currently applied to Kuwait's road pavement infrastructure is hot mix asphalt (HMA), using both pen-grade and polymer-modified bitumens (PMBs), which is produced and compacted at high temperatures ranging from 150 to 180 °C. The asphalt standards and specifications of Kuwait's Ministry of Public Works (MPW) currently contain no specifications for warm or cold mix asphalts. The conventional HMA process is energy intensive and directly responsible for the emission of greenhouse gases and other environmental hazards into the atmosphere, leading to significant environmental impacts and raising health risks for laborers on site. Warm mix asphalt (WMA) technology, a sustainable alternative preferred in multiple countries, has many environmental advantages because it requires production temperatures 20 to 40 °C lower than HMA. The temperature reduction achieved by WMA originates from multiple technologies, including foaming and chemical or organic additives, that aim to reduce the viscosity of the bitumen and improve mix workability. This paper presents a literature review of WMA technologies and techniques, followed by an experimental study comparing WMA samples produced with a water-containing additive (foaming process) at different compaction temperatures against the volumetric properties of an HMA control mix designed in accordance with the new MPW specifications and guidelines.
Keywords: Warm-mix asphalt, water-bearing additives, foaming-based process, chemical additives, organic additives.
432 Intelligent Transport System: Classification of Traffic Signs Using Deep Neural Networks in Real Time
Authors: Anukriti Kumar, Tanmay Singh, Dinesh Kumar Vishwakarma
Abstract:
Traffic control has been one of the most common and irritating problems since automobiles first hit the roads. Problems like traffic congestion impose a significant time burden around the world, and one significant solution can be the proper implementation of an Intelligent Transport System (ITS). ITS involves the integration of various tools like smart sensors, artificial intelligence, positioning technologies and mobile data services to manage traffic flow, reduce congestion and enhance drivers' ability to avoid accidents during adverse weather. Road and traffic sign recognition is an emerging field of research in ITS, and the traffic sign classification problem needs to be solved as it is a major step towards building semi-autonomous and autonomous driving systems. This work implements an approach to traffic sign classification by developing a Convolutional Neural Network (CNN) classifier on the GTSRB (German Traffic Sign Recognition Benchmark) dataset. Rather than using hand-crafted features, our model addresses the concern of an exploding number of parameters and uses data augmentation methods. Our model achieved an accuracy of around 97.6%, which is comparable to various state-of-the-art architectures.
Keywords: Multiclass classification, convolutional neural network, OpenCV, data augmentation.
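A compact sketch of the kind of CNN classifier described above, written with the Keras API; the layer sizes, the 32x32 input resolution and the training settings are illustrative choices, not the architecture reported in the paper (GTSRB has 43 sign classes):
```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 43          # GTSRB traffic-sign classes

def build_model(input_shape=(32, 32, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(256, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
model.summary()
# Training would then be: model.fit(x_train, y_train, validation_split=0.1, epochs=20)
```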
431 Design of Compliant Mechanism Based Microgripper with Three Finger Using Topology Optimization
Authors: R. Bharanidaran, B. T. Ramesh
Abstract:
High precision in motion is required to manipulate micro objects in precision industries for micro assembly, cell manipulation, etc. Precise manipulation is achieved through appropriate mechanism design of micro devices such as microgrippers, and a compliant mechanism is the better option for achieving highly precise and controlled motion. This article highlights the method of designing a compliant three-fingered microgripper suitable for holding asymmetric objects. A topology optimization technique, a systematic design method, is implemented in this work to arrive at a topologically optimized design of the mechanism needed to perform the required micro motion of the gripper. The optimization technique has the drawback of generating senseless regions, such as node-to-node connectivity and a staircase effect at the boundaries, so the design must be post-processed to make it manufacturable. To reduce the effect of the post-processing stage and to preserve the edges of the image, a cubic spline interpolation technique is introduced in the MATLAB program. The structural performance of the topologically developed mechanism design is tested using finite element method (FEM) software, and the microgripper structure is further examined to find its fatigue life and vibration characteristics.
Keywords: Compliant mechanism, Cubic spline interpolation, FEM, Topology optimization.
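A small sketch of the boundary-smoothing step mentioned above: a jagged (staircase) boundary extracted from a topology-optimization result is re-parameterised and passed through a cubic spline so the edge becomes manufacturable. The boundary points here are invented for illustration, and SciPy is used in place of the MATLAB routines referred to in the abstract:
```python
import numpy as np
from scipy.interpolate import CubicSpline

def smooth_boundary(points, n_out=200):
    """Fit a periodic cubic spline through ordered boundary points (closed contour)."""
    pts = np.asarray(points, dtype=float)
    pts = np.vstack([pts, pts[:1]])               # close the contour
    # Chord-length parameterisation of the polyline
    t = np.concatenate([[0.0], np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))])
    spline = CubicSpline(t, pts, bc_type="periodic")
    return spline(np.linspace(0.0, t[-1], n_out))

# Staircase-like boundary of a hypothetical optimized region
stairs = [(0, 0), (2, 0), (2, 1), (3, 1), (3, 2), (4, 2),
          (4, 4), (2, 4), (2, 3), (0, 3)]
smooth = smooth_boundary(stairs)
print(smooth.shape)        # (200, 2) smoothed x-y coordinates
```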
430 An Investigation to Study the Moisture Dependency of Ground Enhancement Compound
Authors: Arunima Shukla, Vikas Almadi, Devesh Jaiswal, Sunil Saini, Bhusan S. Patil
Abstract:
A lightning protection system consists of three main parts: the air termination system, the down conductor, and the earth termination system. The earth termination system is the most important part, as the earth is the sink and source of charges; even when charges are captured and delivered to the ground, problems arise if they are not provided with an easy path into the earth. Soil resistivities differ significantly, ranging from 10 Ωm for wet organic soil to 10,000 Ωm for bedrock. Different conventional methods have been discussed and used to lower the earthing resistance, such as the deep-ground-well method and altering the length of the rod, but these methods are not considered economical; it was therefore general practice to use charcoal along with salt to reduce the soil resistivity. Bentonite is a worldwide-accepted material, which first led our interest towards its study. It was concluded that bentonite is a clay that is non-corrosive and environmentally friendly. However, bentonite is suitable only when moisture is present in the soil; in its absence, cracks appear on the surface, providing an open passage to the air and increasing the resistivity. Furthermore, bentonite without moisture does not have sufficient bonding ability, moisture retention, conductivity, and non-leachability. Therefore, bentonite was used along with other backfill materials to overcome its dependency on moisture. Different experiments were performed to obtain the best ratio of bentonite and carbon backfill, and it was concluded that the properties depend strongly on the quantities of bentonite and carbon-based backfill material.
Keywords: Backfill material, bentonite, conducting soil, grounding material, low resistivity.
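To connect the resistivity figures above to earth-electrode performance, a short sketch using the common approximation for the resistance of a single vertical ground rod, R = rho/(2*pi*L) * (ln(4L/a) - 1); the rod dimensions are typical illustrative values, not parameters from the study:
```python
import math

def rod_resistance(rho, length_m, radius_m):
    """Approximate earthing resistance of a single vertical rod in uniform soil."""
    return rho / (2.0 * math.pi * length_m) * (math.log(4.0 * length_m / radius_m) - 1.0)

L, a = 3.0, 0.008                            # 3 m rod, 16 mm diameter (assumed)
for rho in (10.0, 100.0, 1000.0, 10000.0):   # Ohm-m, spanning wet organic soil to bedrock
    print(f"rho = {rho:7.0f} Ohm-m  ->  R = {rod_resistance(rho, L, a):10.1f} Ohm")
```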
429 Analysis and Evaluation of the Public Responses to Traffic Congestion Pricing Schemes in Urban Streets
Authors: Saeed Sayyad Hagh Shomar
Abstract:
Traffic congestion pricing on urban streets is one of the most suitable options for solving traffic problems and reducing environmental pollution in the country's cities. Despite its favourable outcomes, it faces problems related to the requirement that the general public pay. Given that public response is so influential in the success of this strategy, studying people's responses and behaviour in order to obtain feedback and improve the strategies is of great importance. In this study, a questionnaire was used to examine public reactions to the traffic congestion pricing schemes at the centre of the Tehran metropolis, and the factors involved in people's decisions to accept or reject the congestion pricing schemes were assessed based on the questionnaire data as well as international experience. By analyzing and comparing the schemes, guidelines to reduce public objections to them are then discussed. The results of reviewing and evaluating the public reactions show that all the pros and cons must be considered to guarantee the success of these projects. Consequently, with targeted public education and consciousness-raising advertisements prior to initiating a scheme, and by ensuring the implementation mechanism after the start of the project, the initial opposition is reduced and, with the gradual emergence of the real and tangible benefits of its implementation, users' satisfaction will increase.
Keywords: Demand management, international experiences, traffic congestion pricing, public acceptance, public objection.
428 Real-World PM, PN and NOx Emission Differences among DOC+CDPF Retrofit Diesel-, Diesel- and Natural Gas-Fueled Buses
Authors: Zhiwen Yang, Jingyuan Li, Zhenkai Xie, Jian Ling, Jiguang Wang, Mengliang Li
Abstract:
To reflect the influence of after-treatment system retrofits and the replacement of diesel vehicles with natural gas-fueled vehicles on exhaust emissions from urban buses, a portable emission measurement system (PEMS) was employed to conduct real driving emission measurements. This study investigated the differences in particle number (PN), particle mass (PM), and nitrogen oxides (NOx) emissions from a China IV diesel bus retrofitted with a diesel oxidation catalyst and catalyzed diesel particulate filter (DOC+CDPF), a China IV diesel bus, and a China V natural gas bus. The results show that both tested diesel buses have marked advantages in NOx emission control compared to the lean-burn natural gas bus, which is equipped without any NOx after-treatment system. As for PN and PM, only the DOC+CDPF-retrofitted diesel bus exhibits substantial emission-control benefits relative to the natural gas bus and, especially, the normal diesel bus. Meanwhile, the differences in PM and PN emissions between the retrofitted and normal diesel buses generally increase with increasing vehicle specific power (VSP). Furthermore, the differences in PM emissions, especially in the higher VSP ranges, are more significant than those in PN. In addition, the particle size at the maximum PN peak of the retrofitted diesel bus (32 nm) was significantly lower than that of the normal diesel bus (100 nm). These phenomena indicate that the CDPF retrofit can effectively reduce diesel bus exhaust particle emissions, especially those with large particle sizes.
Keywords: CDPF, diesel, natural gas, real-world emissions.
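VSP, used above to bin the operating conditions, is normally computed from speed, acceleration and road grade with vehicle-class-specific coefficients. The sketch below uses the widely cited light-duty form; heavy-duty buses ordinarily require different coefficients, so treat the numbers as illustrative assumptions rather than this study's definition:
```python
def vsp_kw_per_tonne(v, a, grade=0.0):
    """Commonly cited light-duty VSP form (speed v in m/s, acceleration a in m/s^2);
    heavy-duty buses normally use different, vehicle-class-specific coefficients."""
    return v * (1.1 * a + 9.81 * grade + 0.132) + 0.000302 * v**3

# Example operating point: 40 km/h, mild acceleration, flat road
v = 40.0 / 3.6
print(f"VSP = {vsp_kw_per_tonne(v, a=0.3):.2f} kW/t")
```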
427 Motion Prediction and Motion Vector Cost Reduction during Fast Block Motion Estimation in MCTF
Authors: Karunakar A K, Manohara Pai M M
Abstract:
In the 3D-wavelet video coding framework, temporal filtering is performed along the trajectory of motion using Motion Compensated Temporal Filtering (MCTF); hence a computationally efficient motion estimation technique is needed for MCTF. In this paper, a predictive technique is proposed to reduce the computational complexity of the MCTF framework by exploiting the high correlation among the frames in a Group Of Pictures (GOP). The proposed technique applies the coarse and fine searches of any fast block-based motion estimation only to the first pair of frames in a GOP. The generated motion vectors are supplied to the subsequent frames, and even to subsequent temporal levels, and only a fine search is carried out around those predicted motion vectors. Hence the coarse search is skipped for all motion estimation in a GOP except for the first pair of frames. The technique has been tested with different fast block-based motion estimation algorithms on standard test sequences using MC-EZBC, a state-of-the-art scalable video coder. The simulation results reveal a substantial reduction (20.75% to 38.24%) in the number of search points during motion estimation, without compromising the quality of the reconstructed video compared to non-predictive techniques. Since the motion vectors of all frame pairs in a GOP except the first will lie within ±1 of the motion vectors of the previous pair of frames, the number of bits required for motion vectors is also reduced by 50%.
Keywords: Motion Compensated Temporal Filtering, predictive motion estimation, lifted wavelet transform, motion vector.
426 Implementation of a Multimodal Biometrics Recognition System with Combined Palm Print and Iris Features
Authors: Rabab M. Ramadan, Elaraby A. Elgallad
Abstract:
With extensive application, the performance of unimodal biometric systems has to face a diversity of problems such as signal and background noise, distortion, and environmental differences; multimodal biometric systems are therefore proposed to solve these problems. This paper introduces a bimodal biometric recognition system based on features extracted from the human palm print and iris. Palm print biometrics is a fairly new, evolving technology used to identify people by their palm features, while the iris is a strong competitor to the face and fingerprints for inclusion in multimodal recognition systems. In this research, we introduce an algorithm for combining the palm- and iris-extracted features using a texture-based descriptor, the Scale Invariant Feature Transform (SIFT). Since the feature sets are non-homogeneous, as features of different biometric modalities are used, these features are concatenated to form a single feature vector. Particle swarm optimization (PSO) is used as a feature selection technique to reduce the dimensionality of the feature vector. The proposed algorithm is applied to the Indian Institute of Technology Delhi (IITD) database, and its performance is compared with various iris recognition algorithms found in the literature.
Keywords: Iris recognition, particle swarm optimization, feature extraction, feature selection, palm print, scale invariant feature transform.
425 Design Criteria for Achieving Acceptable Indoor Radon Concentration
Authors: T. Valdbjørn Rasmussen
Abstract:
Design criteria for achieving an acceptable indoor radon concentration are presented in this paper. Three design criteria are suggested, which have to be considered at the early stage of the building design phase to meet the latest recommendations from the World Health Organization in most countries: first, establishing a radon barrier facing the ground; second, lowering the air pressure in the lower zone of the ground-facing slab; and third, diluting the indoor air with outdoor air. The first two criteria can prevent radon from infiltrating from the ground, and the third can dilute the indoor air. By combining these three criteria, the indoor radon concentration can be lowered to an acceptable level. In addition, a cheap and reliable method for measuring the radon concentration in indoor air is described. The provision on radon in the Danish Building Regulations complies with the latest recommendations from the World Health Organization. Radon can cause lung cancer, and it is not known whether there is a lower limit below which it is not harmful to human beings; it is therefore important to reduce the radon concentration in buildings as much as possible. Airtightness is an important factor in buildings: it is important to avoid air leakages in the building envelope both facing the atmosphere, e.g. in compliance with energy requirements, and facing the ground, to meet the requirements to ensure and control the indoor environment. Infiltration of air from the ground underneath a building is the main source of radon in indoor air.
Keywords: Radon, natural radiation, barrier, pressure lowering, ventilation.
424 Production Planning for Animal Food Industry under Demand Uncertainty
Authors: Pirom Thangchitpianpol, Suttipong Jumroonrut
Abstract:
This research investigates the distribution of demand for animal food and the production quantity that minimizes cost. The data consist of customer purchase orders for laying-hen food, the price of the food, the inventory holding cost per unit, and the costs incurred when the food is out of stock, such as fines, overtime, and urgent purchases of material. They were collected from January 1990 to December 2013 from a factory in Nakhon Ratchasima province. The collected data are analyzed to explore the distribution of the monthly demand for laying-hen food and to estimate the inventory cost rate per unit. The results are used in a stochastic linear programming model for aggregate planning, from which the optimum production, or minimum cost, can be obtained. Programming algorithms in MATLAB and the Linprog solver are used to obtain the solution, with the fitted demand distribution and random numbers supplying the scenarios in the model. The study shows that the monthly demand for laying-hen food follows a normal distribution, and monthly average production amounts (unit: 30 kg) were obtained for January through December. The minimum average total cost for the 12 months is 62,329,181.77 Baht, so the production plan can reduce the cost by 14.64% compared with the actual cost.
Keywords: Animal food, Stochastic linear programming, Production planning, Demand Uncertainty.
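A minimal sketch of an aggregate-planning LP of the kind described above, solved with SciPy's linprog instead of the MATLAB Linprog tool used in the study; the demand figures, costs, capacity and the short six-month horizon are invented for illustration, and the stochastic scenario structure of the paper's model is not reproduced:
```python
import numpy as np
from scipy.optimize import linprog

# Expected monthly demand (units of 30 kg), production cost and holding cost per unit
demand = np.array([420, 480, 510, 450, 500, 530])   # hypothetical 6-month horizon
c_prod, c_hold, capacity, I0 = 100.0, 8.0, 550.0, 50.0

T = len(demand)
# Decision vector x = [p_1..p_T, I_1..I_T] (production and end-of-month inventory)
c = np.concatenate([np.full(T, c_prod), np.full(T, c_hold)])

# Inventory balance: I_t - I_{t-1} - p_t = -d_t, with I_0 given
A_eq = np.zeros((T, 2 * T))
b_eq = -demand.astype(float)
for t in range(T):
    A_eq[t, t] = -1.0              # -p_t
    A_eq[t, T + t] = 1.0           # +I_t
    if t > 0:
        A_eq[t, T + t - 1] = -1.0  # -I_{t-1}
b_eq[0] += I0                      # fold the known initial inventory into month 1

bounds = [(0.0, capacity)] * T + [(0.0, None)] * T
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("optimal cost:", round(res.fun, 2))
print("production plan:", np.round(res.x[:T], 1))
```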
423 A Computational Study on Flow Separation Control of Humpback Whale Inspired Sinusoidal Hydrofoils
Authors: J. Joy, T. H. New, I. H. Ibrahim
Abstract:
A computational study of bio-inspired NACA 634-021 hydrofoils with leading-edge protuberances has been carried out to investigate their hydrodynamic flow control characteristics at a Reynolds number of 14,000 and different angles of attack. The numerical simulations were performed using ANSYS FLUENT with a Reynolds-Averaged Navier-Stokes (RANS) solver and the k-ω Shear Stress Transport (SST) turbulence model. The results indicate varying flow phenomena along the peaks and troughs over the span of the hydrofoils. Compared to the baseline hydrofoil with no leading-edge protuberances, the modified hydrofoils tend to reduce the extent of flow separation along the peak regions, whereas flow separation increases in the trough regions. Interestingly, dissimilar flow separation behaviour is produced along different peak or trough planes along the hydrofoil span, even though the troughs or peaks are geometrically similar at each interval for a particular hydrofoil. Significant interactions between adjacent flow structures produced by the leading-edge protuberances have also been observed, and these interactions are believed to be responsible for the dissimilar flow separation behaviour along geometrically similar peak or trough planes.
Keywords: Computational fluid dynamics, flow separation control, hydrofoils, leading-edge protuberances.