Search results for: electrochemical techniques
133 Combined Source and Channel Coding for Image Transmission Using Enhanced Turbo Codes in AWGN and Rayleigh Channel
Authors: N. S. Pradeep, M. Balasingh Moses, V. Aarthi
Abstract:
Any signal transmitted over a channel is corrupted by noise and interference. A host of channel coding techniques has been proposed to alleviate the effect of such noise and interference. Among these, Turbo codes are recommended because of their increased capacity at higher transmission rates and their superior performance over convolutional codes. Multimedia elements, which carry large amounts of data, are best protected by Turbo codes. A Turbo decoder employs the Maximum A-posteriori Probability (MAP) and Soft Output Viterbi Algorithm (SOVA) decoding algorithms. Conventional Turbo coded systems employ Equal Error Protection (EEP), in which all the data in an information message are protected uniformly. Some applications call for Unequal Error Protection (UEP), in which important information bits receive a higher level of protection than the remaining bits. In this work, the traditional log-MAP decoding algorithm is enhanced by using optimized scaling factors for both constituent decoders. The error-correcting performance under UEP in the Additive White Gaussian Noise (AWGN) channel and in Rayleigh fading is analyzed for image transmission with the Discrete Cosine Transform (DCT) as the source coding technique. This paper compares the performance of the log-MAP, Modified log-MAP (MlogMAP), and Enhanced log-MAP (ElogMAP) algorithms for image transmission. The MlogMAP algorithm is found to be best at lower Eb/N0 values, but at higher Eb/N0 the ElogMAP algorithm with optimized scaling factors performs better. The performance comparison of the AWGN channel with the fading channel indicates the robustness of the proposed algorithm. According to the performance of the three message classes, class 3 is more strongly protected than the other two. From the performance analysis, it is observed that the ElogMAP algorithm with UEP is best for image transmission compared to the log-MAP and MlogMAP decoding algorithms.
Keywords: AWGN, BER, DCT, Fading, MAP, UEP.
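The enhancement described above can be pictured with a short sketch. The snippet below is a minimal illustration, not the authors' decoder: it shows the max* core of a log-MAP decoder, a scaled-correction variant of the kind used in modified log-MAP schemes, and the damping of extrinsic LLRs exchanged between the two constituent decoders. All scaling-factor values here are hypothetical stand-ins for the optimized ones.

```python
import numpy as np

def max_star(a, b):
    # exact log-MAP pairwise max* operation: ln(e^a + e^b)
    return max(a, b) + np.log1p(np.exp(-abs(a - b)))

def max_star_scaled(a, b, s=0.75):
    # modified log-MAP: the correction term is weighted by a scaling
    # factor s (0.75 is a hypothetical value, not the paper's)
    return max(a, b) + s * np.log1p(np.exp(-abs(a - b)))

def exchange_extrinsic(ext1, ext2, s1=0.7, s2=0.7):
    # In an enhanced turbo iteration, each decoder's extrinsic LLRs are
    # scaled by an optimized factor before becoming the other decoder's
    # a priori input (s1, s2 hypothetical).
    return s2 * ext2, s1 * ext1  # a priori for decoder 1, decoder 2
```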
132 Professional Burnout of Teachers: Reasons and Regularities
Authors: Dabyltayeva R. Y., Smatova K. B., Kabekenov G., Toleshova U., Shagyrbayeva M.
Abstract:
In recent years in Kazakhstan, as in all countries, attention has turned not only to professional stress but also to the professional Burnout Syndrome of employees. Burnout is essentially a response to chronic emotional stress and manifests itself as chronic fatigue, despondency, unmotivated aggression, anger, and other symptoms. Among teachers this condition arises from mental fatigue, a sort of payment for overstrain when professional commitments demand emotional investment, the "heat of your soul". The emergence of professional burnout among teachers is due to a system of interrelated and mutually reinforcing factors relating to various levels of the personality: the individual-psychological level covers the psychodynamic characteristics of the subject, the value-motivational sphere, and the formation of skills and habits of self-regulation; the socio-psychological level includes the organization of, and interpersonal interaction in, a teacher's work. Signs of burnout were observed in 15 test subjects, and virtually every teacher showed at least one symptom. As a result of the diagnosis, 48% of teachers had signs of stress (phase syndrome), resulting in anxiety, low mood, and heightened emotional susceptibility. The following results were also obtained: a fall in general energy potential (14 persons); psychosomatic and psycho-vegetative syndromes (26 persons); emotional deficit (34 persons); emotional Burnout Syndrome (6 persons). The problem of professional burnout of teachers in current conditions should be regarded as not only meaningful but particularly relevant. The quality of education of the younger generation depends on teachers' professional development and training level, and on how "healthy" teachers are. That is why systematic support of pedagogic-professional development for teachers (including disclosure of the factors behind professional Burnout Syndrome) takes on a special meaning.
Keywords: Professional burnout syndrome, adaptive syndrome, stage of depletion syndrome, symptoms and characteristics of burnout, prophylactic of professional destruction techniques.
131 Performance Analysis of Ferrocement Retrofitted Masonry Wall Units under Cyclic Loading
Authors: Raquib Ahsan, Md. Mahir Asif, Md. Zahidul Alam
Abstract:
A huge portion of the old masonry buildings in Bangladesh is vulnerable to earthquake. In most cases these buildings contain unreinforced masonry walls, which are the most likely to suffer earthquake damage. Due to deterioration of the mortar joints and aging, the shear resistance of these unreinforced masonry walls dwindles, so retrofitting of these old buildings has become an important issue. Among the many researched and experimented techniques, ferrocement retrofitting can be a low-cost option in the context of the economic conditions of Bangladesh. This study investigates the behavior of ferrocement-retrofitted unconfined URM walls under different types of cyclic loading. Four 725 mm × 725 mm masonry wall units were prepared with bricks laid in stretcher bond with 12.5 mm of mortar between adjacent courses of bricks. To assess the effectiveness of ferrocement retrofitting, a particular type of wire mesh was used in this experiment: 20-gauge woven wire mesh with 12.5 mm × 12.5 mm square openings. After retrofitting with ferrocement, the wall units were tested by applying cyclic deformation along the diagonals of the specimens. A comparative study was then performed between the retrofitted specimens and control specimens for both the partially reversed cyclic load condition and the cyclic compression load condition. The experimental results show that the ultimate load carrying capacities of the ferrocement-retrofitted specimens are 35% and 27% greater than those of the control specimen under partially reversed cyclic loading and cyclic compression, respectively, and that before failure the deformations of the ferrocement-retrofitted specimens are 43% and 33% greater than those of the control specimen under reversed cyclic loading and cyclic compression, respectively. The test results therefore show that both the ultimate load carrying capacity and the ductility of the ferrocement-retrofitted specimens improved.
Keywords: Cyclic compression, ferrocement, masonry wall, partially reversed cyclic load, retrofitting.
130 Assessment of Wastewater Reuse Potential for an Enamel Coating Industry
Authors: Guclu Insel, Efe Gumuslu, Gulten Yuksek, Nilay Sayi Ucar, Emine Ubay Cokgor, Tugba Olmez Hanci, Didem Okutman Tas, Fatos Germirli Babuna, Derya Firat Ertem, Okmen Yildirim, Ozge Erturan, Betul Kirci
Abstract:
In order to eliminate water scarcity problems, effective precautions must be taken. Growing competition for water is increasingly forcing facilities to tackle their own water scarcity problems, and at this point the application of wastewater reclamation and reuse offers considerable economic advantages. In this study, an enamel coating facility, a facility with high water consumption, is evaluated in terms of its wastewater reuse potential. Wastewater reclamation and reuse can be regarded as one of the best available techniques for this sector. Hence, process and pollution profiles, together with detailed characterization of segregated wastewater sources, are appraised so as to identify the recoverable effluent streams arising from enamel coating operations. Daily, 170 m3 of process water is required and 160 m3 of wastewater is generated. The segregated streams generated by the two enamel coating processes are characterized in terms of conventional parameters. Relatively clean segregated wastewater streams (reusable wastewaters) are collected separately and subjected to experimental treatability studies. The results show that the reusable wastewater fraction amounts to approximately 110 m3/day, which accounts for 68% of the total wastewater. The treatment needed for the reusable wastewaters is determined by considering the water quality requirements of the various operations and the characterization of the reusable wastewater streams. Ultrafiltration (UF), nanofiltration (NF), and reverse osmosis (RO) membranes are subsequently applied to the reusable effluent fraction. Adequate organic matter removal is not obtained with this treatment sequence.
Keywords: enamel coating, membrane, reuse, wastewater
129 Estimating the Costs of Conservation in Multiple Output Agricultural Setting
Authors: T. Chaiechi, N. Stoeckl
Abstract:
Scarcity of resources for biodiversity conservation gives rise to the need for strategic investment, with priority given to the cost of conservation. While the literature provides abundant methodological options for biodiversity conservation, estimating the true cost of conservation remains abstract and simplistic, without recognizing the dynamic nature of cost. Some recent works demonstrate the power of economic theory to inform biodiversity decisions, particularly regarding the costs and benefits of biodiversity; however, the integration of the concept of true cost into biodiversity actions and planning has been very slow to come by, especially at the farm level. Conservation planning studies often use area as a proxy for cost, neglecting differing land values as well as protected areas. These studies consider only heterogeneous benefits while treating land costs as homogeneous. Analysis under the assumption of cost homogeneity yields biased estimates: not only does it fail to capture the true total cost of biodiversity actions and plans, it also fails to screen out lands that are more (or less) expensive and/or more difficult (or more suitable) for biodiversity conservation purposes, hindering the validity and comparability of the results. "Economies of scope" is another of the most neglected aspects in the conservation literature. The concept of economies of scope captures the existence of cost complementarities within a multiple-output production system: it implies a lower cost when a given farm produces multiple outputs concurrently. If there are, indeed, economies of scope, then a simplistic representation of costs will tend to overestimate the true cost of conservation, leading to suboptimal outcomes. The aim of this paper, therefore, is to provide a first broad review of the various theoretical ways in which economies of scope might occur in conservation. The paper then addresses the gaps that remain to be filled in future analysis.
Keywords: Cost, biodiversity conservation, multi-output production systems, empirical techniques.
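The cost-complementarity idea admits a standard formal statement. As a reference point (this is the textbook degree-of-scope-economies measure, not a formula quoted from the paper), for two outputs y1 (say, agricultural production) and y2 (conservation output) with joint cost function C:

\[
S_C = \frac{C(y_1, 0) + C(0, y_2) - C(y_1, y_2)}{C(y_1, y_2)},
\]

where S_C > 0 indicates economies of scope: producing both outputs on the same farm is cheaper than producing them separately, so a cost estimate that prices conservation as stand-alone production overstates the true cost by the share S_C.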
128 A Temporal QoS Ontology for ERTMS/ETCS
Authors: Marc Sango, Olimpia Hoinaru, Christophe Gransart, Laurence Duchien
Abstract:
Ontologies offer a means for representing and sharing information in many domains, particularly complex ones. For example, they can be used for representing and sharing the information of the System Requirement Specification (SRS) of a complex system, such as the SRS of ERTMS/ETCS, which is written in natural language. Since this system is a real-time and critical system, generic ontologies such as OWL and generic ERTMS ontologies provide minimal support for modeling the temporal information omnipresent in these SRS documents. To support such modeling, one challenge is to enable the representation of dynamic features evolving in time within a generic ontology with minimal redesign of it. Separating temporal information from other information can help to predict system runtime operation and to properly design and implement the system. In addition, it is helpful to provide reasoning and querying techniques over the temporal information represented in the ontology, in order to detect potential temporal inconsistencies. To address this challenge, we propose a lightweight 3-layer temporal Quality of Service (QoS) ontology for representing, reasoning, and querying over temporal and non-temporal information in a complex domain ontology. Representing QoS entities in separate layers clarifies the distinction between non-QoS entities and QoS entities in the ontology. The upper generic layer of the proposed ontology provides an intuitive knowledge of domain components, especially ERTMS/ETCS components. The separation of the intermediate QoS layer from the lower QoS layer allows us to focus on specific QoS characteristics, such as temporal or integrity characteristics. In this paper, we focus on the temporal information that can be used to predict system runtime operation. To evaluate our approach, an example of the proposed domain ontology for the handover operation, as well as a reasoning rule over temporal relations in this domain-specific ontology, are presented.
Keywords: System Requirement Specification, ERTMS/ETCS, Temporal Ontologies, Domain Ontologies.
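A temporal reasoning rule of the kind evaluated here can be pictured with interval relations. The sketch below is purely illustrative and is not taken from the paper's ontology: it encodes two Allen-style relations and flags an inconsistency if a (hypothetical) handover announcement does not strictly precede the handover execution.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    start: float
    end: float

def before(a: Interval, b: Interval) -> bool:
    # Allen's "before" relation: a ends strictly before b starts
    return a.end < b.start

def overlaps(a: Interval, b: Interval) -> bool:
    # Allen's "overlaps" relation
    return a.start < b.start < a.end < b.end

# hypothetical SRS-derived intervals (seconds from scenario start)
announcement = Interval(0.0, 4.0)
handover = Interval(3.0, 9.0)

if not before(announcement, handover):
    print("temporal inconsistency: announcement must precede handover")
```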
127 Design and Development of Constant Stress Composite Cantilever Beam
Authors: Vinod B. Suryawanshi, Ajit D. Kelkar
Abstract:
Composite materials, owing to unique properties such as high strength-to-weight ratio, corrosion resistance, and impact resistance, have huge potential as structural materials in automotive, construction, and transportation applications. However, these properties often come at a higher cost, owing to complex design methods, difficult manufacturing processes, and raw material cost. Traditionally, tapered laminated composite structures are manufactured in an autoclave using the ply drop-off technique. Autoclave manufacturing, though very capable, suffers from high capital investment and high energy consumption. Following current trends in composite manufacturing, Out-of-Autoclave (OoA) processes are regarded as emerging technologies for manufacturing structural composite components for aerospace and defense applications, although these processes still need improvement to become reliable and consistent. In this paper, the feasibility of using an out-of-autoclave process to manufacture a variable-thickness cantilever beam is discussed. The minimum-weight design for the composite beam is obtained using the constant stress beam concept, by tailoring the thickness of the beam. The ply drop-off technique was used to fabricate the variable-thickness beam from glass/epoxy prepregs. Experiments were conducted to measure the bending stresses along the span of the cantilever beam at different intervals by applying a concentrated load at the free end. The experimental results showed that the stresses in the beam at the different intervals were constant, demonstrating the ability of the OoA process to manufacture a constant stress beam. A finite element model of the constant stress beam was developed using commercial finite element simulation software. The simulation results agreed very well with the experimental results and thus validated the design and manufacturing approach used.
Keywords: Beams, Composites, Constant Stress, Structures.
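The thickness tailoring behind such a design follows from elementary beam theory; the relation below is the standard constant-stress result for a tip-loaded cantilever of constant width (a reference derivation with symbols chosen here, not notation quoted from the paper). With tip load P, span L, width b, and allowable stress sigma_a, the bending moment at station x is M(x) = P(L - x), and holding the maximum bending stress of the rectangular section constant gives

\[
\sigma_a = \frac{6\,P\,(L-x)}{b\,t(x)^2}
\quad\Longrightarrow\quad
t(x) = \sqrt{\frac{6\,P\,(L-x)}{b\,\sigma_a}},
\]

so the laminate thickness tapers as the square root of the distance to the tip, which is what ply drop-off approximates in discrete steps.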
126 In situ Real-Time Multivariate Analysis of Methanolysis Monitoring of Sunflower Oil Using FTIR
Authors: Pascal Mwenge, Tumisang Seodigeng
Abstract:
The combination of world population growth and the third industrial revolution has led to high demand for fuels. On the other hand, the decrease of global fossil fuel deposits and the environmental air pollution caused by these fuels have compounded the challenges the world faces in meeting its energy needs. Therefore, new forms of environmentally friendly and renewable fuels, such as biodiesel, are needed. The primary analytical techniques for monitoring methanolysis yield have been chromatography and spectroscopy; these methods are reliable but demanding and costly, and they do not provide real-time monitoring. In this work, the in situ monitoring of biodiesel production from sunflower oil using FTIR (Fourier Transform Infrared) spectroscopy was studied; the study was performed in an EasyMax Mettler Toledo reactor equipped with a DiComp (diamond) probe. The quantitative monitoring of methanolysis was performed by building a quantitative model with multivariate calibration using the iC Quant module of the iC IR 7.0 software. Fifteen samples of known concentrations, taken in duplicate, were used for model calibration and cross-validation; the data were pre-processed using mean centering, variance scaling, square-root spectrum math, and solvent subtraction. These pre-processing steps improved the performance indexes RMSEC, RMSECV, RMSEP, and cumulative R2 from 7.98 to 0.0096, 11.2 to 3.41, 6.32 to 2.72, and 0.9416 to 0.9999, respectively. The R2 values of 1 (training), 0.9918 (test), and 0.9946 (cross-validation) indicated the fitness of the model. The model was tested against a univariate model; small discrepancies were observed at low concentrations due to unmodelled intermediates, but the two agreed closely at concentrations above 18%. The software eliminated the complexity of the Partial Least Squares (PLS) chemometrics. It was concluded that the model obtained could be used to monitor the methanolysis of sunflower oil at industrial and lab scale.
Keywords: Biodiesel, calibration, chemometrics, FTIR, methanolysis, multivariate analysis, transesterification.
Keywords: Biodiesel, calibration, chemometrics, FTIR, methanolysis, multivariate analysis, transesterification.
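The multivariate calibration step can be reproduced in outline with any PLS implementation. The sketch below uses scikit-learn rather than the iC Quant module and shows the shape of the workflow: mean-centered, variance-scaled spectra regressed against known concentrations, with cross-validated RMSE as the quality index. The arrays are random placeholders, not the paper's spectra.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# X: FTIR absorbance spectra (n_samples x n_wavenumbers) after square-root
# and solvent-subtraction pre-processing; y: known methyl ester contents.
# Both are stand-ins for the 15 duplicated calibration samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 200))
y = rng.uniform(0, 30, size=30)

pls = PLSRegression(n_components=5, scale=True)  # mean centering + variance scaling
pls.fit(X, y)

y_cv = cross_val_predict(pls, X, y, cv=5).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))       # cross-validated error index
print(f"RMSECV = {rmsecv:.3f}")
```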
125 Meta Model Based EA for Complex Optimization
Authors: Maumita Bhattacharya
Abstract:
Evolutionary Algorithms are population-based, stochastic search techniques, widely used as efficient global optimizers. However, many real-life optimization problems require finding optimal solutions to complex, high-dimensional, multimodal problems involving computationally very expensive fitness function evaluations; the use of evolutionary algorithms in such problem domains is thus practically prohibitive. An attractive alternative is to build meta models, approximations of the actual fitness functions, which are orders of magnitude cheaper to evaluate than the actual function. Many regression and interpolation tools are available to build such meta models. This paper briefly discusses the architectures and use of such meta-modeling tools in an evolutionary optimization context. We further present two evolutionary algorithm frameworks which use meta models for fitness function evaluation. The first framework, the Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model [14], reduces computation time by the controlled use of meta-models (in this case, approximate models generated by Support Vector Machine regression) to partially replace the actual function evaluation with approximate function evaluation. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model, which does not account for uncertain scenarios involving noisy fitness functions. The second model, DAFHEA-II, an enhanced version of the original DAFHEA framework, incorporates a multiple-model based learning approach for the support vector machine approximator to handle noisy functions [15]. Empirical results obtained by evaluating the frameworks on several benchmark functions demonstrate their efficiency.
Keywords: Meta model, evolutionary algorithm, stochastic technique, fitness function, optimization, support vector machine.
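The controlled-use idea, pre-screening offspring on a cheap SVM-regression surrogate and spending exact evaluations only on the most promising candidates, can be sketched generically. This is not DAFHEA itself (its control and update rules are richer); it is a minimal surrogate-assisted EA loop with invented parameters and a toy stand-in for the expensive function.

```python
import numpy as np
from sklearn.svm import SVR

def expensive_fitness(x):
    # stand-in for a computationally expensive simulation
    return np.sum(x ** 2)

rng = np.random.default_rng(1)
dim, pop_size, generations = 10, 40, 20
pop = rng.uniform(-5, 5, size=(pop_size, dim))
archive_X = pop.copy()
archive_y = np.array([expensive_fitness(x) for x in pop])

for gen in range(generations):
    # retrain the SVM-regression surrogate on all exactly evaluated points
    surrogate = SVR(kernel="rbf", C=10.0).fit(archive_X, archive_y)
    offspring = pop + rng.normal(scale=0.5, size=pop.shape)  # Gaussian mutation
    approx = surrogate.predict(offspring)        # cheap approximate evaluation
    elite = offspring[np.argsort(approx)[:5]]    # pre-select promising offspring
    exact = np.array([expensive_fitness(x) for x in elite])  # controlled exact evals
    archive_X = np.vstack([archive_X, elite])
    archive_y = np.concatenate([archive_y, exact])
    # survivor selection over the pooled, exactly evaluated archive
    best = np.argsort(archive_y)[:pop_size]
    pop = archive_X[best]

print("best fitness found:", archive_y.min())
```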
124 Reinforcing Effects of Natural Micro-Particles on the Dynamic Impact Behaviour of Hybrid Bio-Composites Made of Short Kevlar Fibers Reinforced Thermoplastic Composite Armor
Authors: Edison E. Haro, Akindele G. Odeshi, Jerzy A. Szpunar
Abstract:
Hybrid bio-composites are developed for use in protective armor through the positive hybridization offered by reinforcing high-density polyethylene (HDPE) with short Kevlar fibers and palm wood micro-fillers. The manufacturing process combined extrusion and compression molding techniques. The mechanical behavior of Kevlar fiber reinforced HDPE with and without palm wood filler additions is compared, and the effect of the weight fraction of the added palm wood micro-fillers is determined. The Young's modulus was found to increase as the weight fraction of organic micro-particles increased; however, the flexural strength decreased with increasing weight fraction of added micro-fillers. The interfacial interactions between the components were investigated using scanning electron microscopy, and the influence of the size, random alignment, and distribution of the natural micro-particles was evaluated. Ballistic impact and dynamic shock loading tests were performed to determine the optimum proportion of short Kevlar fibers and organic micro-fillers needed to improve the impact strength of the HDPE. The results indicate a positive hybridization, by deposition of organic micro-fillers on the surface of the short Kevlar fibers used to reinforce the thermoplastic matrix, leading to enhancement of the mechanical strength and dynamic impact behavior of these materials. These hybrid bio-composites can therefore be promising materials for applications against high-velocity impacts.
Keywords: Hybrid bio-composites, organic nano-fillers, dynamic shock loading, ballistic impacts, energy absorption.
123 Identification of Flexographic-Printed Newspapers with NIR Spectral Imaging
Authors: Raimund Leitner, Susanne Rosskopf
Abstract:
Near-infrared (NIR) spectroscopy is a widely used method for material identification in laboratory and industrial applications. While standard spectrometers only allow measurements at one sampling point at a time, NIR Spectral Imaging (SI) techniques can measure, in real time, both the size and shape of an object and identify the material the object is made of. The online classification and sorting of recovered paper with NIR SI is used successfully in the paper recycling industry throughout Europe. Recently, the globalization of recycling material streams has caused water-based flexographic-printed newspapers, mainly from the UK and Italy, to appear in central Europe as well. These flexo-printed newspapers are not sufficiently de-inkable with the standard de-inking process originally developed for offset-printed paper. This de-inking process removes the ink from recovered paper and is the fundamental processing step in producing high-quality paper from recovered paper. The flexo-printed newspapers are thus a growing problem for the recycling industry, as they reduce the quality of the produced paper if their amount exceeds a certain limit within the recovered paper material. This paper presents the results of a research project on the development of an automated entry inspection system for recovered paper, conducted jointly by CTR AG (Austria) and PTS Papiertechnische Stiftung (Germany). Within the project, an NIR SI prototype for the identification of flexo-printed newspaper was developed. The prototype can identify and sort out flexo-printed newspapers in real time and achieves a detection accuracy for flexo-printed newspaper of over 95%. NIR SI, the technology the prototype is based on, allows the development of inspection systems for incoming goods in a paper production facility as well as industrial sorting systems for recovered paper in the recycling industry in the near future.
Keywords: spectral imaging, imaging spectroscopy, NIR, water-based flexographic, flexo-printed, recovered paper, real-time classification.
122 An Evaluation on the Effectiveness of a 3D Printed Composite Compression Mold
Authors: Peng Hao Wang, Garam Kim, Ronald Sterkenburg
Abstract:
The applications of composite materials within the aviation industry have been increasing at a rapid pace, and this growth has led to growing demand for tooling to support composite manufacturing processes. Tooling and tooling maintenance represent a large portion of the composite manufacturing process and its cost. Therefore, the industry's adaptability to new techniques for fabricating high-quality tools quickly and inexpensively will play a crucial role in composite materials' growing popularity in the aviation industry. One tool fabrication technique currently being developed involves additive manufacturing, such as 3D printing. Although additive manufacturing and 3D printing are not entirely new concepts, the technique has been gaining popularity due to its ability to fabricate components quickly, with low material waste and low cost. In this study, a team of Purdue University School of Aviation and Transportation Technology (SATT) faculty and students investigated the effectiveness of a 3D printed composite compression mold, fabricated by 3D scanning a steel valve cover of an aircraft reciprocating engine. The mold was used to fabricate carbon fiber versions of the valve cover. The 3D printed composite compression mold was evaluated for its performance, durability, and dimensional stability, while the fabricated carbon fiber valve covers were evaluated for their accuracy and quality. The results and data gathered from this study will determine the effectiveness of the 3D printed composite compression mold in a mass-production environment and provide valuable information for future understanding, improvements, and design considerations of 3D printed composite molds.
Keywords: Additive manufacturing, carbon fiber, composite tooling, molds.
121 Development of Wave-Dissipating Block Installation Simulation for Inexperienced Worker Training
Authors: Hao Min Chuah, Tatsuya Yamazaki, Ryosui Iwasawa, Tatsumi Suto
Abstract:
In recent years, with the advancement of digital technology, the movement to introduce so-called ICT (Information and Communication Technology), such as computer and network technology, to civil engineering and construction sites is accelerating. As part of this movement, attempts are being made in various situations to reproduce actual sites inside computers and use them for design and construction planning, as well as for training inexperienced engineers. The installation of wave-dissipating blocks on coasts is a type of work that has been carried out by skilled workers based on their years of experience, and it is one of the tasks that is difficult for inexperienced workers to carry out on site. Wave-dissipating blocks are structures designed to protect coasts, beaches, and so on from erosion by reducing the energy of ocean waves. They usually weigh more than 1 t and are installed while suspended from a crane, so training inexperienced workers on-site would be time-consuming and costly. In this paper, therefore, a block installation simulator is developed based on Unity 3D, a game development engine. The simulator computes porosity, defined here as the ratio of the total volume of the wave-dissipating blocks inside the structure to the volume of the final shape of the ideal structure; using this porosity evaluation, the simulator can determine how well the user has installed the blocks. The voxelization technique is used to calculate the porosity of the structure, simplifying the calculations, and other techniques, such as raycasting and box overlapping, are employed for accurate simulation. In the near future, the simulator will incorporate an automatic block installation algorithm based on combinatorial optimization and compare the user's block installation with the appropriate installation found by the algorithm.
Keywords: 3D simulator, porosity, user interface, voxelization, wave-dissipating blocks.
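Once both the placed blocks and the ideal envelope are voxelized, the porosity score reduces to a voxel count. The sketch below shows that reduction on boolean numpy grids; it mirrors the definition quoted above, while the simulator's exact conventions (grid resolution, boundary handling) are not specified in the abstract and are assumed here.

```python
import numpy as np

def installation_ratio(block_voxels: np.ndarray, ideal_voxels: np.ndarray) -> float:
    """Ratio of block volume lying inside the ideal envelope to the
    envelope volume, per the abstract's porosity definition.

    Both inputs are boolean voxel grids of identical shape (True = occupied).
    """
    inside = np.logical_and(block_voxels, ideal_voxels).sum()
    return inside / ideal_voxels.sum()

# toy example: a 20x20x20 ideal envelope, blocks filling the lower nine layers
ideal = np.ones((20, 20, 20), dtype=bool)
blocks = np.zeros_like(ideal)
blocks[:, :, :9] = True
print(f"ratio = {installation_ratio(blocks, ideal):.2f}")  # 0.45
```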
120 Multiple Criteria Decision Making Analysis for Selecting and Evaluating Fighter Aircraft
Authors: C. Ardil, A. M. Pashaev, R. A. Sadiqov, P. Abdullayev
Abstract:
In this paper, a multiple criteria decision making analysis technique is presented for ranking and selecting among a set of alternatives, fighter aircraft, that are evaluated against a set of decision criteria. In fighter aircraft design, conflicting decision criteria, disciplines, and technologies are always involved in the design process, and multiple criteria decision making analysis techniques can help to deal with such situations effectively and make wise design decisions. Multiple criteria decision making analysis is a systematic mathematical approach for dealing with problems which contain uncertainties in decision making. The feasibility and contributions of applying this technique to fighter aircraft selection are explored. In this study, an integrated framework incorporating multiple criteria decision making analysis into fighter aircraft evaluation is established using the entropy objective weighting method. An improved integrated multiple criteria decision making analysis method is utilized to aggregate the multiple decision criteria into one composite figure of merit, which serves as an objective function in the decision process; the suitable method thus provides an effective objective function for the decision analysis. Considering that the inherent uncertainties and the weighting factors have crucial impacts on the fighter aircraft evaluation, seven fighter aircraft models are assessed over the multiple design criteria under these weighting factors. The proposed model is based on an integrated entropy index procedure and additive multiple criteria decision making analysis theory, and its applicability to the fighter aircraft selection problem is considered. The constructed model provides an efficient approach for assessing the uncertainty of the decision problem. Finally, the fighter aircraft alternatives are ranked based on their final evaluation scores, and a sensitivity analysis is conducted.
Keywords: Fighter Aircraft, Fighter Aircraft Selection, Multiple Criteria Decision Making, Multiple Criteria Decision Making Analysis, MCDMA
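The entropy objective weighting step is mechanical enough to sketch. The snippet below is a generic entropy-weight plus additive-aggregation example (hypothetical decision matrix, benefit-type criteria assumed), not the paper's full integrated MCDMA procedure.

```python
import numpy as np

# decision matrix: rows = fighter aircraft alternatives, columns = criteria
# (all values hypothetical, all criteria assumed benefit-type and positive)
X = np.array([[8.0, 6.5, 9.0],
              [7.5, 8.0, 6.0],
              [9.0, 7.0, 7.5],
              [6.0, 9.0, 8.0]])

m, n = X.shape
P = X / X.sum(axis=0)                     # column-normalized proportions
k = 1.0 / np.log(m)
E = -k * (P * np.log(P)).sum(axis=0)      # entropy of each criterion
w = (1 - E) / (1 - E).sum()               # objective entropy weights

scores = (P * w).sum(axis=1)              # additive aggregation into one
ranking = np.argsort(-scores)             # composite figure of merit
print("weights:", np.round(w, 3))
print("ranking (best first):", ranking)
```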
119 Efficiency Validation of Hybrid Cooling Application in Hot and Humid Climate Houses of KSA
Authors: Jamil Hijazi, Stirling Howieson
Abstract:
Reducing energy consumption and CO2 emissions is probably the greatest challenge now facing mankind. Beyond considerations of global warming and CO2 production, it has to be recognized that oil is a finite resource, and the KSA, like many other oil-rich countries, will have to start considering a horizon in which hydrocarbons are not the dominant energy resource. The employment of hybrid ground-cooling pipes in combination with black-body solar collection and radiant night cooling systems may have the potential to displace a significant proportion of the oil currently used to run conventional air conditioning plant. This paper presents an investigation into the viability of such hybrid systems, with the specific aim of reducing cooling load and carbon emissions while providing year-round thermal comfort in a typical Saudi Arabian urban housing block. Soil temperatures were measured in the city of Jeddah. A parametric study was then carried out with computational simulation software (DesignBuilder) that used the field measurements and predicted the cooling energy consumption of both a base case and an ideal scenario (the typical block retrofitted with insulation, solar shading, ground pipes integrated with hypocaust floor slabs/stack ventilation, and radiant cooling pipes embedded in the floor). Initial simulation results suggest that careful 'ecological design' combined with hybrid radiant and ground pipe cooling techniques can displace air conditioning systems, producing significant cost and carbon savings (both capital and running) without appreciable loss of amenity.
Keywords: Cooling load, energy efficiency, ground pipe cooling, hybrid cooling strategy, hydronic radiant systems, low carbon emission, passive designs, thermal comfort.
118 Surface and Bulk Magnetization Behavior of Isolated Ferromagnetic NiFe Nanowires
Authors: Musaab Salman Sultan
Abstract:
The surface and bulk magnetization behavior of template-released, isolated ferromagnetic Ni60Fe40 nanowires of relatively thick diameter (~200 nm), deposited from a dilute suspension onto pre-patterned insulating chips, has been investigated experimentally using highly sensitive Magneto-Optical Kerr Effect (MOKE) magnetometry and Magneto-Resistance (MR) measurements, respectively. The MR data were consistent with the theoretical predictions of the anisotropic magneto-resistance (AMR) effect. The MR measurements, at all angles investigated, showed large features and a series of non-monotonic, continuous small features in the resistance profiles. The switching fields extracted from these features and from the MOKE loops were compared with each other and with the switching fields reported in the literature for the same analytical techniques applied to nanowires of similar composition and dimensions. A large difference between the MOKE and MR measurements was noticed; this disparity is attributed to the difference between the micro-magnetic structure of the surface and that of the bulk of such ferromagnetic nanowires. This result was confirmed by micro-magnetic simulations of individual NiFe nanowires with cylindrical and rectangular cross sections, with the same diameter/thickness as the experimental wires, using the Object Oriented Micro-magnetic Framework (OOMMF) package. The simulated loops showed different switching events, indicating that such wires pass through different magnetic states during reversal and that the micro-magnetic spin structure during switching is complicated. These results further support the difference between surface and bulk magnetization behavior in these nanowires. This work suggests that a combination of MOKE and MR measurements is required to fully understand the magnetization behavior of such relatively thick, isolated, cylindrical ferromagnetic nanowires.
Keywords: MOKE magnetometry, MR measurements, OOMMF package, micro-magnetic simulations, ferromagnetic nanowires, surface magnetic properties.
117 Ramp Rate and Constriction Factor Based Dual Objective Economic Load Dispatch Using Particle Swarm Optimization
Authors: Himanshu Shekhar Maharana, S. K. Dash
Abstract:
Economic Load Dispatch (ELD) is a vital optimization process in electric power systems for allocating generation among the various units so as to compute the cost of generation and the cost of emission of global-warming gases such as sulphur dioxide, nitrous oxide, and carbon monoxide. In this work, we employ ramp rate and constriction factor based particle swarm optimization (RRCPSO) to analyze several performance objectives, namely the cost of generation, the cost of emission, and a dual objective function involving both, through simulated results. A 6-unit, 30-bus IEEE test case system is used to simulate the results, with improved weight factors and advanced ramp rate limit constraints, to optimize the total cost of generation and emission. This method increases the tendency of particles to venture into the solution space, improving their convergence rates. Earlier approaches based on dispersed PSO (DPSO) and constriction factor based PSO (CPSO) incur comparatively higher computational time and yield poorer optimal solutions than the present work. This paper uses the well-defined ramp rate and constriction factor based PSO to compute the various objectives, namely cost, emission, and the total objective, and compares the results with the DPSO and weight improved PSO (WIPSO) techniques, showing lower computational time and better optimal solutions.
Keywords: Economic load dispatch, constriction factor based particle swarm optimization, dispersed particle swarm optimization, weight improved particle swarm optimization, ramp rate and constriction factor based particle swarm optimization.
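The two ingredients named in the title can be made concrete in a few lines: Clerc's constriction factor applied to the velocity update, and ramp-rate limits folded into the per-unit dispatch bounds. The sketch below is a generic single-objective version with invented coefficients; the paper's method additionally handles the emission objective, the demand-balance constraint, and its improved weight factors, all omitted here for brevity.

```python
import numpy as np

def constriction_chi(c1=2.05, c2=2.05):
    phi = c1 + c2                  # must exceed 4 for the factor to exist
    return 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))

def fuel_cost(P, a, b, c):
    # quadratic fuel-cost model summed over units
    return np.sum(a * P**2 + b * P + c, axis=-1)

rng = np.random.default_rng(2)
n_units, swarm = 6, 30
a = rng.uniform(0.001, 0.01, n_units); b = rng.uniform(1, 3, n_units); c = rng.uniform(10, 50, n_units)
Pmin, Pmax = np.full(n_units, 20.0), np.full(n_units, 120.0)
ramp = np.full(n_units, 15.0)                # MW per dispatch interval (invented)
P_prev = rng.uniform(40, 100, n_units)       # previous-interval outputs
lo = np.maximum(Pmin, P_prev - ramp)         # ramp-rate-tightened bounds
hi = np.minimum(Pmax, P_prev + ramp)

chi = constriction_chi()
X = rng.uniform(lo, hi, size=(swarm, n_units)); V = np.zeros_like(X)
pbest, pbest_f = X.copy(), fuel_cost(X, a, b, c)
g = pbest[np.argmin(pbest_f)]

for _ in range(200):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = chi * (V + 2.05 * r1 * (pbest - X) + 2.05 * r2 * (g - X))
    X = np.clip(X + V, lo, hi)               # enforce ramp + capacity limits
    f = fuel_cost(X, a, b, c)
    better = f < pbest_f
    pbest[better], pbest_f[better] = X[better], f[better]
    g = pbest[np.argmin(pbest_f)]

print("minimum cost found:", round(float(pbest_f.min()), 2))
```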
116 Low Sulfur Diesel Like Fuel Oil from Quick Remediation Process of Waste Oil Sludge
Authors: Isam A. H. Al Zubaidi
Abstract:
Low-sulfur diesel-like fuel oil was produced by a quick remediation process for waste oil sludge (WOS). This quick process reduces the volume of WOS at petroleum refineries and oil fields by converting the waste into a more beneficial product. The practice consists of mixing WOS with commercial diesel fuel. Different ratios of WOS to diesel fuel, ranging from 1:1 to 20:1 by mass, were prepared. Each mixture was continuously mixed for 10 minutes using a bench-type overhead stirrer, followed by filtration to separate the soil waste from the oil filtrate. The quantity and physical properties of the oil filtrate were measured. It was found that up to 15% WOS could be added to diesel fuel without dramatic changes to the properties of the diesel. The mass of the WOS was decreased by about 60%, meaning that about 60% of the mass of the sludge was recovered as light fuel oil. The physical properties of the fuel resulting from a 10% sludge mixing ratio showed that the specific gravity, ash content, carbon residue, asphaltene content, viscosity, diesel index, cetane number, and calorific value were only slightly affected. The color changed to light black, and the sulfur content increased, which calls for a further process to reduce the sulfur content of the resulting light fuel to acceptable limits. Desulfurization was achieved using adsorption with activated biomaterial: adsorption on ZnCl2-activated date palm kernel powder was effective in improving the physical properties of the diesel-like fuel, and the final sulfur content was brought to 0.185 wt%. This diesel-like fuel can be used in tractors, buses, and trucks inside and outside refineries. The remaining solid appears smooth and can be mixed into asphalt mixtures for road paving or used with other materials as an asphalt coating for buildings. Through this process, valuable fuel is recovered and the amount of waste material is decreased.
Keywords: Oil sludge, diesel fuel, blending process, filtration process.
115 Taguchi Robust Design for Optimal Setting of Process Wastes Parameters in an Automotive Parts Manufacturing Company
Authors: Charles Chikwendu Okpala, Christopher Chukwutoo Ihueze
Abstract:
As a technique that reduces variation in a product by lessening the design's sensitivity to sources of variation rather than by controlling those sources, Taguchi Robust Design entails designing ideal goods: developing a product that has minimal variance in its characteristics and also meets the exact desired performance. This paper examines the concept and its application to the brake pad product of an automotive parts manufacturing company. Although the firm claimed that defects, excess inventory, and over-production were the only wastes that grossly affect its productivity and profitability, a careful study and analysis of its manufacturing processes with the Single Minute Exchange of Dies (SMED) tool showed that the waste of waiting is a fourth waste that bedevils the firm. The Taguchi L9 orthogonal array, based on the four parameters with three levels of variation each, revealed, with a range of 2.17, that waiting is the major waste the company must reduce in order to remain viable. To enhance the company's throughput and profitability, the wastes of over-production, excess inventory, and defects, with ranges of 2.01, 1.46, and 0.82, ranking second, third, and fourth respectively, must also be reduced to the barest minimum. After proposing -33.84 as the optimum signal-to-noise ratio to be maintained for the waste of waiting, the paper advocates the adoption of the tools and techniques of the Lean Production System (LPS) and Continuous Improvement (CI), and concludes by recommending SMED to drastically reduce the set-up time that leads to unnecessary waiting.
Keywords: Taguchi Robust Design, signal-to-noise ratio, Single Minute Exchange of Dies, lean production system, waste.
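Since each waste is a smaller-is-better response, the signal-to-noise ratio behind figures such as -33.84 takes the standard Taguchi form S/N = -10 log10(mean(y^2)). A minimal sketch, with hypothetical replicate values standing in for one L9 trial:

```python
import numpy as np

def sn_smaller_is_better(y):
    """Taguchi smaller-is-better signal-to-noise ratio in dB.

    y holds the replicated responses of one orthogonal-array trial
    (here a waste measure such as waiting time; values are hypothetical).
    """
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# one trial's replicated waiting-time observations (hypothetical units)
print(round(sn_smaller_is_better([48.2, 50.1, 49.0]), 2))
```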
114 Micropropagation and in vitro Conservation via Slow Growth Techniques of Prunus webbii (Spach) Vierh: An Endangered Plant Species in Albania
Authors: Valbona Sota, Efigjeni Kongjika
Abstract:
Wild almond is a woody species that is difficult to propagate either generatively, by seed, or by vegetative methods (grafting or cuttings), and it is considered Endangered (EN) in Albania based on IUCN criteria. As a wild relative of cultivated fruit trees, this species represents a source of genetic variability and can be very important in breeding programs and cultivation. For this reason, it is of interest to use an effective method of in vitro mid-term conservation, which involves strategies to slow plant growth through physicochemical alterations of the in vitro growth conditions. Multiplication of wild almond was carried out using zygotic embryos as primary explants, with the purpose of developing a successful propagation protocol. Results showed that zygotic embryos can proliferate through direct or indirect organogenesis. During the subculture stage, a great number of new plantlets identical to the mother plants derived from the zygotic embryos were obtained. All in vitro plantlets obtained from the subcultures underwent in vitro conservation by minimal growth at low temperature (4 °C) in darkness. The efficiency of this technique was evaluated for conservation periods of 3, 6, and 10 months. Maintenance in these conditions reduced microcutting growth. Survival and regeneration rates were evaluated for each period: the maximal conservation time without subculture at 4 °C was 10 months, but survival and regeneration rates were then significantly reduced, to 15.6% and 7.6%, respectively. An optimal conservation period under these conditions is 5-6 months of storage, which yields survival and regeneration rates of about 60% and 50%. This protocol may be beneficial for mass propagation, mid-term conservation, and genetic manipulation of wild almond.
Keywords: Micropropagation, minimal growth, storage, wild almond.
113 Classifying Turbomachinery Blade Mode Shapes Using Artificial Neural Networks
Authors: Ismail Abubakar, Hamid Mehrabi, Reg Morton
Abstract:
Currently, extensive signal analysis is performed to evaluate the structural health of turbomachinery blades, an approach constrained by time and by the availability of qualified personnel. Thus, new approaches to blade dynamics identification that provide faster and more accurate results are sought. Generally, modal analysis is employed to acquire the dynamic properties of a vibrating turbomachinery blade and is widely adopted in the condition monitoring of blades. The analysis provides useful information on the different modes of vibration and the natural frequencies by exploring the different shapes the blade can take up during vibration, since every mode shape has a corresponding natural frequency. Experimental modal testing and finite element analysis are the traditional methods used to evaluate mode shapes, but they have limited applicability to real-life scenarios and so do not readily support a robust condition monitoring scheme. Real-time mode shape evaluation requires rapid evaluation at low computational cost, for which the traditional techniques are unsuitable. In this study, an artificial neural network is developed to evaluate the mode shape of a lab-scale rotating blade assembly, using results from finite element modal analysis as training data. The network performance evaluation shows that an artificial neural network (ANN) is capable of mapping the correlation between natural frequencies and mode shapes without the need for extensive signal analysis. The approach offers the advantages that the network can classify mode shapes in real time, is simple to implement, and predicts accurately. The work paves the way for further development of a robust condition monitoring system that incorporates real-time mode shape evaluation.
Keywords: Modal analysis, artificial neural network, mode shape, natural frequencies, pattern recognition.
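The frequency-to-mode-shape mapping can be prototyped with any off-the-shelf classifier. The sketch below uses scikit-learn's MLPClassifier on placeholder data standing in for FE modal results; the feature layout (first three natural frequencies) and the class labels are assumptions chosen for illustration, not the paper's actual training set.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training set in the spirit of the paper: each sample is the
# vector of the blade assembly's first few natural frequencies (Hz) from FE
# modal analysis, labelled with the dominant mode-shape class
# (0 = bending, 1 = torsion, 2 = coupled). Values are random placeholders.
rng = np.random.default_rng(3)
n = 300
freqs = np.column_stack([
    rng.normal(120, 5, n),   # mode 1 frequency
    rng.normal(340, 8, n),   # mode 2 frequency
    rng.normal(610, 12, n),  # mode 3 frequency
])
labels = rng.integers(0, 3, n)

clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(freqs[:240], labels[:240])          # train on 240 FE-derived samples
print("hold-out accuracy:", clf.score(freqs[240:], labels[240:]))
```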
112 Beneficial Use of Coal Combustion By-Products in the Rehabilitation of Failed Asphalt Pavements
Authors: Tarunjit S. Butalia, William E. Wolfe
Abstract:
This study demonstrates the use of Class F fly ash in combination with lime or lime kiln dust in the full-depth reclamation (FDR) of asphalt pavements. FDR, in the context of this paper, is a process of pulverizing a predetermined amount of flexible pavement that is structurally deficient, blending it with chemical additives and water, and compacting it in place to construct a new stabilized base course. Test sections of two structurally deficient asphalt pavements were reclaimed using Class F fly ash in combination with lime and lime kiln dust. In addition, control sections were constructed using cement, cement and emulsion, lime kiln dust and emulsion, and mill-and-fill. The service performance and structural behavior of the FDR pavement test sections were monitored to determine how the fly ash sections compared to the more traditional pavement rehabilitation techniques. Service performance and structural behavior were determined with sensors embedded in the road and with Falling Weight Deflectometer (FWD) tests. FWD results from up to 2 years after reclamation show that the cement, fly ash+LKD, and fly ash+lime sections exhibited two-year resilient modulus values comparable to open-graded cement-stabilized aggregates (more than 750 ksi). The cement treatment produced a significant increase in resilient modulus within 3 weeks of construction; beyond this curing time, the stiffness increase was slow. The fly ash+LKD and fly ash+lime test sections, on the other hand, showed a slower short-term increase in stiffness, with average resilient modulus values two years after construction in excess of 800 ksi. Additional longer-term data will come from the ongoing collection of pavement performance and environmental condition data at the two pavement sites.
Keywords: Coal fly ash, full depth reclamation, FWD, pavement rehabilitation
111 Geoelectrical Resistivity Method in Aquifer Characterization at Opic Estate, Isheri-Osun River Basin, South Western Nigeria
Authors: B. R. Faleye, M. I. Titocan, M. P. Ibitola
Abstract:
An investigation was carried out at Opic Estate in the Isheri-Osun River Basin using the electrical resistivity method to study saltwater intrusion into a freshwater aquifer system from the proximal estuarine water body. The investigation aims at aquifer characterization with the electrical resistivity method, in order to establish the depth at which fresh water fit for both domestic and industrial consumption can be found. The 2D electrical resistivity and vertical electrical sounding techniques were adopted, alongside laboratory analysis of water samples obtained from boreholes. Three traverses were investigated using Wenner and pole-dipole arrays with a multi-electrode system consisting of 84 electrodes; spreads of 581 m, 664 m, and 830 m were attained on the traverses. The main lithologies represented in the study area are sand, clay, and clayey sand, of which sand constitutes the aquifer. Vertical electrical sounding data obtained at different lateral distances along the traverses indicate that the water in the subsurface aquifer is brackish. Brackish water is represented by a low electrical resistivity signature, while fresh water is characterized by relatively high electrical resistivity; in some regions fresh water exists at depths greater than 200 m. Laboratory results show that the pH, salinity, total dissolved solids (TDS), and conductivity indicate water of poor quality, with salinity, TDS, and conductivity higher in the northern part of the study area. Both the 2D electrical resistivity and vertical electrical sounding methods indicate that the freshwater region lies at depths of 200 m or more, while aquifers not fit for domestic use occur down to about 200 m. In conclusion, it is recommended that wells be sunk beyond 220 m for the procurement of potable fresh water.
Keywords: 2D electrical resistivity, aquifer, brackish water, lithologies, freshwater, Opic Estate.
110 Text Mining Technique for Data Mining Application
Authors: M. Govindarajan
Abstract:
Applying knowledge discovery techniques to unstructured text is termed knowledge discovery in text (KDT), text data mining, or text mining. The decision tree approach is most useful in classification problems; with this technique, a tree is constructed to model the classification process, in two basic steps: building the tree and applying the tree to the database. This paper describes a proposed C5.0 classifier that applies rulesets, cross-validation, and boosting to the original C5.0 in order to reduce the error ratio. The feasibility and benefits of the proposed approach are demonstrated on a medical data set, hypothyroid. It is shown that the performance of a classifier on the training cases from which it was constructed gives a poor estimate of its accuracy on new cases; by sampling or by using a separate test file, the classifier is instead evaluated on cases that were not used to build it, which matters when both the training and test sets are large. If the cases in hypothyroid.data and hypothyroid.test were shuffled and divided into a new 2772-case training set and a 1000-case test set, C5.0 might construct a different classifier with a lower or higher error rate on the test cases. An important feature of See5 is its ability to generate classifiers called rulesets; the ruleset here has an error rate of 0.5% on the test cases. The standard errors of the means provide an estimate of the variability of the results. One way to get a more reliable estimate of predictive accuracy is f-fold cross-validation: the error rate of a classifier produced from all the cases is estimated as the ratio of the total number of errors on the hold-out cases to the total number of cases. The Boost option with x trials instructs See5 to construct up to x classifiers in this manner. Trials over numerous datasets, large and small, show that on average 10-classifier boosting reduces the error rate for test cases by about 25%.
Keywords: C5.0, error ratio, text mining, training data, test data.
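C5.0/See5 itself is a commercial tool, so the sketch below reproduces the evaluation pattern described above with open-source stand-ins: a decision tree scored by 10-fold cross-validation, then the same tree boosted over 10 trials. The synthetic, class-imbalanced data merely stands in for the hypothyroid set.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# placeholder for the hypothyroid data (2772 training + 1000 test cases
# in the paper); imbalanced classes mimic a rare-disease label
X, y = make_classification(n_samples=3772, n_features=20,
                           weights=[0.95, 0.05], random_state=0)

tree = DecisionTreeClassifier(random_state=0)
cv_err = 1 - cross_val_score(tree, X, y, cv=10).mean()
print(f"10-fold CV error, single tree: {cv_err:.3f}")

# 10-classifier boosting, analogous to See5's "Boost" option with 10 trials
# (the `estimator` keyword assumes scikit-learn >= 1.2)
boosted = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=3),
                             n_estimators=10, random_state=0)
cv_err_boost = 1 - cross_val_score(boosted, X, y, cv=10).mean()
print(f"10-fold CV error, 10-classifier boosting: {cv_err_boost:.3f}")
```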
109 RF Permeability Test in SOC Structure for Establishing USN (Ubiquitous Sensor Network)
Authors: Byung-wan Jo, Jung-hoon Park, Jang-wook Kim
Abstract:
Recently, as the information industry and mobile communication technology develop, this study addresses the new concept of intelligent structures and maintenance techniques that apply a wireless sensor network, USN (Ubiquitous Sensor Network), to social infrastructure such as civil and architectural structures. It rests on the concept of Ubiquitous Computing, which invisibly provides human life with computing by placing computers within all the objects around us and having the resulting networks cooperate, negotiate, and connect with each other. The purpose of this study is therefore to investigate the wireless communication capability of sensor nodes embedded in reinforced concrete structures, through a basic experiment on the electric wave permeability of a sensor node, using mold specimens whose variables are the concrete thickness and the steel bars most commonly used in structures, so as to determine the feasibility of applying USN to building structures. With the pitch of the steel bars, the thickness of the placed concrete, and the RF signal intensity of the transmitter-receiver as variables, and with the wireless communication module installed inside, the possible communication distance through plain concrete and the possible communication distance for each steel bar pitch were measured in the horizontal and vertical directions, respectively. In addition, for precise measurement of the attenuation of the electric wave, the magnitude of the electric wave over the range of frequencies used was measured with a spectrum analyzer. The attenuation was analyzed numerically, and the effect of the wavelength was analyzed from the properties of the frequency band. As a result of studying the feasibility of application to building structures with wireless sensors, plain concrete showed a permeability depth of 45 cm, while reinforced concrete showed 37 cm at a steel bar pitch of 5 cm and 45 cm at a pitch of 15 cm.
Keywords: Ubiquitous, concrete, permeability, wireless, sensor
108 SAF: A Substitution and Alignment Free Similarity Measure for Protein Sequences
Authors: Abdellali Kelil, Shengrui Wang, Ryszard Brzezinski
Abstract:
The literature reports a large number of approaches for measuring the similarity between protein sequences. Most of these approaches estimate this similarity using alignment-based techniques that do not necessarily yield biologically plausible results, for two reasons. First, for the case of non-alignable (i.e., not yet definitively aligned and biologically approved) sequences such as multi-domain, circular permutation, and tandem repeat protein sequences, alignment-based approaches do not succeed in producing biologically plausible results. This is due to the nature of the alignment, which is based on the matching of subsequences in equivalent positions, while non-alignable proteins often have similar and conserved domains in non-equivalent positions. Second, alignment-based approaches lead to similarity measures that depend heavily on the parameters set by the user for the alignment (e.g., gap penalties and substitution matrices). For easily alignable protein sequences, it is possible to supply a suitable combination of input parameters that allows such an approach to yield biologically plausible results. However, for difficult-to-align protein sequences, supplying different combinations of input parameters yields different results; such variable results create ambiguities and complicate the similarity measurement task. To overcome these drawbacks, this paper describes a novel and effective approach for measuring the similarity between protein sequences, called SAF, for Substitution and Alignment Free. Without resorting either to the alignment of protein sequences or to substitution relations between amino acids, SAF is able to efficiently detect the significant subsequences that best represent the intrinsic properties of protein sequences, those underlying the chronological dependencies of structural features and biochemical activities of protein sequences. Moreover, by using a new efficient subsequence matching scheme, SAF more efficiently handles protein sequences that contain similar structural features with significant meaning in chronologically non-equivalent positions. To show the effectiveness of SAF, extensive experiments were performed on protein datasets from different databases, and the results were compared with those obtained by several mainstream algorithms.
Keywords: Protein, similarity, substitution, alignment.
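For intuition about why alignment-free measures tolerate domains in non-equivalent positions, the sketch below compares two sequences by the cosine similarity of their k-mer frequency vectors. This is a generic alignment-free baseline, not SAF's own subsequence-detection scheme, which the paper describes as considerably more elaborate; the sequences are invented.

```python
from collections import Counter
from math import sqrt

def kmer_vector(seq, k=3):
    # frequency vector of overlapping k-mers: an alignment-free,
    # substitution-matrix-free representation of the sequence
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine_similarity(seq_a, seq_b, k=3):
    va, vb = kmer_vector(seq_a, k), kmer_vector(seq_b, k)
    dot = sum(va[w] * vb[w] for w in va.keys() & vb.keys())
    na = sqrt(sum(c * c for c in va.values()))
    nb = sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# domains in swapped (non-equivalent) positions still share k-mer content
a = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
b = "SHFSRQLEERLGLIEVQMKTAYIAKQRQISFVK"   # circular permutation of a
print(round(cosine_similarity(a, b), 3))    # close to 1.0
```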
107 Data Projects for “Social Good”: Challenges and Opportunities
Authors: Mikel Niño, Roberto V. Zicari, Todor Ivanov, Kim Hee, Naveed Mushtaq, Marten Rosselli, Concha Sánchez-Ocaña, Karsten Tolle, José Miguel Blanco, Arantza Illarramendi, Jörg Besier, Harry Underwood
Abstract:
One application field for data analysis techniques and technologies that is gaining momentum is the area of social good, or “common good”, covering cases related to humanitarian crises, global health care, and ecology and environmental issues, among others. The promotion of data-driven projects in this field aims at increasing the efficacy and efficiency of social initiatives, improving the way these actions help humanity in general and people in need in particular. This application field, however, poses its own barriers and challenges to the development of data-driven projects, which lag behind those in other scenarios. These challenges derive from aspects such as the scope and scale of the social issue to be solved, cultural and political barriers, the skills of the main stakeholders and the technological resources available, the motivation to engage in such projects, and the ethical and legal issues related to sensitive data. This paper analyzes the application of data projects in the field of social good, reviewing the current state of the art and noteworthy initiatives, and presenting a framework covering the key aspects to analyze in such projects. The goal is to provide guidelines for understanding the main challenges and opportunities in this type of data project, as well as to identify the main differences from “classical” data projects. A case study is presented on the initial steps and stakeholder analysis of a data project for the inclusion of refugees in the city of Frankfurt, Germany, in order to confront the framework empirically with a real example.
Keywords: Data-driven projects, humanitarian operations, personal and sensitive data, social good, stakeholder analysis.
106 Genetic Algorithm for In-Theatre Military Logistics Search-and-Delivery Path Planning
Authors: Jean Berger, Mohamed Barkaoui
Abstract:
Discrete search path planning in a time-constrained, uncertain environment relying upon imperfect sensors is known to be hard, and the problem-solving techniques proposed so far to compute near-real-time efficient path plans are mainly limited to providing solutions of only a few moves. This paper presents a new information-theoretic, open-loop decision model that explicitly incorporates false-alarm sensor readings to solve a single-agent military logistics search-and-delivery path-planning problem with anticipated feedback. The decision model minimizes expected entropy over a given time horizon, considering all anticipated observation outcomes, and thereby captures the uncertainty associated with observation events across possible scenarios. Entropy here measures the uncertainty about the searched target's location. Feedback information from possible sensor observation outcomes along the projected path plan is exploited to update the anticipated unit target occupancy beliefs. For the first time, a compact belief update formulation is generalized to explicitly include false-positive observation events that may occur during plan execution. A novel genetic algorithm is then proposed to solve the search path planning problem efficiently, providing near-optimal solutions for practical, realistic problem instances. Given the run-time performance of the algorithm, a natural extension to a closed-loop environment, progressively integrating real visit outcomes over a rolling time horizon, can easily be envisioned. Computational results show the value of the approach in comparison to alternative heuristics.
Keywords: Search path planning, false alarm, search-and-delivery, entropy, genetic algorithm.
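The belief update described above can be illustrated with a standard Bayesian occupancy update that accounts for both missed detections and false alarms. The grid size and sensor probabilities below are illustrative assumptions; the paper's compact formulation is not reproduced here.

```python
# Bayesian update of target-occupancy beliefs for one sensor observation,
# including false positives. Parameter values are illustrative only.
def update_beliefs(beliefs, observed_cell, detection, p_d=0.9, p_fa=0.1):
    """Return posterior occupancy beliefs after observing one cell.

    p_d  : probability the sensor detects the target when it is present
    p_fa : probability of a false alarm when the target is absent
    """
    posterior = []
    for i, prior in enumerate(beliefs):
        if i == observed_cell:
            likelihood = p_d if detection else (1 - p_d)
        else:
            likelihood = p_fa if detection else (1 - p_fa)
        posterior.append(likelihood * prior)
    total = sum(posterior)
    return [p / total for p in posterior]

# Uniform prior over 4 cells; a detection in cell 0 raises its belief to
# 0.75 rather than 1.0, because the sensor may have false-alarmed.
print(update_beliefs([0.25] * 4, observed_cell=0, detection=True))
```

A genetic algorithm such as the one proposed would then score candidate paths by the expected entropy of beliefs updated in this fashion over all anticipated observation outcomes.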
105 Implementing an Intuitive Reasoner with a Large Weather Database
Authors: Yung-Chien Sun, O. Grant Clark
Abstract:
In this paper, the implementation of a rule-based intuitive reasoner is presented. The implementation comprises two parts: a rule induction module and the intuitive reasoner itself. A large weather database was acquired as the data source, and twelve weather variables were chosen as the "target variables" whose values the intuitive reasoner predicts. A "complex" situation was simulated by making only subsets of the data available to the rule induction module; as a result, the induced rules were based on incomplete information with varying levels of certainty. The certainty level was modeled by a metric called "Strength of Belief", assigned to each rule or datum as ancillary information about the confidence in its accuracy. Two techniques were employed to induce rules from the data subsets: decision trees for the discrete target variables and multi-polynomial regression for the continuous ones. The intuitive reasoner was tested for its ability to use the induced rules to predict the classes of the discrete target variables and the values of the continuous target variables. The reasoner implements two types of reasoning, fast and broad, where, by analogy to human thought, the former corresponds to quick decision making and the latter to deeper contemplation. For reference, a weather data analysis approach that had been applied to similar tasks was adopted to analyze the complete database and create predictive models for the same twelve target variables. The values predicted by the intuitive reasoner and by the reference approach were compared with the actual data. The intuitive reasoner reached near-100% accuracy for two continuous target variables, and for the discrete target variables it predicted at least 70% as accurately as the reference reasoner. Since the intuitive reasoner operated on rules derived from only about 10% of the total data, it demonstrates potential advantages in dealing with sparse data sets compared with conventional methods.
Keywords: Artificial intelligence, intuition, knowledge acquisition, limited certainty.
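The two induction techniques named above can be sketched with off-the-shelf tools, assuming scikit-learn; the synthetic data, feature interpretations, and model settings are placeholders, not the study's weather database or configuration.

```python
# Sketch of the two induction techniques named above: a decision tree for a
# discrete target and polynomial regression for a continuous one.
# Data and parameters are synthetic placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 3))          # e.g., pressure, humidity, wind

# Discrete target (e.g., rain / no rain): decision tree induction.
y_class = (X[:, 1] > 0.6).astype(int)
tree = DecisionTreeClassifier(max_depth=3).fit(X, y_class)

# Continuous target (e.g., temperature): polynomial regression.
y_cont = 2.0 * X[:, 0] ** 2 - X[:, 2] + rng.normal(0, 0.05, 200)
poly = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X, y_cont)

print(tree.predict(X[:3]), poly.predict(X[:3]).round(2))
```

In the study's setting, each induced rule or fitted model would additionally carry a "Strength of Belief" value reflecting the incompleteness of the data subset it was derived from.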
104 Depth Camera Aided Dead-Reckoning Localization of Autonomous Mobile Robots in Unstructured Global Navigation Satellite System Denied Environments
Authors: David L. Olson, Stephen B. H. Bruder, Adam S. Watkins, Cleon E. Davis
Abstract:
In global navigation satellite system (GNSS) denied settings, such as indoor environments, autonomous mobile robots are often limited to dead-reckoning navigation techniques to determine their position, velocity, and attitude (PVA). Localization is typically accomplished with an inertial measurement unit (IMU), which, while precise in nature, accumulates errors rapidly, severely degrading the localization solution. Standard sensor fusion methods, such as Kalman filtering, aim to fuse precise IMU measurements with accurate aiding sensors to establish a solution that is both precise and accurate. In indoor environments where GNSS is unavailable and no other a priori information about the environment is known, effective sensor fusion is difficult to achieve because accurate aiding sensor choices are sparse. An opportunity arises, however, from employing a depth camera, which can capture point clouds of the surrounding floors and walls. Extracting attitude from these surfaces serves as an accurate aiding source that directly combats the errors arising from gyroscope imperfections. This sensor fusion configuration leads to a dramatic reduction of PVA error compared with traditional aiding sensor configurations. This paper provides the theoretical basis for the depth-camera aiding method, initial expectations of the performance benefit via simulation, and a hardware implementation verifying these expectations. The hardware implementation uses the Quanser Qbot 2™ mobile robot, with a Vector-Nav VN-200™ IMU and a Kinect™ camera from Microsoft.
Keywords: Autonomous mobile robotics, dead reckoning, depth camera, inertial navigation, Kalman filtering, localization, sensor fusion.
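Extracting attitude from planar surfaces in a depth-camera point cloud reduces, at its core, to fitting a plane and reading its normal. The sketch below fits a plane by singular value decomposition; the synthetic points and the roll/pitch sign conventions are illustrative assumptions, not the paper's implementation.

```python
# Fit a plane to depth-camera points by SVD and derive roll/pitch from the
# floor normal. Point data and angle conventions are illustrative only.
import numpy as np

def plane_normal(points):
    """Least-squares plane normal: the right singular vector associated
    with the smallest singular value of the centered point cloud."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

# Synthetic floor points, tilted slightly about the x-axis (~2.9 degrees).
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(500, 2))
z = 0.05 * xy[:, 1] + rng.normal(0, 0.002, 500)
pts = np.column_stack([xy, z])

n = plane_normal(pts)
n = n if n[2] > 0 else -n                      # orient the normal upward
roll = np.degrees(np.arctan2(n[1], n[2]))      # tilt about the x-axis
pitch = np.degrees(np.arctan2(-n[0], n[2]))    # tilt about the y-axis
print(f"roll = {roll:.1f} deg, pitch = {pitch:.1f} deg")
```

An attitude estimate obtained this way could then serve as the aiding measurement in a Kalman filter, bounding the drift that accumulates from gyroscope integration.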