Search results for: Naturally-inspired algorithms and particle swarm optimization.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3541

181 Three Dimensional Modeling of Mixture Formation and Combustion in a Direct Injection Heavy-Duty Diesel Engine

Authors: A. R. Binesh, S. Hossainpour

Abstract:

Due to stringent emission legislation for diesel engines and increasing demands on fuel consumption, the importance of detailed 3D simulation of fuel injection, mixing and combustion has increased in recent years. In the present work, the FIRE code has been used to study the detailed modeling of spray and mixture formation in a Caterpillar heavy-duty diesel engine. The paper provides an overview of the submodels implemented, which account for liquid spray atomization, droplet secondary break-up, droplet collision, impingement, turbulent dispersion and evaporation. The simulation was performed from intake valve closing (IVC) to exhaust valve opening (EVO). The predicted in-cylinder pressure is validated by comparison with existing experimental data. The good agreement between predicted and experimental values confirms the accuracy of the numerical predictions presented in this work. Predictions of engine emissions were also performed, and good quantitative agreement between measured and predicted NOx and soot emission data was obtained using the Zeldovich mechanism and the Hiroyasu model. In addition, the results reported in this paper illustrate that numerical simulation can be one of the most powerful and beneficial tools for internal combustion engine design, optimization and performance analysis.

Keywords: Diesel engine, Combustion, Pollution, CFD.

180 Optimal Sliding Mode Controller for Knee Flexion During Walking

Authors: Gabriel Sitler, Yousef Sardahi, Asad Salem

Abstract:

This paper presents an optimal and robust sliding mode controller (SMC) to regulate the position of the knee joint angle for patients suffering from knee injuries. The controller imitates the role of active orthoses that produce the joint torques required to overcome gravity and loading forces and regain natural human movements. To this end, a mathematical model of the shank, the lower part of the leg, is derived first and then used for the control system design and computer simulations. The design of the controller is carried out in optimal and multi-objective settings. Four objectives are considered: minimization of the control effort and tracking error; and maximization of the control signal smoothness and closed-loop system’s speed of response. Optimal solutions in terms of the Pareto set and its image, the Pareto front, are obtained. The results show that there are trade-offs among the design objectives and many optimal solutions from which the decision-maker can choose to implement. Also, computer simulations conducted at different points from the Pareto set and assuming knee squat movement demonstrate competing relationships among the design goals. In addition, the proposed control algorithm shows robustness in tracking a standard gait signal when accounting for uncertainty in the shank’s parameters.

Keywords: Optimal control, multi-objective optimization, sliding mode control, wearable knee exoskeletons.

179 Post Pandemic Mobility Analysis through Indexing and Sharding in MongoDB: Performance Optimization and Insights

Authors: Karan Vishavjit, Aakash Lakra, Shafaq Khan

Abstract:

The COVID-19 pandemic has pushed healthcare professionals to use big data analytics as a vital tool for tracking and evaluating the effects of contagious viruses. Analysing such huge datasets effectively requires efficient NoSQL databases. This research integrates several datasets, cutting down query processing time and creating predictive visual artifacts, and thereby makes possible the analysis of post-COVID-19 health and well-being outcomes and the evaluation of the effectiveness of government efforts during the pandemic. We recommend applying sharding and indexing to improve query effectiveness and scalability as the dataset expands. Distributing the datasets across a sharded database and indexing the individual shards enables effective data retrieval and analysis. The key goal is the analysis of connections between governmental activities, poverty levels, and post-pandemic well-being: we evaluate the effectiveness of governmental initiatives to improve health and lower poverty levels by utilising advanced data analysis and visualisations. The findings provide relevant data that support the advancement of the UN sustainability objectives, future pandemic preparation, and evidence-based decision-making. This study shows how big data and NoSQL databases may be used to address global health problems.
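As an illustration of the sharding and indexing approach described above, the following minimal sketch uses pymongo against an assumed sharded cluster; the database, collection, and field names are illustrative, not the study's actual schema.

```python
# Minimal sketch (assumed setup): enable sharding and index a mobility collection
# with pymongo. Database, collection and field names are illustrative only.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")  # mongos router of a sharded cluster
db = client["pandemic_mobility"]

# Shard the collection on a compound key so region-level queries hit few shards.
client.admin.command("enableSharding", "pandemic_mobility")
client.admin.command(
    "shardCollection",
    "pandemic_mobility.mobility_records",
    key={"region": 1, "date": 1},
)

# Index the fields used by the analytical queries on each shard.
records = db["mobility_records"]
records.create_index([("region", ASCENDING), ("date", ASCENDING)])
records.create_index([("poverty_index", ASCENDING)])

# Example query that benefits from both the shard key and the secondary indexes.
cursor = records.find(
    {"region": "Ontario", "date": {"$gte": "2022-01-01"}},
    {"_id": 0, "date": 1, "mobility_score": 1},
)
```

With the shard key matching the most common query predicate, the router can target a single shard, which is where the reported reduction in query processing time would come from.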

Keywords: COVID-19, big data, data analysis, indexing, NoSQL, sharding, scalability, poverty.

178 Profit Optimization for Solar Plant Electricity Production

Authors: Fl. Loury, P. Sablonière

Abstract:

In this paper, a stochastic scenario-based model predictive control approach applied to the molten salt storage system of a concentrated solar tower power plant is presented. The main goal of this study is to build a tool to analyze current and expected future resources for evaluating the weekly power to be advertised on the electricity secondary market. This tool will allow the plant operator to maximize profits while hedging the impact on the system of stochastic variables such as resource or sunlight shortage.

Solving the problem first requires a mixed logical dynamical model of the plant. The two stochastic variables, namely the incoming solar energy and the electricity demand from the secondary market, are modeled by least-squares regression. Robustness is achieved by drawing a number of realizations of the random variables and applying the most restrictive one to the system. This scenario-approach control technique provides the plant operator with a confidence interval containing a given percentage of possible realizations of the stochastic variables, such that robust control is always achieved within its bounds. The results obtained from many trajectory simulations show the existence of a "reliable" interval, which experimentally confirms the robustness of the algorithm.
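The core of the scenario approach, picking the most restrictive of many drawn realizations, can be sketched as follows. This is a minimal illustration under assumed Gaussian models for the two stochastic variables; all numerical values and the storage model are placeholders, not the plant's fitted parameters.

```python
# Minimal sketch of the scenario approach: draw realizations of the stochastic
# variables and keep the most restrictive one. All figures are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def draw_scenarios(n_scenarios, horizon):
    """Random realizations of incoming solar energy and market demand (MWh/day)."""
    solar = np.clip(rng.normal(60.0, 15.0, size=(n_scenarios, horizon)), 0.0, None)
    demand = np.clip(rng.normal(45.0, 10.0, size=(n_scenarios, horizon)), 0.0, None)
    return solar, demand

def robust_weekly_bid(n_scenarios=200, horizon=7, storage_capacity=80.0):
    """Advertise only the power that stays feasible under the worst drawn scenario."""
    solar, demand = draw_scenarios(n_scenarios, horizon)
    worst_solar = solar.min(axis=0)    # least sun per day over all scenarios
    worst_demand = demand.max(axis=0)  # highest demand per day over all scenarios
    # Deliverable energy is bounded by the worst-case resource plus storage draw-down.
    deliverable = np.minimum(worst_solar + storage_capacity / horizon, worst_demand)
    return deliverable

print(np.round(robust_weekly_bid(), 1))
```

Increasing the number of drawn scenarios widens the confidence that the advertised power remains feasible, at the cost of a more conservative bid.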

Keywords: Molten Salt Storage System, Concentrated Solar Tower Power Plant, Robust Stochastic Model Predictive Control.

177 Bone Mineral Density and Quality, Body Composition of Women in the Postmenopausal Period

Authors: Vladyslav Povoroznyuk, Oksana Ivanyk, Nataliia Dzerovych

Abstract:

In the diagnosis of osteoporosis, bone mineral density is considered the gold standard; however, X-ray densitometry is not an accurate indicator of osteoporotic fracture risk under all circumstances. In this regard, the search for new methods that could determine indicators not only of mineral density but also of bone tissue quality is a logical step for diagnostic optimization. One of these methods is the evaluation of trabecular bone quality. The aim of this study was to examine the quality and mineral density of spine bone tissue, the femoral neck, and the body composition of women depending on the duration of the postmenopausal period, and to determine the correlation of body fat with indicators of bone mineral density and quality. The study examined 179 women in the premenopausal and postmenopausal periods. The patients were divided into the following groups: women in the premenopausal period and women at various stages of the postmenopausal period (early, middle, late postmenopause). A general examination and a study of the above parameters were conducted with a General Electric X-ray densitometer. The results show that bone quality and mineral density probably deteriorate as the postmenopausal period advances. The total fat to lean mass ratio is not likely to change with age. In the middle and late postmenopausal periods, the bone tissue mineral density of the spine and femoral neck increases along with total fat mass.

Keywords: Osteoporosis, bone tissue mineral density, bone quality, fat mass, lean mass, postmenopausal osteoporosis.

176 Aerodynamic Design Optimization of High-Speed Hatchback Cars for Lucrative Commercial Applications

Authors: A. Aravind, M. Vetrivel, P. Abhimanyu, C. A. Akaash Emmanuel Raj, K. Sundararaj, V. R. S. Kumar

Abstract:

The demand for high-speed, low-budget hatchback cars with diversified options is increasing to meet the preferences of new-generation buyers. This paper aims to augment the current speed of hatchback cars through an aerodynamic drag reduction technique. Inverted airfoils are fitted at the bottom of the car to generate downforce, negating lift while increasing the usable speed range and achieving better road performance. The numerical simulations have been carried out using a 2D steady pressure-based realizable k-ε model with enhanced wall treatment. In our numerical studies, the Reynolds-averaged Navier-Stokes model and its solution code are used. The code is calibrated and validated using the exact solution of the 2D boundary layer displacement thickness at the Sanal flow choking condition for adiabatic flows. We observed through the parametric analytical studies that an inverted airfoil integrated with the bottom surface at various predesigned locations of a hatchback car can improve its overall aerodynamic efficiency through drag reduction, which decreases fuel consumption significantly and ensures optimum road performance at the maximum permissible speed within the framework of the manufacturer's constraints.

Keywords: Aerodynamics of commercial cars, downward force, hatchback car, inverted airfoil.

175 Numerical Studies on Thrust Vectoring Using Shock Induced Supersonic Secondary Jet

Authors: Jerin John, Subanesh Shyam R., Aravind Kumar T. R., Naveen N., Vignesh R., Krishna Ganesh B, Sanal Kumar V. R.

Abstract:

Numerical studies have been carried out using a validated two-dimensional RNG k-epsilon turbulence model for the design optimization of a thrust vector control system using a shock-induced supersonic secondary jet. Parametric analytical studies have been carried out with various secondary jets at different divergent locations, jet interaction angles, and jet pressures. The results of the parametric studies for the case on hand reveal that a primary nozzle with a small divergence angle and downstream injection at a distance of 2.5 times the primary nozzle throat diameter from the throat location warrant higher efficiency over a certain range of jet pressures and jet angles. We observed that a supersonic secondary jet opposing the core flow at a jet interaction angle of 40° to the axis, far downstream of the nozzle throat, facilitates better thrust vectoring than a secondary jet directed with the core flow at various interaction angles. We concluded that fixing the supersonic secondary jet nozzle pointing towards the throat at a suitable angle, at a distance of 2 to 4 times the primary nozzle throat diameter from the primary nozzle throat location as the case may be, could facilitate better thrust vectoring for supersonic aerospace vehicles.

Keywords: Fluidic thrust vectoring, rocket steering, supersonic secondary jet location, TVC in spacecraft.

174 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes

Authors: Sky Chou, Joseph C. Chen

Abstract:

This paper focuses on using Six Sigma methodologies to reach the desired shrinkage of a manufactured high-density polyethylene (HDPE) part produced by an injection molding machine. It presents a case study where the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk for an injection molding process. To improve this process and keep the product within specifications, the Six Sigma define, measure, analyze, improve, and control (DMAIC) approach was implemented in this study. The Six Sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by the customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors consist of the cooling time, melt temperature, holding time, and metering stroke. The noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal. With the new settings, the process capability index improved dramatically. The purpose of this study is to show that the Six Sigma and Taguchi methodologies can be used efficiently to determine the important factors that improve the process capability index of the injection molding process.
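For readers unfamiliar with Taguchi analysis, the sketch below shows how an L9 array with four three-level factors and a two-level noise factor is evaluated via signal-to-noise ratios. The array layout is standard; the shrinkage measurements and the choice of a nominal-the-best S/N ratio are illustrative assumptions, not the study's data.

```python
# Minimal sketch of a Taguchi L9 analysis for shrinkage; measurements are illustrative.
import numpy as np

# L9 orthogonal array: 9 runs, 4 factors (cooling time, melt temperature,
# holding time, metering stroke), each at 3 levels coded 0/1/2.
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

# Shrinkage (%) measured for each run under the two noise conditions (material brands).
y = np.array([
    [1.82, 1.90], [1.61, 1.70], [1.50, 1.58],
    [1.71, 1.80], [1.42, 1.49], [1.60, 1.66],
    [1.52, 1.47], [1.33, 1.40], [1.41, 1.48],
])

# Nominal-the-best S/N ratio per run: 10*log10(mean^2 / variance).
mean = y.mean(axis=1)
var = y.var(axis=1, ddof=1)
sn = 10.0 * np.log10(mean**2 / var)

# Average S/N per factor level; the level with the highest S/N is preferred.
for factor in range(L9.shape[1]):
    level_sn = [sn[L9[:, factor] == level].mean() for level in range(3)]
    print(f"factor {factor}: level S/N = {np.round(level_sn, 2)}")
```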

Keywords: Injection molding, shrinkage, six sigma, Taguchi parameter design.

173 Transmission Line Congestion Management Using Hybrid Fish-Bee Algorithm with Unified Power Flow Controller

Authors: P. Valsalal, S. Thangalakshmi

Abstract:

The electrical power industry is undergoing a widespread changeover worldwide from the old-style monopolistic structure towards a horizontally distributed competitive structure to meet the demand of rising consumption. When the transmission lines of a deregulated system are incapable of accommodating the entire service need, the lines are overloaded or congested. The intermediary between customer and power producer, the Independent System Operator (ISO), is tasked with relieving the congestion without violating transmission line restrictions. Among the existing approaches for congestion management, the most frequently used are generation rescheduling and load curtailment. There is a limit to rescheduling the generators, and further load may not be served with the prevailing resources unless more private power producers are added to the system, considerably raising the cost. Hence, congestion is relieved by appropriate Flexible AC Transmission System (FACTS) devices, which boost the existing transfer capability of transmission lines. The FACTS device chosen is the Unified Power Flow Controller (UPFC); its correct placement is vital, and it should be positioned in the most congested line. Hence, the weak line is identified using the power flow performance index with a new objective function and the proposed hybrid fish-bee algorithm. Further, locating the UPFC in the appropriate line reduces the branch loading and minimizes the voltage deviation. The power transfer capability of the lines is determined with and without the UPFC in the identified congested line of the IEEE 30-bus system, and the simulation results are compared with prevailing algorithms. It is observed that the transfer capability of the existing line is increased with the presented algorithm, thus alleviating the congestion.
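The weak-line ranking step mentioned above is commonly based on a real-power line-loading performance index. The sketch below shows one conventional form of that index; the line flows and limits are illustrative numbers, not the IEEE 30-bus data, and the hybrid fish-bee search itself is not reproduced here.

```python
# Minimal sketch of a line-loading performance index used to rank congested lines.
import numpy as np

def line_loading_pi(p_flow, p_limit, n=2, weight=1.0):
    """PI = sum over lines of w/(2n) * (P_l / P_l,max)^(2n).
    Lines loaded near or above their limit dominate the sum."""
    ratio = np.asarray(p_flow) / np.asarray(p_limit)
    return np.sum(weight / (2 * n) * ratio ** (2 * n))

p_flow = np.array([95.0, 40.0, 130.0, 60.0])    # MW flowing on each monitored line
p_limit = np.array([100.0, 80.0, 120.0, 90.0])  # MW thermal limits

print("system PI:", round(float(line_loading_pi(p_flow, p_limit)), 3))

# Per-line contributions identify the weak (most congested) line,
# a natural candidate location for the UPFC.
contrib = (p_flow / p_limit) ** 4 / 4
print("weakest line index:", int(np.argmax(contrib)))
```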

Keywords: Available line transfer capability, congestion management, FACTS device, hybrid fish-bee algorithm, ISO, UPFC.

172 A Model for Optimal Design of Mixed Renewable Warranty Policy for Non-Repairable Weibull Life Products under Conflict between Customer and Manufacturer Interests

Authors: Saleem Z. Ramadan

Abstract:

A model is presented to find the optimal design of a mixed renewable warranty policy for non-repairable Weibull life products. The optimal design considers the conflict of interests between the customer and the manufacturer: the customer's interests are a longer full-rebate coverage period and a longer total warranty coverage period, while the manufacturer's interests are a lower warranty cost and a lower risk. The design factors are the full-rebate and total warranty coverage periods. Results showed that the mixed policy is better than the full-rebate policy in terms of risk and total warranty coverage period in all three bathtub regions. In addition, results showed that the linear policy is better than the mixed policy in the infant mortality and constant failure rate regions, while the mixed policy is better than the linear policy in the ageing region of the model. Furthermore, the results showed that using a burn-in period for infant mortality products reduces warranty cost and risk.
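To make the Weibull life model concrete, the sketch below evaluates the probability of failure within a full-rebate period and the expected number of free replacements under a renewing free-replacement assumption. The shape and scale values and this simplified cost driver are illustrative assumptions; they are not the paper's full mixed-policy model.

```python
# Minimal sketch relating a Weibull life model to warranty exposure; parameters are illustrative.
import math

def weibull_cdf(t, beta, eta):
    """Probability that a non-repairable item fails before time t."""
    return 1.0 - math.exp(-((t / eta) ** beta))

def expected_replacements_renewing(w, beta, eta):
    """Expected number of free replacements under a renewing full-rebate
    (free-replacement) policy of length w: F(w) / (1 - F(w))."""
    f = weibull_cdf(w, beta, eta)
    return f / (1.0 - f)

beta, eta = 0.8, 5.0   # beta < 1: infant-mortality region of the bathtub curve
w = 1.0                # full-rebate coverage period (years)
print("P(fail within warranty):", round(weibull_cdf(w, beta, eta), 3))
print("expected free replacements:", round(expected_replacements_renewing(w, beta, eta), 3))
```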

Keywords: Reliability, Mixed warranty policy, Optimization, Weibull Distribution.

171 Sensor and Actuator Fault Detection in Connected Vehicles under a Packet Dropping Network

Authors: Z. Abdollahi Biron, P. Pisu

Abstract:

Connected vehicles are one of the promising technologies for future Intelligent Transportation Systems (ITS). A connected vehicle system is essentially a set of vehicles communicating through a network to exchange their information with each other and the infrastructure. Although this interconnection of the vehicles can be potentially beneficial in creating an efficient, sustainable, and green transportation system, a set of safety and reliability challenges come out with this technology. The first challenge arises from the information loss due to unreliable communication network which affects the control/management system of the individual vehicles and the overall system. Such scenario may lead to degraded or even unsafe operation which could be potentially catastrophic. Secondly, faulty sensors and actuators can affect the individual vehicle’s safe operation and in turn will create a potentially unsafe node in the vehicular network. Further, sending that faulty sensor information to other vehicles and failure in actuators may significantly affect the safe operation of the overall vehicular network. Therefore, it is of utmost importance to take these issues into consideration while designing the control/management algorithms of the individual vehicles as a part of connected vehicle system. In this paper, we consider a connected vehicle system under Co-operative Adaptive Cruise Control (CACC) and propose a fault diagnosis scheme that deals with these aforementioned challenges. Specifically, the conventional CACC algorithm is modified by adding a Kalman filter-based estimation algorithm to suppress the effect of lost information under unreliable network. Further, a sliding mode observer-based algorithm is used to improve the sensor reliability under faults. The effectiveness of the overall diagnostic scheme is verified via simulation studies.
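The role of the Kalman filter described above, propagating a prediction when a packet is lost and updating only when a measurement arrives, can be illustrated with a minimal sketch. The simple spacing model, noise levels, and drop probability below are assumptions for illustration, not the paper's CACC design.

```python
# Minimal sketch of Kalman filtering under packet drops: skip the measurement
# update when the V2V packet is lost. Model and noise values are assumed.
import numpy as np

dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])      # state: [spacing, relative speed]
H = np.array([[1.0, 0.0]])                 # measurement: spacing from the received packet
Q = 0.01 * np.eye(2)                       # process noise covariance (assumed)
R = np.array([[0.05]])                     # measurement noise covariance (assumed)

x = np.array([[10.0], [0.0]])
P = np.eye(2)

def kalman_step(x, P, z=None):
    """One predict/update cycle; pass z=None when the packet was dropped."""
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    if z is None:                          # packet drop: keep the prediction only
        return x_pred, P_pred
    y = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    return x_pred + K @ y, (np.eye(2) - K @ H) @ P_pred

rng = np.random.default_rng(1)
for k in range(50):
    dropped = rng.random() < 0.3           # assumed 30% packet drop probability
    z = None if dropped else np.array([[10.0 + rng.normal(0, 0.2)]])
    x, P = kalman_step(x, P, z)
print("estimated spacing:", round(float(x[0, 0]), 2))
```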

Keywords: Fault diagnostics, communication network, connected vehicles, packet drop out, platoon.

170 A Review of the Characteristics and Optimization of Optical Properties of Zirconia Ceramics for Aesthetic Dental Restorations

Authors: R. A. Shahmiri, O. C. Standard, J. N. Hart, C. C. Sorrell

Abstract:

The ceramic yttria-stabilized tetragonal zirconia polycrystal (Y-TZP) has been used as a dental biomaterial for several decades. The strength and toughness of this material can be accounted for by its toughening mechanisms, which include transformation toughening, crack deflection, zone shielding, contact shielding, and crack bridging. Prevention of crack propagation is of critical importance in high-fatigue situations, such as those encountered in mastication and para-function. However, the poor translucence of Y-TZP in polycrystalline form is such that it may not meet the aesthetic requirements due to its white/grey appearance. To improve the optical properties of Y-TZP, more detailed study of the optical properties is required; in particular, precise evaluation of the refractive index, absorption coefficient, and scattering coefficient are necessary. The measurement of the optical parameters has been based on the assumption that light scattered from biological media is isotropically distributed over all angles. In fact, the optical behavior of real biological materials depends on the angular scattering of light due to the anisotropic nature of the materials. The purpose of the present work is to evaluate the optical properties (including color, opacity/translucence, scattering, and fluorescence) of zirconia dental ceramics and their control through modification of the chemical composition, phase composition, and surface microstructure.

Keywords: Optical properties, opacity/translucence, scattering, fluorescence, chemical composition, phase composition, surface microstructure.

169 Sperm Whale Signal Analysis: Comparison using the Auto Regressive model and the Daubechies 15 Wavelets Transform

Authors: Olivier Adam, Maciej Lopatka, Christophe Laplanche, Jean-François Motsch

Abstract:

This article presents the results of using a parametric approach and a Wavelet Transform to analyse signals emitted by the sperm whale. The extraction of intrinsic characteristics of these unique signals emitted by marine mammals is still at present a difficult exercise for various reasons: firstly, the signals are non-stationary, and secondly, they are obstructed by interfering background noise. In this article, we compare the advantages and disadvantages of both methods: Auto Regressive models and the Wavelet Transform. These approaches serve as an alternative to the commonly used estimators based on the Fourier Transform, for which the hypotheses necessary for its application are, in certain cases, not sufficiently proven. These modern approaches provide effective results, particularly for the periodic tracking of the signal's characteristics, and notably when the signal-to-noise ratio negatively affects signal tracking. Our objectives are twofold. Our first goal is to identify the animal through its acoustic signature. This includes recognition of the marine mammal species and ultimately of the individual animal (within the species). The second is much more ambitious and directly involves the intervention of cetologists to study the sounds emitted by marine mammals in an effort to characterize their behaviour. We are working on an approach based on recordings of marine mammal signals, and the findings from these data are obtained with the Wavelet Transform. This article explores the reasons for using this approach. In addition, thanks to the use of new processors, these algorithms, once heavy in calculation time, can be integrated in a real-time system.
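The two analyses can be sketched side by side on a synthetic click-like signal. The AR order, the toy signal, the use of a plain least-squares AR fit, and the PyWavelets 'db15' decomposition below are assumptions chosen for illustration, not the authors' exact processing chain.

```python
# Minimal sketch: AR model fit vs. Daubechies-15 wavelet decomposition on a toy click.
import numpy as np
import pywt  # PyWavelets, assumed available

rng = np.random.default_rng(0)
fs = 48_000
t = np.arange(0, 0.05, 1 / fs)
signal = np.exp(-t * 400) * np.sin(2 * np.pi * 8000 * t) + 0.1 * rng.standard_normal(t.size)

def ar_coefficients(x, order=10):
    """Least-squares fit of an autoregressive model x[n] = sum_k a_k * x[n-k] + e[n]."""
    rows = [x[i:i + order][::-1] for i in range(len(x) - order)]
    X = np.vstack(rows)
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

print("AR(10) coefficients:", np.round(ar_coefficients(signal), 3))

# Daubechies-15 wavelet decomposition: energy per scale as a compact signature.
coeffs = pywt.wavedec(signal, "db15", level=4)
energies = [float(np.sum(c ** 2)) for c in coeffs]
print("wavelet energy per level:", np.round(energies, 3))
```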

Keywords: Autoregressive model, Daubechies Wavelet, Fourier Transform, marine mammals, signal processing, spectrogram, sperm whale, Wavelet Transform.

168 Gluten-Free Cookies Enriched with Blueberry Pomace: Optimization of Baking Process

Authors: Aleksandra Mišan, Bojana Šarić, Nataša Nedeljković, Mladenka Pestorić, Pavle Jovanov, Milica Pojić, Jelena Tomić, Bojana Filipčev, Miroslav Hadnađev, Anamarija Mandić

Abstract:

With the aim of improving the nutritional profile and antioxidant capacity of gluten-free cookies, blueberry pomace, a by-product of juice production, was processed into a new food ingredient by drying and grinding and used in a gluten-free cookie formulation. Since the quality of a baked product is highly influenced by the baking conditions, the objective of this work was to optimize the baking time and the thickness of the dough pieces by applying Response Surface Methodology (RSM) in order to obtain the best technological quality of the cookies. The experiments were carried out according to a Central Composite Design (CCD), selecting the dough thickness and baking time as independent variables, while hardness, color parameters (L*, a* and b* values), water activity, diameter and short/long ratio were the response variables. According to the results of the RSM analysis, a baking time of 13.74 min and a dough thickness of 4.08 mm were found to be optimal for a baking temperature of 170 °C. As similar optimal parameters were obtained in a previously conducted experiment based on sensory analysis, Response Surface Methodology can be considered a suitable approach to optimize the baking process.
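The RSM step amounts to fitting a second-order polynomial to the CCD runs and locating its stationary point. The sketch below shows that mechanics; the design points and hardness responses are illustrative placeholders, not the measured cookie data.

```python
# Minimal sketch of fitting a second-order response surface to CCD data (illustrative values).
import numpy as np

# Coded CCD factors: x1 = dough thickness, x2 = baking time.
x1 = np.array([-1, -1, 1, 1, -1.414, 1.414, 0, 0, 0, 0, 0])
x2 = np.array([-1, 1, -1, 1, 0, 0, -1.414, 1.414, 0, 0, 0])
y = np.array([32.1, 28.4, 35.6, 30.2, 29.0, 34.1, 33.3, 27.9, 30.5, 30.9, 30.2])  # hardness (N)

# Full quadratic model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted coefficients:", np.round(beta, 3))

# Stationary point of the fitted surface (candidate optimum, in coded units).
b = beta[1:3]
B = np.array([[beta[4], beta[3] / 2], [beta[3] / 2, beta[5]]])
x_opt = -0.5 * np.linalg.solve(B, b)
print("stationary point (coded x1, x2):", np.round(x_opt, 3))
```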

Keywords: Baking process, blueberry pomace, gluten-free cookies, Response Surface Methodology.

167 Efficiency Based Model for Solar Urban Planning

Authors: Amado, M. P., Amado, A., Poggi, F., Correia de Freitas, J.

Abstract:

Today it is widely understood that global energy consumption patterns are directly related to the urban expansion and development process. This expansion is based on the natural growth of human activities and has left most urban areas totally dependent on fossil fuel derived external energy inputs. This status quo of production, transportation, storage and consumption of energy has become inefficient and is set to become even more so when the continuous increases in energy demand are factored in. The territorial management of land use and related activities is a central component in the search for more efficient models of energy use, models that can meet current and future regional, national and European goals.

In this paper a methodology is developed and discussed with the aim of improving energy efficiency at the municipal level. The development of this methodology is based on the monitoring of energy consumption and its use patterns resulting from the natural dynamism of human activities in the territory and can be utilized to assess sustainability at the local scale. A set of parameters and indicators are defined with the objective of constructing a systemic model based on the optimization, adaptation and innovation of the current energy framework and the associated energy consumption patterns. The use of the model will enable local governments to strike the necessary balance between human activities and economic development and the local and global environment while safeguarding fairness in the energy sector.

Keywords: Solar urban planning, solar smart city, urban development, energy efficiency.

166 Optimization of the Dental Direct Digital Imaging by Applying the Self-Recognition Technology

Authors: Mina Dabirinezhad, Mohsen Bayat Pour, Amin Dabirinejad

Abstract:

This paper introduces a technology to solve some of the deficiencies of direct digital radiology. Nowadays, digital radiology is the latest progression in dental imaging and has become an essential part of dentistry. Direct digital radiology comprises two main parts: an intraoral X-ray machine and a sensor (digital image receptor). Dentists and dental nurses experience difficulties during the image-taking process with the direct digital X-ray machine. For instance, sometimes they need to readjust the sensor in the patient's mouth and take the X-ray image again because of its low quality. Another problem is that the sensor may move in the patient's mouth, producing an unsuitable image for the dentist. This makes the procedure time-consuming for dentists and dental nurses. On the other hand, taking several X-ray images creates problems for the patient, such as harm to their health and pain in the mouth due to the pressure of the sensor on the jaw. The authors propose a technology to solve the above-mentioned issues, called Self-Recognition Direct Digital Radiology (SDDR). This technology is based on the principle that the intraoral X-ray machine is capable of detecting the location of the sensor in the patient's mouth automatically. In addition to solving the aforementioned problems, SDDR technology has a smaller environmental impact than the previous version.

Keywords: Dental direct digital imaging, digital image receptor, digital x-ray machine, and environmental impacts.

165 Selective Encryption using ISMA Cryp in Real Time Video Streaming of H.264/AVC for DVB-H Application

Authors: Jay M. Joshi, Upena D. Dalal

Abstract:

Multimedia information availability has increased dramatically with the advent of video broadcasting on handheld devices. But with this availability come problems of maintaining the security of information that is displayed in public. ISMA Encryption and Authentication (ISMACryp) is one of the chosen technologies for service protection in DVB-H (Digital Video Broadcasting - Handheld), the TV system for portable handheld devices. ISMACryp is applied to H.264/AVC (Advanced Video Coding) content while leaving all structural data as it is. Two modes of ISMACryp are available: CTR mode (Counter) and CBC mode (Cipher Block Chaining). Both modes are based on the 128-bit AES algorithm. Full AES encryption is complex and requires a long execution time, which is not suitable for real-time applications such as live TV. The proposed system aims to gain a deep understanding of video data security in multimedia technologies and to provide security for real-time video applications using selective encryption for H.264/AVC. Five levels of security are proposed in this paper based on the content of the NAL unit in the Constrained Baseline profile of H.264/AVC. The selective encryption at the different levels encrypts the intra-prediction mode, residue data, inter-prediction mode or motion vectors only. The experimental results described in this paper show that the fifth level, full ISMACryp, provides the highest level of security at the cost of more encryption time, while the first level provides a lower level of security by encrypting only motion vectors, with lower execution time and without compromising the compression or the quality of the visual content. The encryption scheme integrates with the compression process at low cost and keeps the file format unchanged, with some direct operations supported. Simulations were carried out in MATLAB.
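The selective-encryption idea, encrypting only the chosen payload bytes of a NAL unit with AES-128 in CTR mode while leaving headers in the clear, can be sketched as follows. This uses the pycryptodome package; the toy NAL unit, the one-byte header split, and the per-unit nonce scheme are illustrative assumptions, not the ISMACryp specification.

```python
# Minimal sketch of selective encryption of a NAL payload with AES-128 CTR (pycryptodome).
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

key = get_random_bytes(16)          # 128-bit content key for the stream

def _ctr_cipher(nal_index: int):
    # Distinct CTR nonce per NAL unit so the keystream is never reused under one key.
    return AES.new(key, AES.MODE_CTR, nonce=nal_index.to_bytes(8, "big"))

def encrypt_selected(nal_unit: bytes, header_len: int, nal_index: int) -> bytes:
    """Keep the NAL header in the clear; encrypt only the selected payload bytes."""
    header, payload = nal_unit[:header_len], nal_unit[header_len:]
    return header + _ctr_cipher(nal_index).encrypt(payload)

def decrypt_selected(nal_unit: bytes, header_len: int, nal_index: int) -> bytes:
    header, payload = nal_unit[:header_len], nal_unit[header_len:]
    return header + _ctr_cipher(nal_index).decrypt(payload)

nal = bytes([0x65]) + b"motion vectors and residue bytes ..."   # toy NAL unit
protected = encrypt_selected(nal, header_len=1, nal_index=0)
assert decrypt_selected(protected, header_len=1, nal_index=0) == nal
```

Because CTR mode is a stream cipher, the payload length and the surrounding file format are unchanged, which is what allows the lower security levels to encrypt only motion vectors without breaking decodability of the rest.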

Keywords: AES-128, CAVLC, H.264, ISMACryp

164 Discontinuous Spacetime with Vacuum Holes as Explanation for Gravitation, Quantum Mechanics and Teleportation

Authors: Constantin Z. Leshan

Abstract:

Hole Vacuum theory is based on a discontinuous spacetime that contains vacuum holes. Vacuum holes can explain gravitation and some laws of quantum mechanics, and allow teleportation of matter. All massive bodies emit a flux of holes which curve the spacetime; if we increase the concentration of holes, it leads to length contraction and time dilation because the holes do not have the properties of extension and duration. In the limiting case when space consists of holes only, the distance between every two points is equal to zero and time stops - outside of the Universe, the extension and duration properties do not exist. For this reason, the vacuum hole is the only particle in physics capable of describing gravitation using its own properties only. All microscopic particles must 'jump' continually and 'vibrate' due to the appearance of holes (impassable microscopic 'walls' in space), and this is the cause of quantum behavior. Vacuum holes can explain entanglement, non-locality, the wave properties of matter, tunneling, the uncertainty principle and so on. Particles do not have trajectories because spacetime is discontinuous and has impassable microscopic 'walls'; simple mechanical motion is impossible at small-scale distances, and it is impossible to 'trace' a straight line in the discontinuous spacetime because it contains the impassable holes. Spacetime 'boils' continually due to the appearance of the vacuum holes. For teleportation to be possible, we must send a body outside of the Universe by enveloping it with a closed surface consisting of vacuum holes. Since a material body cannot exist outside of the Universe, it reappears instantaneously at a random point of the Universe. Since a body disappears in one volume and reappears in another random volume without traversing the physical space between them, such a transportation method can be called teleportation (or Hole Teleportation). It is shown that Hole Teleportation does not violate causality and special relativity due to its random nature and other properties. Although Hole Teleportation has a random nature, it can be used for the colonization of extrasolar planets with the help of a method called 'random jumps': after a large number of random teleportation jumps, there is a probability that the spaceship may appear near a habitable planet. We can create vacuum holes experimentally using the method proposed by Descartes: we must remove a body from a vessel without permitting another body to occupy this volume.

Keywords: Border of the universe, causality violation, perfect isolation, quantum jumps.

163 Machine Learning Techniques in Bank Credit Analysis

Authors: Fernanda M. Assef, Maria Teresinha A. Steiner

Abstract:

The aim of this paper is to compare and discuss better classifier algorithm options for credit risk assessment by applying different machine learning techniques. Using records from a Brazilian financial institution, this study uses a database of 5,432 companies that are clients of the bank, where 2,600 clients are classified as non-defaulters, 1,551 are classified as defaulters and 1,281 are temporarily defaulters, meaning that the clients are overdue on their payments for up to 180 days. For each case, a total of 15 attributes was considered for a one-against-all assessment using four different techniques: Artificial Neural Networks Multilayer Perceptron (ANN-MLP), Artificial Neural Networks Radial Basis Functions (ANN-RBF), Logistic Regression (LR) and, finally, Support Vector Machines (SVM). For each method, different parameters were analyzed in order to obtain different results, and the best of each technique was then compared. Initially, the data were coded in thermometer code (for numerical attributes) or dummy coding (for nominal attributes). The methods were then evaluated for each parameter, and the best result of each technique was compared in terms of accuracy, false positives, false negatives, true positives and true negatives. This comparison showed that the best method, in terms of accuracy, was ANN-RBF (79.20% for non-defaulter classification, 97.74% for defaulters and 75.37% for the temporarily defaulter classification). However, the best accuracy does not always represent the best technique. For instance, in the classification of temporarily defaulters, this technique was surpassed in terms of false positives by SVM, which had the lowest rate (0.07%) of false positive classifications. All these intrinsic details are discussed considering the results found, and an overview of what was presented is shown in the conclusion of this study.
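A one-against-all comparison of this kind can be reproduced in outline with scikit-learn, as sketched below on synthetic data of the same shape. The bank records are not available here, the ANN-RBF variant has no direct scikit-learn equivalent and is omitted, and all hyperparameters are illustrative.

```python
# Minimal sketch of a one-vs-rest comparison of credit classifiers on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

# Three classes: 0 = non-defaulter, 1 = defaulter, 2 = temporarily defaulter.
X, y = make_classification(n_samples=5432, n_features=15, n_informative=8,
                           n_classes=3, weights=[0.48, 0.29, 0.23], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf"),
    "ANN-MLP": MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=0),
}
for name, model in models.items():
    clf = make_pipeline(StandardScaler(), OneVsRestClassifier(model))
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(name, "accuracy:", round(accuracy_score(y_te, pred), 3))
    print(confusion_matrix(y_te, pred))  # rows: true class, columns: predicted class
```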

Keywords: Artificial Neural Networks, ANNs, classifier algorithms, credit risk assessment, logistic regression, machine learning, support vector machines.

162 CBIR Using Multi-Resolution Transform for Brain Tumour Detection and Stages Identification

Authors: H. Benjamin Fredrick David, R. Balasubramanian, A. Anbarasa Pandian

Abstract:

Image retrieval is one of the most widely used techniques in today's digital world. CBIR, expanded as Content-Based Image Retrieval, is an image processing technique that identifies relevant images and retrieves them based on patterns extracted from the digital images. In this paper, two research works are presented using CBIR. The first work provides an automated and interactive approach to the analysis of CBIR techniques. CBIR works on the principle of supervised machine learning, which involves feature selection followed by training and testing phases applied to a classifier in order to perform prediction. For feature extraction, image transforms such as the Contourlet, Ridgelet and Shearlet are utilized to retrieve texture features from the images. The extracted features are used to train and build a classifier using classification algorithms such as Naïve Bayes, K-Nearest Neighbour and multi-class Support Vector Machine. Further, the testing phase performs prediction on a new input image using the trained classifier and labels it with one of four classes, namely 1) normal brain, 2) benign tumour, 3) malignant tumour and 4) severe tumour. The second research work includes developing a tool for tumour stage identification using the best feature extraction method and classifier identified in the first work. Finally, the tool is used to predict the tumour stage and provide suggestions based on the stage identified by the system. This paper presents these two approaches as a contribution to the medical field, giving better retrieval performance and enabling tumour stage identification.

Keywords: Brain tumour detection, content based image retrieval, classification of tumours, image retrieval.

161 Mechanical Behavior of Recycled Mortars Manufactured from Moisture Correction Using the Halogen Light Thermogravimetric Balance as an Alternative to the Traditional ASTM C 128 Method

Authors: Diana Gómez-Cano, J. C. Ochoa-Botero, Roberto Bernal Correa, Yhan Paul Arias

Abstract:

To obtain high mechanical performance, the fresh conditions of a mortar are decisive. Measuring the absorption of the aggregates used in mortar mixes is a fundamental requirement for the proper design of the mixes prior to their placement on construction sites. In this sense, absorption is a determining factor in the design of a mix because it conditions the amount of water, which in turn affects the water/cement ratio and the final porosity of the mortar. Thus, this work focuses on the mechanical behavior of recycled mortars manufactured with moisture correction using the halogen light thermogravimetric balance (TBHL) technique, in comparison with the traditional ASTM C 128 international standard method. The TBHL technique is advantageous in terms of reduced consumption of resources such as materials, energy and time. The results show that, in contrast to the ASTM C 128 method, the alternative TBHL technique allows obtaining higher precision in the absorption values of recycled aggregates, which is reflected not only in a more efficient process in terms of sustainability in the characterization of construction materials, but also in an effect on the mechanical performance of recycled mortars.

Keywords: Alternative raw materials, halogen light, recycled mortar, resources optimization, water absorption.

160 Integrated Wastewater Reuse Project of the Faculty of Sciences Ain Chock, Morocco

Authors: Nihad Chakri, Btissam El Amrani, Faouzi Berrada, Fouad Amraoui

Abstract:

In Morocco, water scarcity requires the exploitation of non-conventional resources. Rural areas are under-equipped with sanitation infrastructure, unlike urban areas. Decentralized and low-cost solutions could improve the quality of life of the population and the environment. In this context, the Faculty of Sciences Ain Chock (FSAC) has undertaken an integrated project to treat part of its wastewater using a decentralized compact system. The project will propose alternative solutions that are inexpensive and adapted to the context of peri-urban and rural areas in order to treat the wastewater generated and to use it for irrigation, watering and cleaning. For this purpose, several tests were carried out in the laboratory in order to develop a liquid waste treatment system optimized for local conditions. Based on the results obtained at laboratory scale of the different proposed scenarios, we designed and implemented a prototype of a mini wastewater treatment plant for the faculty. In this article, we will outline the steps of dimensioning, construction and monitoring of the mini-station in our faculty.

Keywords: Wastewater, purification, response surface methodology optimization, vertical filter, Moving Bed Biofilm Reactor (MBBR) process, sizing, prototype, Faculty of Sciences Ain Chock, decentralized approach, mini wastewater treatment plant, reuse of treated wastewater, irrigation, sustainable development.

159 Game-Tree Simplification by Pattern Matching and Its Acceleration Approach using an FPGA

Authors: Suguru Ochiai, Toru Yabuki, Yoshiki Yamaguchi, Yuetsu Kodama

Abstract:

In this paper, we propose a Connect6 solver which adopts a hybrid approach based on a tree-search algorithm and image processing techniques. The solver must deal with complicated computation and provide high performance in order to make real-time decisions. The proposed approach enables the solver to be implemented on a single Spartan-6 XC6SLX45 FPGA produced by Xilinx without using any external devices. The compact implementation is achieved through image processing techniques that optimize a tree-search algorithm for the Connect6 game. Tree search is widely used in computer games, and an optimal search finds the best move in every turn of a computer game. Thus, many tree-search algorithms, such as the Minimax algorithm, and artificial intelligence approaches have been proposed in this field. However, there is one fundamental problem in this area: the computation time increases rapidly in response to the growth of the game tree. In hardware, the larger the game tree is, the bigger the circuit size becomes, because of its highly parallel computation characteristics. This paper therefore aims to reduce the size of a Connect6 game tree using image processing techniques and the position symmetry property of the game. The proposed solver is composed of four computational modules: a two-dimensional checkmate strategy checker, a template matching module, a skilful-line predictor, and a next-move selector. These modules work together in selecting next moves from several candidates, and their total circuit size is small. The details of the hardware design for an FPGA implementation are described, and the performance of this design is also shown in this paper.

Keywords: Connect6, pattern matching, game-tree reduction, hardware direct computation

158 Comparison of Two Maintenance Policies for a Two-Unit Series System Considering General Repair

Authors: Seyedvahid Najafi, Viliam Makis

Abstract:

In recent years, maintenance optimization has attracted special attention due to the growth of industrial systems complexity. Maintenance costs are high for many systems, and preventive maintenance is effective when it increases operations' reliability and safety at a reduced cost. The novelty of this research is to consider general repair in the modeling of multi-unit series systems and solve the maintenance problem for such systems using the semi-Markov decision process (SMDP) framework. We propose an opportunistic maintenance policy for a series system composed of two main units. Unit 1, which is more expensive than unit 2, is subjected to condition monitoring, and its deterioration is modeled using a gamma process. Unit 1 hazard rate is estimated by the proportional hazards model (PHM), and two hazard rate control limits are considered as the thresholds of maintenance interventions for unit 1. Maintenance is performed on unit 2, considering an age control limit. The objective is to find the optimal control limits and minimize the long-run expected average cost per unit time. The proposed algorithm is applied to a numerical example to compare the effectiveness of the proposed policy (policy Ⅰ) with policy Ⅱ, which is similar to policy Ⅰ, but instead of general repair, replacement is performed. Results show that policy Ⅰ leads to lower average cost compared with policy Ⅱ. 
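The degradation side of this model, a gamma process for unit 1 with maintenance triggered at a control limit, can be sketched as follows. The shape and scale parameters, time step, and control limit are illustrative assumptions; the hazard-rate thresholds, the PHM, and the SMDP optimization are not reproduced here.

```python
# Minimal sketch: stationary gamma-process degradation with a maintenance control limit.
import numpy as np

rng = np.random.default_rng(42)

def simulate_gamma_degradation(horizon, dt=1.0, shape_rate=0.6, scale=1.2):
    """Cumulative degradation with stationary, independent gamma-distributed increments."""
    increments = rng.gamma(shape_rate * dt, scale, size=horizon)
    return np.cumsum(increments)

def first_crossing(path, control_limit):
    """Decision epoch at which preventive maintenance would be triggered."""
    above = np.nonzero(path >= control_limit)[0]
    return int(above[0]) if above.size else None

path = simulate_gamma_degradation(horizon=60)
print("maintenance triggered at epoch:", first_crossing(path, control_limit=25.0))
```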

Keywords: Condition-based maintenance, proportional hazards model, semi-Markov decision process, two-unit series systems.

157 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring

Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti

Abstract:

Autonomous structural health monitoring (SHM) of many structures and bridges became a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data and compare different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of the extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by time-domain filtering (tracking). The fundamental frequencies extracted are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (intelligent multiplexer) that tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., entropy, variance, kurtosis), and feature extraction (auto-associative neural network (ANN)) that combine the fundamental frequencies to extract new damage sensitive features in a low dimensional feature space. Finally, one-class classification (OCC) algorithms perform anomaly detection, trained with standard condition points, and tested with normal and anomaly ones. In particular, principal component analysis (PCA), kernel principal component analysis (KPCA), and autoassociative neural network (ANN) are presented and their performance are compared. It is also shown that, by evaluating the correct features, the anomaly can be detected with accuracy and an F1 score greater than 95%.
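The last two blocks of this pipeline, dimensionality reduction followed by one-class classification trained on standard-condition points only, can be sketched with scikit-learn. The synthetic frequency tracks below stand in for the Z-24 data, and only the PCA variant with a one-class SVM is shown; the ANN and KPCA variants and the tracking stage are omitted.

```python
# Minimal sketch: PCA dimensionality reduction + one-class classification on
# synthetic fundamental-frequency tracks (illustrative values, not Z-24 data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Four tracked fundamental frequencies (Hz); the anomaly is a small downward shift.
normal = rng.normal([3.9, 5.0, 9.8, 10.3], 0.05, size=(500, 4))
damaged = rng.normal([3.7, 4.8, 9.5, 10.0], 0.05, size=(100, 4))

model = make_pipeline(StandardScaler(), PCA(n_components=2), OneClassSVM(nu=0.05))
model.fit(normal[:400])                      # train on standard-condition points only

pred_normal = model.predict(normal[400:])    # +1 = inlier, -1 = anomaly
pred_damaged = model.predict(damaged)
print("false alarm rate:", float(np.mean(pred_normal == -1)))
print("detection rate:", float(np.mean(pred_damaged == -1)))
```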

Keywords: Anomaly detection, dimensionality reduction, frequencies selection, modal analysis, neural network, structural health monitoring, vibration measurement.

156 A Multi-Modal Virtual Walkthrough of the Virtual Past and Present Based on Panoramic View, Crowd Simulation and Acoustic Heritage on Mobile Platform

Authors: Lim Chen Kim, Tan Kian Lam, Chan Yi Chee

Abstract:

This research presents a multi-modal simulation of the reconstruction of the past and the construction of the present in digital cultural heritage on a mobile platform. To bring out the present life, the virtual environment is generated through a proposed scheme for rapid and efficient construction of a 360° panoramic view. Then, an acoustic heritage model and a crowd model are presented and incorporated into the 360° panoramic view. For the reconstruction of past life, the crowd is simulated and rendered in an old trading port. The keystone of this research is a virtual walkthrough that shows the virtual present life in 2D and the virtual past life in 3D, both in an environment of virtual heritage sites in George Town, through a mobile device. Firstly, the 2D crowd is modelled and simulated using OpenGL ES 1.1 on the mobile platform. The 2D crowd is used to portray the present life in a 360° panoramic view of a virtual heritage environment based on an extension of Newtonian laws. Secondly, the 2D crowd is animated and rendered into 3D with improved variety and incorporated into the virtual past life using the Unity3D game engine. The behaviours of the 3D models are then simulated based on an enhancement of the classical Boid algorithm. Finally, a demonstration system is developed and integrated with the models, techniques and algorithms of this research. The virtual walkthrough is demonstrated to a group of respondents and is evaluated through user-centred evaluation by navigating around the demonstration system. The results of the evaluation, based on questionnaires, show that the presented virtual walkthrough has been successfully deployed through a multi-modal simulation, and such a virtual walkthrough would be particularly useful in virtual tour and virtual museum applications.

Keywords: Boid algorithm, crowd simulation, mobile platform, Newtonian laws, virtual heritage.

155 Effects of Process Parameters on the Yield of Oil from Coconut Fruit

Authors: Ndidi F. Amulu, Godian O. Mbah, Maxwel I. Onyiah, Callistus N. Ude

Abstract:

The properties of coconut (Cocos nucifera) and its oil were analyzed in this work using standard analytical techniques. The analyses carried out include the proximate composition of the fruit, extraction of oil from the fruit using different process parameters, and physicochemical analysis of the extracted oil. The results showed the moisture, crude lipid, crude protein, ash and carbohydrate contents of the coconut as 7.59%, 55.15%, 5.65%, 7.35% and 19.51%, respectively. The oil from the coconut fruit was an odourless, yellowish liquid at room temperature (30 °C). The treatment combinations used (leaching time, leaching temperature and solute:solvent ratio) showed significant differences (P < 0.05) in the yield of oil from coconut flour. The oil yield ranged between 36.25% and 49.83%. Lipid indices of the coconut oil indicated the acid value (AV) as 10.05 NaOH/g of oil, free fatty acid (FFA) as 5.03%, saponification value (SV) as 183.26 mg KOH/g of oil, iodine value (IV) as 81.00 I2/g of oil, peroxide value (PV) as 5.00 mL/g of oil and viscosity (V) as 0.002. The standard statistical package Minitab version 16.0 was used for the regression analysis and the analysis of variance (ANOVA). The same statistical software was also used to generate various plots such as single-effect plots, interaction-effect plots and contour plots. The response, the yield of oil from the coconut flour, was used to develop a mathematical model that correlates the yield to the process variables studied. The conditions that gave the highest yield of coconut oil were a leaching time of 2 h, a leaching temperature of 50 °C and a solute/solvent ratio of 0.05 g/mL.

Keywords: Coconut, oil-extraction, optimization, physicochemical, proximate.

154 Modeling and Analysis of Adaptive Buffer Sharing Scheme for Consecutive Packet Loss Reduction in Broadband Networks

Authors: Sakshi Kausha, R.K Sharma

Abstract:

High-speed networks provide real-time variable bit rate services with diversified traffic flow characteristics and quality requirements. Variable bit rate traffic has stringent delay and packet loss requirements. The burstiness of the correlated traffic makes dynamic buffer management highly desirable to satisfy the Quality of Service (QoS) requirements. This paper presents an algorithm for optimization of an adaptive buffer allocation scheme based on the loss of consecutive packets in the data stream and the buffer occupancy level. The buffer is designed to allow the input traffic to be partitioned into different priority classes, and it controls the threshold dynamically based on the input traffic behavior. The algorithm allows an input packet to enter the buffer if the occupancy level is less than the threshold value for that packet's priority. The threshold is varied dynamically at runtime based on the packet loss behavior. The simulation is run for two priority classes of input traffic: real-time and non-real-time. The simulation results show that Adaptive Partial Buffer Sharing (ADPBS) performs better than Static Partial Buffer Sharing (SPBS) and a First In First Out (FIFO) queue under the same traffic conditions.
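The admission rule described above, low-priority packets enter only while occupancy is below a threshold, while the threshold itself adapts to loss behavior, can be sketched as follows. The specific adaptation rule (lowering the non-real-time threshold after consecutive real-time losses) and all constants are illustrative simplifications of the ADPBS scheme.

```python
# Minimal sketch of threshold-based admission in an adaptive partial buffer sharing queue.
from collections import deque

class AdaptiveBuffer:
    def __init__(self, capacity=50, low_threshold=30):
        self.capacity = capacity
        self.low_threshold = low_threshold       # admission cut-off for non-real-time traffic
        self.queue = deque()
        self.consecutive_high_losses = 0

    def admit(self, priority: str) -> bool:
        occupancy = len(self.queue)
        if priority == "realtime":
            if occupancy < self.capacity:
                self.queue.append(priority)
                self.consecutive_high_losses = 0
                return True
            self.consecutive_high_losses += 1
            # Adapt: reserve more room for real-time traffic after consecutive losses.
            if self.consecutive_high_losses >= 3 and self.low_threshold > 5:
                self.low_threshold -= 5
            return False
        # Non-real-time packets only enter while occupancy is below the threshold.
        if occupancy < self.low_threshold:
            self.queue.append(priority)
            return True
        return False

    def serve(self):
        if self.queue:
            self.queue.popleft()
```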

Keywords: Buffer Management, Consecutive packet loss, Quality-of-Service, Priority based packet discarding, partial buffersharing.

153 Geostatistical Analysis and Mapping of Groundlevel Ozone in a Medium Sized Urban Area

Authors: F. J. Moral García, P. Valiente González, F. López Rodríguez

Abstract:

Ground-level tropospheric ozone is one of the air pollutants of most concern. It is mainly produced by photochemical processes involving nitrogen oxides and volatile organic compounds in the lower parts of the atmosphere. Ozone levels become particularly high in regions close to high ozone precursor emissions and during summer, when stagnant meteorological conditions with high insolation and high temperatures are common. In this work, some results are shown from a study of urban ozone distribution patterns in the city of Badajoz, the largest and most industrialized city in the Extremadura region (southwest Spain). Fourteen sampling campaigns, at least one per month, were carried out to measure ambient air ozone concentrations with an automatic portable analyzer, during periods selected according to conditions favourable to ozone production. Later, to evaluate the ozone distribution across the city, the measured ozone data were analyzed using geostatistical techniques. First, the exploratory analysis revealed that the data were normally distributed, which is a desirable property for the subsequent stages of the geostatistical study. Second, during the structural analysis, theoretical spherical models provided the best fit for all monthly experimental variograms. The parameters of these variograms (sill, range and nugget) revealed that the maximum distance of spatial dependence is between 302 m and 790 m and that the variable, air ozone concentration, is not evenly distributed over short distances. Finally, predictive ozone maps were derived for all points of the experimental study area by means of geostatistical algorithms (kriging). High prediction accuracy was obtained in all cases, as cross-validation showed. Useful information for hazard assessment was also provided by probability maps produced from the kriging interpolation and the kriging standard deviation.
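The spherical variogram model that fitted the monthly experimental variograms is simple to evaluate, as sketched below. The nugget, sill, and range values are illustrative (the range is chosen within the 302-790 m interval reported above), not the study's fitted parameters, and the kriging system itself is not reproduced.

```python
# Minimal sketch of the spherical variogram model used as input to kriging.
import numpy as np

def spherical_variogram(h, nugget, sill, vrange):
    """Spherical model: rises from the nugget and levels off at the sill beyond the range."""
    h = np.asarray(h, dtype=float)
    gamma = nugget + (sill - nugget) * (1.5 * h / vrange - 0.5 * (h / vrange) ** 3)
    return np.where(h >= vrange, sill, np.where(h == 0, 0.0, gamma))

lags = np.linspace(0, 1000, 11)   # separation distance (m)
print(np.round(spherical_variogram(lags, nugget=5.0, sill=60.0, vrange=600.0), 2))
```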

Keywords: Kriging, map, tropospheric ozone, variogram.

152 Analytical Slope Stability Analysis Based on the Statistical Characterization of Soil Shear Strength

Authors: Bernardo C. P. Albuquerque, Darym J. F. Campos

Abstract:

Increasing our ability to solve complex engineering problems is directly related to the processing capacity of computers. By means of such equipment, one is able to run numerical algorithms quickly and accurately. Besides the increasing interest in numerical simulations, probabilistic approaches are also of great importance. In this way, statistical tools have shown their relevance to the modelling of practical engineering problems. In general, statistical approaches to such problems consider that the random variables involved follow a normal distribution. This assumption tends to provide incorrect results when skew data are present, since normal distributions are symmetric about their means. Thus, in order to visualize and quantify this aspect, 9 statistical distributions (symmetric and skew) have been considered to model a hypothetical slope stability problem. The data modeled are the friction angle of a superficial soil in Brasilia, Brazil. Despite its apparent universality, the normal distribution did not qualify as the best fit. In the present effort, data obtained in consolidated-drained triaxial tests and saturated direct shear tests have been modeled and used to analytically derive the probability density function (PDF) of the safety factor of a hypothetical slope based on the Mohr-Coulomb rupture criterion. Therefore, based on this analysis, it is possible to explicitly derive the failure probability considering the friction angle as a random variable. Furthermore, it is possible to compare the stability analysis when the friction angle is modelled as a Dagum distribution (the distribution that presented the best fit to the histogram) and as a normal distribution. This comparison leads to relevant differences when analyzed in light of risk management.
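The idea of treating the friction angle as a random variable and deriving a failure probability can be illustrated numerically, as sketched below for an infinite slope with Mohr-Coulomb strength. The geometry, unit weight, cohesion, and the use of a lognormal distribution as a stand-in for the skew (Dagum) fit are all assumptions for illustration; the paper derives the PDF of the safety factor analytically rather than by Monte Carlo.

```python
# Minimal sketch: Monte Carlo failure probability with a random friction angle
# for an infinite slope under the Mohr-Coulomb criterion (illustrative parameters).
import numpy as np

rng = np.random.default_rng(7)

def safety_factor(phi_deg, slope_deg=30.0, cohesion=5.0, gamma=18.0, depth=2.0):
    """Infinite-slope factor of safety with Mohr-Coulomb strength (c, phi)."""
    beta = np.radians(slope_deg)
    phi = np.radians(phi_deg)
    normal = gamma * depth * np.cos(beta) ** 2          # normal stress on the slip plane
    driving = gamma * depth * np.sin(beta) * np.cos(beta)
    return (cohesion + normal * np.tan(phi)) / driving

# Skewed friction-angle model (lognormal used here in place of the Dagum fit).
phi_samples = rng.lognormal(mean=np.log(28.0), sigma=0.12, size=100_000)
fs = safety_factor(phi_samples)
print("P(failure) = P(FS < 1):", float(np.mean(fs < 1.0)))
```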

Keywords: Statistical slope stability analysis, Skew distributions, Probability of failure, Functions of random variables.
