Search results for: and optimization of membership functions.
220 Advanced Stochastic Models for Partially Developed Speckle
Authors: Jihad S. Daba (Jean-Pierre Dubois), Philip Jreije
Abstract:
Speckled images arise when coherent microwave, optical, and acoustic imaging techniques are used to image an object, surface or scene. Examples of coherent imaging systems include synthetic aperture radar, laser imaging systems, imaging sonar systems, and medical ultrasound systems. Speckle noise is a form of object or target induced noise that results when the surface of the object is Rayleigh rough compared to the wavelength of the illuminating radiation. Detection and estimation in images corrupted by speckle noise is complicated by the nature of the noise and is not as straightforward as detection and estimation in additive noise. In this work, we derive stochastic models for speckle noise, with an emphasis on speckle as it arises in medical ultrasound images. The motivation for this work is the problem of segmentation and tissue classification using ultrasound imaging. Modeling of speckle in this context involves a partially developed speckle model where an underlying Poisson point process modulates a Gram-Charlier series of Laguerre weighted exponential functions, resulting in a doubly stochastic filtered Poisson point process. The statistical distribution of partially developed speckle is derived in a closed canonical form. It is observed that as the mean number of scatterers in a resolution cell is increased, the probability density function approaches an exponential distribution. This is consistent with fully developed speckle noise as demonstrated by the Central Limit theorem.
Keywords: Doubly stochastic filtered process, Poisson point process, segmentation, speckle, ultrasound
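As a quick illustration of the limiting behavior the abstract describes (not the authors' derivation), the following Python sketch simulates a filtered-Poisson scatterer model and checks how closely the intensity statistics approach an exponential distribution as the mean number of scatterers per resolution cell grows; the unit scatterer amplitudes and cell count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def speckle_intensity(mean_scatterers, n_cells=20_000):
    """Simulate one intensity sample per resolution cell.

    Each cell sums N complex unit-amplitude scatterers with random
    phases, where N ~ Poisson(mean_scatterers)."""
    n = rng.poisson(mean_scatterers, size=n_cells)
    out = np.empty(n_cells)
    for i, k in enumerate(n):
        phases = rng.uniform(0.0, 2.0 * np.pi, size=k)
        out[i] = np.abs(np.sum(np.exp(1j * phases))) ** 2
    return out

for mu in (2, 10, 100):
    I = speckle_intensity(mu)
    m = I.mean()
    # For an exponential distribution, std/mean = 1 and
    # P(I > mean) = exp(-1) ~ 0.368; both are checked here.
    print(f"mean N={mu:>3}: std/mean={I.std()/m:.3f}, "
          f"P(I>mean)={np.mean(I > m):.3f} (exponential: 1.000, 0.368)")
```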
219 Post Pandemic Mobility Analysis through Indexing and Sharding in MongoDB: Performance Optimization and Insights
Authors: Karan Vishavjit, Aakash Lakra, Shafaq Khan
Abstract:
The COVID-19 pandemic has pushed healthcare professionals to use big data analytics as a vital tool for tracking and evaluating the effects of contagious viruses. To effectively analyse huge datasets, efficient NoSQL databases are needed. This research integrates several datasets, which makes it possible to analyse post-COVID-19 health and well-being outcomes and to evaluate the effectiveness of government efforts during the pandemic, while cutting down on query processing time and creating predictive visual artifacts. We recommend applying sharding and indexing technologies to improve query effectiveness and scalability as the dataset expands. Spreading the datasets into a sharded database and indexing the individual shards enables effective data retrieval and analysis. The key goal is the analysis of connections between governmental activities, poverty levels, and post-pandemic wellbeing. We evaluate the effectiveness of governmental initiatives to improve health and lower poverty levels by utilising advanced data analysis and visualisations. The findings provide relevant data that support the advancement of UN sustainable development goals, future pandemic preparation, and evidence-based decision-making. This study shows how big data and NoSQL databases may be used to address problems in global health.
Keywords: COVID-19, big data, data analysis, indexing, NoSQL, sharding, scalability, poverty.
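For readers unfamiliar with the sharding and indexing operations the abstract refers to, a minimal PyMongo sketch is given below; the database, collection, and shard-key names are hypothetical, and a running sharded cluster (a mongos router) is assumed.

```python
from pymongo import ASCENDING, MongoClient

# Connect to a mongos router of a sharded cluster (hypothetical URI).
client = MongoClient("mongodb://localhost:27017")

# Enable sharding on a (hypothetical) database, then shard the mobility
# collection on a hashed country key so documents spread across shards.
client.admin.command("enableSharding", "pandemic")
client.admin.command(
    "shardCollection", "pandemic.mobility",
    key={"country": "hashed"},
)

# Index the fields used by the analysis queries on each shard.
coll = client["pandemic"]["mobility"]
coll.create_index([("country", ASCENDING), ("date", ASCENDING)])

# Example query that can now be routed and answered efficiently.
for doc in coll.find({"country": "CA"}).sort("date", ASCENDING).limit(5):
    print(doc)
```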
218 Profit Optimization for Solar Plant Electricity Production
Authors: Fl. Loury, P. Sablonière
Abstract:
In this paper, a stochastic scenario-based model predictive control scheme applied to molten salt storage systems in a concentrated solar tower power plant is presented. The main goal of this study is to build a tool for analyzing current and expected future resources in order to evaluate the weekly power to be advertised on the electricity secondary market. This tool will allow the plant operator to maximize profits while hedging against the impact on the system of stochastic variables such as resource or sunlight shortage.
Solving the problem first requires a mixed logic dynamic model of the plant. The two stochastic variables, namely the incoming sunlight energy and the electricity demand from the secondary market, are modeled by least squares regression. Robustness is achieved by drawing a certain number of random variable realizations and applying the most restrictive one to the system. This scenario-based control technique provides the plant operator with a confidence interval containing a given percentage of possible stochastic variable realizations, in such a way that robust control is always achieved within its bounds. The results obtained from many trajectory simulations show the existence of a 'reliable' interval, which experimentally confirms the algorithm's robustness.
Keywords: Molten Salt Storage System, Concentrated Solar Tower Power Plant, Robust Stochastic Model Predictive Control.
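A minimal sketch of the scenario-drawing step described above (not the authors' implementation): draw many realizations of the uncertain sunlight resource and apply the most restrictive one as the constraint for the advertised power. The Gaussian resource model, forecast values, and plant efficiency are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical weekly sunlight-energy forecast (MWh/day) and its
# uncertainty; in the paper these come from least-squares regression.
forecast = np.array([5.2, 5.0, 4.1, 3.8, 4.6, 5.5, 5.3])
sigma = 0.6

# Scenario approach: draw many realizations and apply the most
# restrictive one (here: the lowest available energy each day).
n_scenarios = 500
scenarios = rng.normal(forecast, sigma, size=(n_scenarios, forecast.size))
worst_case = scenarios.min(axis=0)

# The power advertised on the secondary market is then bounded by the
# worst-case resource, giving robustness within the sampled scenarios.
efficiency = 0.4          # assumed plant conversion efficiency
sellable = efficiency * worst_case
print("worst-case resource:", np.round(worst_case, 2))
print("advertisable energy:", np.round(sellable, 2))
```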
217 Numerical Simulations of Acoustic Imaging in Hydrodynamic Tunnel with Model Adaptation and Boundary Layer Noise Reduction
Authors: Sylvain Amailland, Jean-Hugh Thomas, Charles Pézerat, Romuald Boucheron, Jean-Claude Pascal
Abstract:
The noise requirements for naval and research vessels have seen an increasing demand for quieter ships in order to fulfil current regulations and to reduce the effects on marine life. Hence, new methods dedicated to the characterization of propeller noise, which is the main source of noise in the far field, are needed. The study of cavitating propellers in a closed section is useful for analyzing hydrodynamic performance but can involve significant difficulties for hydroacoustic study, especially due to reverberation and boundary layer noise in the tunnel. The aim of this paper is to present a numerical methodology for the identification of hydroacoustic sources on marine propellers using hydrophone arrays in a large hydrodynamic tunnel. The main difficulties are linked to the reverberation of the tunnel and the boundary layer noise, which strongly reduce the signal-to-noise ratio. It is proposed here to estimate the reflection coefficients using an inverse method and reference transfer functions measured in the tunnel, which reduces the uncertainties of the propagation model used in the inverse problem. In order to reduce the boundary layer noise, a cleaning algorithm is presented that takes advantage of the low-rank and sparse structure of the cross-spectrum matrices of the acoustic and boundary layer noise. This approach makes it possible to recover the acoustic signal even well below the boundary layer noise. The improvement brought by this method is visible on acoustic maps resulting from beamforming and DAMAS algorithms.
Keywords: Acoustic imaging, boundary layer noise denoising, inverse problems, model adaptation.
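The "low rank plus sparse" cleaning idea can be sketched with a simple truncated-SVD separation. This is only an illustrative stand-in for the authors' algorithm: synthetic matrices replace measured cross-spectra, and a hard rank truncation replaces the iterative sparsity-penalized decomposition a real method (robust PCA) would use.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic cross-spectral matrix: a rank-2 "acoustic" part plus a
# sparse "boundary layer" part (stand-ins for the measured data).
m = 40
A = rng.normal(size=(m, 2)) @ rng.normal(size=(2, m))     # low rank
S = np.zeros((m, m))
idx = rng.choice(m * m, size=60, replace=False)
S.flat[idx] = rng.normal(scale=5.0, size=60)              # sparse
X = A + S

# Crude separation: keep the dominant singular subspace as the
# low-rank (acoustic) estimate.
U, s, Vt = np.linalg.svd(X)
rank = 2
L_hat = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]

err = np.linalg.norm(L_hat - A) / np.linalg.norm(A)
print(f"relative error of low-rank estimate: {err:.2f}")
```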
216 Bone Mineral Density and Quality, Body Composition of Women in the Postmenopausal Period
Authors: Vladyslav Povoroznyuk, Oksana Ivanyk, Nataliia Dzerovych
Abstract:
In the diagnostics of osteoporosis, bone mineral density is considered the gold standard; however, X-ray densitometry is not an accurate indicator of osteoporotic fracture risk under all circumstances. In this regard, the search for new methods that could determine indicators not only of the mineral density but also of the bone tissue quality is a logical step for diagnostic optimization. One of these methods is the evaluation of trabecular bone quality. The aim of this study was to examine the quality and mineral density of spine bone tissue and femoral neck, and the body composition of women depending on the duration of the postmenopausal period, and to determine the correlation of body fat with indicators of bone mineral density and quality. The study examined 179 women in the premenopausal and postmenopausal periods. The patients were divided into the following groups: women in the premenopausal period and women at various stages of the postmenopausal period (early, middle, late postmenopause). A general examination and study of the above parameters were conducted with a General Electric X-ray densitometer. The results show that bone quality and mineral density probably deteriorate as the postmenopausal period advances. The ratio of total fat to lean mass is not likely to change with age. In the middle and late postmenopausal periods, the bone tissue mineral density of the spine and femoral neck increases along with total fat mass.
Keywords: Osteoporosis, bone tissue mineral density, bone quality, fat mass, lean mass, postmenopausal osteoporosis.
215 BeamGA Median: A Hybrid Heuristic Search Approach
Authors: Ghada Badr, Manar Hosny, Nuha Bintayyash, Eman Albilali, Souad Larabi Marie-Sainte
Abstract:
The median problem is widely applied to derive the most reasonable rearrangement phylogenetic tree for many species. More specifically, the problem is concerned with finding a permutation that minimizes the sum of distances between itself and a set of three signed permutations. Genomes with an equal number of genes but a different order can be represented as permutations. In this paper, an algorithm, namely BeamGA median, is proposed that combines a heuristic search approach (local beam) as an initialization step to generate a number of solutions, after which a Genetic Algorithm (GA) is applied to refine the solutions, aiming to achieve a better median with the smallest possible reversal distance from the three original permutations. In this approach, any genome rearrangement distance can be applied; in this paper, we use the reversal distance. To the best of our knowledge, the proposed approach has not been applied before for solving the median problem. Our approach considers a true biological evolution scenario by applying the concept of common intervals during the GA optimization process. This allows us to imitate true biological behavior and enhance the convergence time of the genetic approach. We were able to handle permutations with a large number of genes within an acceptable time and with the same or better accuracy as compared to existing algorithms.
Keywords: Median problem, phylogenetic tree, permutation, genetic algorithm, beam search, genome rearrangement distance.
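To make the two-stage structure concrete, here is a heavily simplified sketch: a beam search seeds a population of candidate permutations, which a GA then refines against a median objective. The toy Hamming-style distance stands in for the reversal distance used in the paper, and the common-interval machinery is omitted.

```python
import random

random.seed(3)

def distance(p, q):
    # Toy stand-in for the reversal distance used in the paper.
    return sum(a != b for a, b in zip(p, q))

def median_cost(p, targets):
    return sum(distance(p, t) for t in targets)

def neighbors(p):
    # All single-reversal neighbors of permutation p.
    for i in range(len(p)):
        for j in range(i + 2, len(p) + 1):
            yield p[:i] + p[i:j][::-1] + p[j:]

def beam_seed(targets, width=10, steps=5):
    beam = [list(t) for t in targets]
    for _ in range(steps):
        cand = {tuple(n) for p in beam for n in neighbors(p)}
        beam = [list(c) for c in
                sorted(cand, key=lambda c: median_cost(c, targets))[:width]]
    return beam

def ga_refine(pop, targets, generations=50):
    for _ in range(generations):
        # Mutation by a random reversal; keep the best individuals.
        child = random.choice(pop)
        i, j = sorted(random.sample(range(len(child) + 1), 2))
        pop.append(child[:i] + child[i:j][::-1] + child[j:])
        pop.sort(key=lambda p: median_cost(p, targets))
        pop = pop[:10]
    return pop[0]

targets = [[1, 2, 3, 4, 5], [2, 1, 3, 5, 4], [1, 3, 2, 4, 5]]
best = ga_refine(beam_seed(targets), targets)
print("median:", best, "cost:", median_cost(best, targets))
```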
214 Aerodynamic Design Optimization of High-Speed Hatchback Cars for Lucrative Commercial Applications
Authors: A. Aravind, M. Vetrivel, P. Abhimanyu, C. A. Akaash Emmanuel Raj, K. Sundararaj, V. R. S. Kumar
Abstract:
The demand for high-speed, low-budget hatchback cars with diversified options is increasing among new-generation buyers. This paper aims to augment the current speed of hatchback cars through aerodynamic drag reduction. Inverted airfoils are fitted at the bottom of the car to generate downward force, negating lift while increasing the current speed range for better road performance. The numerical simulations have been carried out using a 2D steady pressure-based k-ɛ realizable model with enhanced wall treatment. In our numerical studies, the Reynolds-averaged Navier-Stokes model and its solution code are used. The code is calibrated and validated using the exact solution of the 2D boundary layer displacement thickness at the Sanal flow choking condition for adiabatic flows. We observed through parametric analytical studies that an inverted airfoil integrated with the bottom surface at various predesigned locations of hatchback cars can improve overall aerodynamic efficiency through drag reduction, which decreases fuel consumption significantly and ensures optimum road performance at the maximum permissible speed within the framework of the manufacturer's constraints.
Keywords: Aerodynamics of commercial cars, downward force, hatchback car, inverted airfoil.
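The downward force an inverted airfoil can contribute follows the standard aerodynamic force relation F = ½ρv²C_L·A; the quick calculation below uses an assumed lift coefficient and planform area, not values from the study.

```python
# Downforce from an inverted airfoil: F = 0.5 * rho * v^2 * C_L * A.
# Coefficient and area values below are illustrative assumptions.
rho = 1.225        # air density at sea level, kg/m^3
c_l = 0.9          # assumed lift coefficient of the inverted airfoil
area = 0.35        # assumed planform area, m^2

for kmh in (80, 120, 160):
    v = kmh / 3.6                          # convert km/h to m/s
    downforce = 0.5 * rho * v**2 * c_l * area
    print(f"{kmh:>3} km/h -> {downforce:6.1f} N of downforce")
```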
213 Solving an Extended Resource Leveling Problem with Multiobjective Evolutionary Algorithms
Authors: Javier Roca, Etienne Pugnaghi, Gaëtan Libert
Abstract:
We introduce an extended resource leveling model that abstracts real-life projects that consider specific work ranges for each resource. Contrary to traditional resource leveling problems, this model considers scarce resources and multiple objectives: the minimization of the project makespan and the leveling of each resource usage over time. We formulate this model as a multiobjective optimization problem and propose a multiobjective genetic algorithm-based solver to optimize it. This solver consists of a two-stage process: a main stage where we obtain non-dominated solutions for all the objectives, and a postprocessing stage where we seek to specifically improve the resource leveling of these solutions. We propose an intelligent encoding for the solver that allows domain-specific knowledge to be included in the solving mechanism. The chosen encoding proves to be effective for solving leveling problems with scarce resources and multiple objectives. The outcomes of the proposed solver represent optimized trade-offs (alternatives) that can later be evaluated by a decision maker; this multi-solution approach represents an advantage over the traditional single-solution approach. We compare the proposed solver with state-of-the-art resource leveling methods and report competitive results.
Keywords: Intelligent problem encoding, multiobjective decision making, evolutionary computing, RCPSP, resource leveling.
212 Changes of Poultry Meat Chemical Composition, in Relationship with Lighting Schedule
Authors: P. C. Boisteanu, M. G. Usturoi, Roxana Lazar, B. V. Avarvarei
Abstract:
The paper is part of a complex research program initiated from the hypothesis that a correlation exists between pineal indolic and peptide hormones and the rhythm of somatic development, thus involving the epithalamus-epiphysis complex. In birds, the pineal gland contains a circadian oscillator that plays a main role in the temporal organization of cerebral functions. The secretion of pineal indolic hormones is characterized by a high endogenous rhythmic alternation, modulated by the light/darkness (L/D) succession and by temperature as well. The research was carried out using 100 chicken broilers of the "Ross" commercial hybrid, randomly allocated to two experimental batches: the Lc batch, reared under a 12L/12D lighting schedule, and the Lexp batch, which was photically pinealectomised through continuous exposure to light (150 lux, 24 hours, 56 days). Chemical and physical features of the meat from breast fillet and thigh muscles were studied, determining the dry matter, protein, fat, collagen and salt content, as well as the pH value. Besides the variations of meat chemical composition in relation to the lighting schedule, other parameters were studied: live weight dynamics, feed intake and degree of somatic development. The results became significant once the chickens reached 7 days of age, when variations of the studied parameters were registered, revealing that the physiological activity of the pineal gland, in relation to the lighting schedule, can be interpreted by monitoring the somatic development parameters usually studied in broiler rearing practice.
Keywords: Lighting schedule, physico-chemical characteristics of meat, pineal gland in birds.
211 Numerical Studies on Thrust Vectoring Using Shock Induced Supersonic Secondary Jet
Authors: Jerin John, Subanesh Shyam R., Aravind Kumar T. R., Naveen N., Vignesh R., Krishna Ganesh B, Sanal Kumar V. R.
Abstract:
Numerical studies have been carried out using a validated two-dimensional RNG k-epsilon turbulence model for the design optimization of a thrust vector control system using a shock-induced supersonic secondary jet. Parametric analytical studies have been carried out with various secondary jets at different divergent locations, jet interaction angles, and jet pressures. The results from the parametric studies of the case on hand reveal that a primary nozzle with a small divergence angle, with downstream injection at a distance of 2.5 times the primary nozzle throat diameter from the throat location, warrants higher efficiency over a certain range of jet pressures and jet angles. We observed that a supersonic secondary jet opposing the core flow at a jet interaction angle of 40° to the axis, far downstream of the nozzle throat, facilitates better thrust vectoring than a secondary jet injected in the same direction as the core flow at various interaction angles. We concluded that fixing the supersonic secondary jet nozzle pointing towards the throat at a suitable angle, at a distance of 2 to 4 times the primary nozzle throat diameter from the throat location, as the case may be, could facilitate better thrust vectoring for supersonic aerospace vehicles.
Keywords: Fluidic thrust vectoring, rocket steering, supersonic secondary jet location, TVC in spacecraft.
210 Six Sigma-Based Optimization of Shrinkage Accuracy in Injection Molding Processes
Authors: Sky Chou, Joseph C. Chen
Abstract:
This paper focuses on using six sigma methodologies to reach the desired shrinkage of a manufactured high-density polyethylene (HDPE) part produced by an injection molding machine. It presents a case study where the correct shrinkage is required to reduce or eliminate defects and to improve the process capability indices Cp and Cpk for an injection molding process. To improve this process and keep the product within specifications, the six sigma define, measure, analyze, improve, and control (DMAIC) approach was implemented in this study. The six sigma approach was paired with the Taguchi methodology to identify the optimized processing parameters that keep the shrinkage rate within the specifications set by our customer. An L9 orthogonal array was applied in the Taguchi experimental design, with four controllable factors and one non-controllable/noise factor. The four controllable factors identified consist of the cooling time, melt temperature, holding time, and metering stroke. The noise factor is the difference between material brand 1 and material brand 2. After the confirmation run was completed, measurements verified that the new parameter settings are optimal. With the new settings, the process capability index improved dramatically. The purpose of this study is to show that the six sigma and Taguchi methodologies can be efficiently used to determine the important factors that improve the process capability index of the injection molding process.
Keywords: Injection molding, shrinkage, six sigma, Taguchi parameter design.
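The process-capability indices mentioned above have standard definitions, Cp = (USL − LSL)/6σ and Cpk = min(USL − μ, μ − LSL)/3σ; the sketch below computes them, together with a nominal-the-best Taguchi S/N ratio, for made-up shrinkage measurements and specification limits.

```python
import numpy as np

# Made-up shrinkage measurements (%) and specification limits.
shrinkage = np.array([1.52, 1.48, 1.50, 1.47, 1.53, 1.49, 1.51, 1.50])
LSL, USL = 1.40, 1.60

mu, sigma = shrinkage.mean(), shrinkage.std(ddof=1)

# Standard process capability indices.
cp = (USL - LSL) / (6 * sigma)
cpk = min(USL - mu, mu - LSL) / (3 * sigma)

# Nominal-the-best Taguchi signal-to-noise ratio (larger is better).
sn = 10 * np.log10(mu**2 / sigma**2)

print(f"mean={mu:.3f}, sigma={sigma:.4f}")
print(f"Cp={cp:.2f}, Cpk={cpk:.2f}, S/N={sn:.1f} dB")
```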
209 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things
Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker
Abstract:
Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracy and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties and will sometimes be conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. the Dempster-Shafer and Dezert-Smarandache theories) to represent and manage uncertainty and conflict, in order to achieve fast change detection and effectively deal with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Then, mass functions are calculated and related combination rules are applied to combine the mass values among all sensors. Furthermore, we apply the method to estimate the minimum number of sensors that need to be combined, so that computational efficiency can be improved. A cumulative sum (CUSUM) test is then applied to the ratio of pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.
Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data.
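A compressed sketch of the detection chain, with assumed Gaussian pre- and post-change models: a standard CUSUM on the log-likelihood ratio stands in for the ratio of pignistic probabilities used in the paper, and the mass-function and combination-rule steps of the full evidence-theoretic pipeline are omitted.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

# Assumed pre- and post-change models; change injected at t = 300.
pre, post = norm(0.0, 1.0), norm(0.8, 1.0)
x = np.concatenate([pre.rvs(300, random_state=rng),
                    post.rvs(200, random_state=rng)])

# CUSUM recursion on the log-likelihood ratio.
h = 10.0                     # decision threshold (tunes false alarms)
g, alarm = 0.0, None
for t, xt in enumerate(x):
    llr = post.logpdf(xt) - pre.logpdf(xt)
    g = max(0.0, g + llr)    # reset at zero, accumulate evidence
    if g > h:
        alarm = t
        break

print(f"change at t=300, alarm raised at t={alarm}")
```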
208 A Model for Optimal Design of Mixed Renewable Warranty Policy for Non-Repairable Weibull Life Products under Conflict between Customer and Manufacturer Interests
Authors: Saleem Z. Ramadan
Abstract:
A model is presented to find the optimal design of a mixed renewable warranty policy for non-repairable Weibull life products. The optimal design considers the conflict of interests between the customer and the manufacturer: the customer's interests are a longer full rebate coverage period and a longer total warranty coverage period, while the manufacturer's interests are a lower warranty cost and lower risk. The design factors are the full rebate and total warranty coverage periods. Results showed that the mixed policy is better than the full rebate policy in terms of risk and total warranty coverage period in all three bathtub regions. In addition, results showed that the linear policy is better than the mixed policy in the infant mortality and constant failure regions, while the mixed policy is better than the linear policy in the ageing region of the model. Furthermore, the results showed that using a burn-in period for infant mortality products reduces warranty cost and risk.
Keywords: Reliability, mixed warranty policy, optimization, Weibull distribution.
207 A Review of the Characteristics and Optimization of Optical Properties of Zirconia Ceramics for Aesthetic Dental Restorations
Authors: R. A. Shahmiri, O. C. Standard, J. N. Hart, C. C. Sorrell
Abstract:
The ceramic yttria-stabilized tetragonal zirconia polycrystal (Y-TZP) has been used as a dental biomaterial for several decades. The strength and toughness of this material can be accounted for by its toughening mechanisms, which include transformation toughening, crack deflection, zone shielding, contact shielding, and crack bridging. Prevention of crack propagation is of critical importance in high-fatigue situations, such as those encountered in mastication and para-function. However, the poor translucence of Y-TZP in polycrystalline form is such that it may not meet aesthetic requirements, due to its white/grey appearance. To improve the optical properties of Y-TZP, more detailed study of these properties is required; in particular, precise evaluation of the refractive index, absorption coefficient, and scattering coefficient is necessary. The measurement of the optical parameters has been based on the assumption that light scattered from biological media is isotropically distributed over all angles. In fact, the optical behavior of real biological materials depends on the angular scattering of light, owing to the anisotropic nature of the materials. The purpose of the present work is to evaluate the optical properties (including color, opacity/translucence, scattering, and fluorescence) of zirconia dental ceramics and their control through modification of the chemical composition, phase composition, and surface microstructure.
Keywords: Optical properties, opacity/translucence, scattering, fluorescence, chemical composition, phase composition, surface microstructure.
206 Improving Fault Resilience and Reconstruction of Overlay Multicast Tree Using Leaving Time of Participants
Authors: Bhed Bahadur Bista
Abstract:
Network layer multicast, i.e. IP multicast, even after many years of research, development and standardization, is not deployed on a large scale due to both technical (e.g. upgrading of routers) and political (e.g. policy making and negotiation) issues. Researchers looked for alternatives and proposed application/overlay multicast, where multicast functions are handled by end hosts rather than network layer routers. Member hosts wishing to receive multicast data form a multicast delivery tree. The intermediate hosts in the tree also act as routers, i.e. they forward data to the hosts lower in the tree. Unlike IP multicast, where a router cannot leave the tree until all members below it leave, in overlay multicast any member can leave the tree at any time, thus disjoining the tree and disrupting the data dissemination. All the disrupted hosts have to rejoin the tree. This characteristic of overlay multicast makes the multicast tree unstable and causes data loss and rejoin overhead. In this paper, we propose that each node set its leaving time from the tree and send a join request to a number of nodes in the tree. The nodes in the tree will reject the request if their leaving time is earlier than that of the requesting node; otherwise, they will accept the request. The node can join at one of the accepting nodes. This makes the tree more stable, as the nodes join the tree according to their leaving time, with the earliest-leaving nodes placed at the leaves of the tree. Some intermediate nodes may not follow their leaving time and may leave earlier than declared, thus disrupting the tree. For this, we propose a proactive recovery mechanism so that disrupted nodes can immediately rejoin the tree at predetermined nodes. We have shown by simulation that there is less overhead when joining the multicast tree and that the recovery time of the disrupted nodes is much less than in previous works.
Keywords: Network layer multicast, fault resilience, IP multicast.
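The leaving-time acceptance rule lends itself to a short sketch implementing exactly the stated condition (reject a joiner whose leaving time is later than the node's own); the node structure and timings below are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    leaving_time: float          # time the node plans to leave the tree
    children: list = field(default_factory=list)

    def handle_join(self, joiner: "Node") -> bool:
        # Accept only nodes that leave no later than this node does,
        # so earlier-leaving members end up lower in the tree.
        if joiner.leaving_time > self.leaving_time:
            return False
        self.children.append(joiner)
        return True

root = Node("root", leaving_time=100.0)
candidates = [Node("a", 120.0), Node("b", 60.0), Node("c", 95.0)]

for n in candidates:
    accepted = root.handle_join(n)
    print(f"{n.name} (leaves at {n.leaving_time}): "
          f"{'accepted' if accepted else 'rejected'}")
```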
205 Improved Segmentation of Speckled Images Using an Arithmetic-to-Geometric Mean Ratio Kernel
Abstract:
In this work, we improve a previously developed segmentation scheme aimed at extracting edge information from speckled images using a maximum likelihood edge detector. The scheme was based on finding a threshold for the probability density function of a new kernel defined as the arithmetic mean-to-geometric mean ratio field over a circular neighborhood set and, in a general context, is founded on a likelihood random field model (LRFM). The segmentation algorithm was applied to discriminated speckle areas obtained using simple elliptic discriminant functions based on measures of the signal-to-noise ratio with fractional order moments. A rigorous stochastic analysis was used to derive an exact expression for the cumulative distribution function of the random field. Based on this, an accurate probability of error was derived and the performance of the scheme was analysed. The improved segmentation scheme performed well for both simulated and real images and showed superior results to those previously obtained using the original LRFM scheme and standard edge detection methods. In particular, the false alarm probability was markedly lower than that of the original LRFM method, with oversegmentation artifacts virtually eliminated. The importance of this work lies in the development of a stochastic-based segmentation allowing an accurate quantification of the probability of false detection. Non-visual quantification and misclassification in medical ultrasound speckled images are relatively new and of interest to clinicians.
Keywords: Discriminant function, false alarm, segmentation, signal-to-noise ratio, skewness, speckle.
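The arithmetic-to-geometric mean ratio kernel can be computed with moving-average filters, since the geometric mean is the exponential of the mean of the logs. The sketch below applies it to synthetic speckle; a square window stands in for the paper's circular neighborhood, and the window size and scene are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(5)

# Synthetic speckled image: exponential intensity speckle modulating
# a two-region scene (background 1.0, bright square 4.0).
scene = np.ones((128, 128))
scene[40:90, 40:90] = 4.0
img = scene * rng.exponential(1.0, scene.shape)

# Arithmetic-to-geometric mean ratio over a local window.
w = 7
am = uniform_filter(img, size=w)
gm = np.exp(uniform_filter(np.log(img + 1e-12), size=w))
ratio = am / gm

# The ratio is ~constant inside homogeneous speckle regions and
# deviates near edges, which is what thresholding exploits.
print("interior ratio:", ratio[60:70, 60:70].mean().round(3))
print("edge ratio:    ", ratio[38:42, 60:70].mean().round(3))
```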
204 Numerical Study of Bubbling Fluidized Beds Operating at Sub-atmospheric Conditions
Authors: Lanka Dinushke Weerasiri, Subrat Das, Daniel Fabijanic, William Yang
Abstract:
Fluidization at vacuum pressure is a topic of growing research interest. Several industrial applications (such as drying, extractive metallurgy, and chemical vapor deposition (CVD)) can potentially take advantage of vacuum pressure fluidization. In particular, the fine chemical industry requires processing under safe conditions for thermolabile substances, and reduced-pressure fluidized beds offer an alternative. Fluidized beds under vacuum conditions provide optimal conditions for the treatment of granular materials, where the reduced gas pressure maintains an operational environment outside of flammability conditions. Fluidization at low pressure is markedly different from the usual gas flow patterns of atmospheric fluidization. The different flow regimes can be characterized by the dimensionless Knudsen number. Nevertheless, the hydrodynamics of bubbling vacuum fluidized beds has not been investigated, to the authors' best knowledge. In this work, the two-fluid numerical method was used to determine the impact of reduced pressure on the fundamental properties of a fluidized bed. The slip flow model, implemented via Ansys Fluent User Defined Functions (UDF), was used to determine the interphase momentum exchange coefficient. A wide range of operating pressures was investigated (1.01, 0.5, 0.25, 0.1 and 0.03 bar). The gas was supplied by a uniform inlet at 1.5 Umf and 2 Umf. The predicted minimum fluidization velocity (Umf) shows excellent agreement with the experimental data. The results show that the operating pressure has a notable impact on the bed properties and hydrodynamics. Furthermore, they also show that the existing Gorosko correlation that predicts bed expansion is not applicable under reduced-pressure conditions.
Keywords: Computational fluid dynamics, fluidized bed, gas-solid flow, vacuum pressure, slip flow, minimum fluidization velocity.
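The Knudsen-number characterization mentioned above can be made concrete: Kn = λ/d_p, with the gas mean free path λ = k_B·T / (√2·π·d_m²·p), so λ (and hence Kn) grows as the pressure drops. The gas temperature, molecular diameter, and particle size below are assumptions, not values from the study.

```python
import math

# Kn = lambda / d_p, with the gas mean free path
# lambda = k_B * T / (sqrt(2) * pi * d_m**2 * p).
k_B = 1.380649e-23     # Boltzmann constant, J/K
T = 300.0              # temperature, K (assumed)
d_m = 3.7e-10          # effective molecular diameter of air, m (approx.)
d_p = 100e-6           # particle diameter, m (assumed fine powder)

for p_bar in (1.01, 0.5, 0.25, 0.1, 0.03):
    p = p_bar * 1e5    # bar -> Pa
    mfp = k_B * T / (math.sqrt(2) * math.pi * d_m**2 * p)
    kn = mfp / d_p
    print(f"p = {p_bar:5.2f} bar: mean free path = {mfp:.2e} m, "
          f"Kn = {kn:.1e}")
```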
203 Gluten-Free Cookies Enriched with Blueberry Pomace: Optimization of Baking Process
Authors: Aleksandra Mišan, Bojana Šarić, Nataša Nedeljković, Mladenka Pestorić, Pavle Jovanov, Milica Pojić, Jelena Tomić, Bojana Filipčev, Miroslav Hadnađev, Anamarija Mandić
Abstract:
With the aim of improving the nutritional profile and antioxidant capacity of gluten-free cookies, blueberry pomace, a by-product of juice production, was processed into a new food ingredient by drying and grinding, and used in a gluten-free cookie formulation. Since the quality of a baked product is highly influenced by the baking conditions, the objective of this work was to optimize the baking time and thickness of the dough pieces by applying Response Surface Methodology (RSM), in order to obtain the best technological quality of the cookies. The experiments were carried out according to a Central Composite Design (CCD) by selecting the dough thickness and baking time as independent variables, while hardness, color parameters (L*, a* and b* values), water activity, diameter and short/long ratio were the response variables. According to the results of the RSM analysis, a baking time of 13.74 min and a dough thickness of 4.08 mm were found to be optimal for a baking temperature of 170°C. As similar optimal parameters were obtained in a previously conducted experiment based on sensory analysis, RSM can be considered a suitable approach for optimizing the baking process.
Keywords: Baking process, blueberry pomace, gluten-free cookies, Response Surface Methodology.
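The CCD/RSM workflow amounts to fitting a second-order polynomial response surface to the design points and locating its stationary point. The sketch below does this for invented hardness data over dough thickness and baking time (the design points and response values are synthetic, not the study's data).

```python
import numpy as np

rng = np.random.default_rng(9)

# Invented CCD-style design points: (dough thickness mm, baking time min).
X = np.array([[3, 12], [5, 12], [3, 16], [5, 16],
              [2.6, 14], [5.4, 14], [4, 11.2], [4, 16.8], [4, 14]])
t, b = X[:, 0], X[:, 1]

# Synthetic hardness response with a known optimum near (4 mm, 14 min).
y = 30 + (t - 4) ** 2 + 0.5 * (b - 14) ** 2 + rng.normal(0, 0.05, t.size)

# Fit the full second-order RSM model: 1, t, b, t^2, b^2, t*b.
A = np.column_stack([np.ones_like(t), t, b, t**2, b**2, t * b])
c0, c1, c2, c11, c22, c12 = np.linalg.lstsq(A, y, rcond=None)[0]

# Stationary point: solve the gradient system grad(y) = 0.
H = np.array([[2 * c11, c12], [c12, 2 * c22]])
opt = np.linalg.solve(H, -np.array([c1, c2]))
print(f"fitted optimum: thickness = {opt[0]:.2f} mm, time = {opt[1]:.2f} min")
```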
202 Efficiency Based Model for Solar Urban Planning
Authors: Amado, M. P., Amado, A., Poggi, F., Correia de Freitas, J.
Abstract:
Today it is widely understood that global energy consumption patterns are directly related to the urban expansion and development process. This expansion is based on the natural growth of human activities and has left most urban areas totally dependent on fossil-fuel-derived external energy inputs. This status quo of production, transportation, storage and consumption of energy has become inefficient and is set to become even more so when the continuous increases in energy demand are factored in. The territorial management of land use and related activities is a central component in the search for more efficient models of energy use, models that can meet current and future regional, national and European goals.
In this paper, a methodology is developed and discussed with the aim of improving energy efficiency at the municipal level. The methodology is based on the monitoring of energy consumption and its use patterns resulting from the natural dynamism of human activities in the territory, and can be utilized to assess sustainability at the local scale. A set of parameters and indicators is defined with the objective of constructing a systemic model based on the optimization, adaptation and innovation of the current energy framework and the associated energy consumption patterns. The use of the model will enable local governments to strike the necessary balance between human activities, economic development, and the local and global environment, while safeguarding fairness in the energy sector.
Keywords: Solar urban planning, solar smart city, urban development, energy efficiency.
201 An Educational Application of Online Games for Learning Difficulties
Authors: M. Margoudi, Z. Smyrnaiou
Abstract:
The current paper presents the results of a case study. During the past few years, the number of children diagnosed with Learning Difficulties has increased drastically, especially cases of ADHD (Attention Deficit Hyperactivity Disorder). One of the core characteristics of ADHD is a deficit in working memory functions. The review of the literature indicates a plethora of educational software aimed at training and enhancing working memory. Nevertheless, in the current paper, the possibility of using free online games for the same purpose is explored. Another issue of interest is the potential effect of working memory training on the core symptoms of ADHD. In order to explore the above-mentioned research questions, three digital tests are employed, all developed on the E-slate platform by the author, to check the levels of ADHD symptoms and to be used as diagnostic tools, both at the beginning and at the end of the case study. The tools used during the main intervention of the research are free online games for the training of working memory. The research and the data analysis focus on the following axes: a) the presence of, and possible change in, two of the core symptoms of ADHD, attention and impulsivity, and b) a possible change in the general cognitive abilities of the individual. The case study was conducted with the participation of a thirteen-year-old female student diagnosed with ADHD, during after-school hours. The results of the study indicate positive changes in both the levels of attention and impulsivity. We therefore conclude that the training of working memory through the use of free online games has a positive impact on the characteristics of ADHD. Finally, concerning the second research question, the change in general cognitive abilities, no significant changes were noted.
Keywords: ADHD, attention, impulsivity, online games.
200 An Algorithm Proposed for FIR Filter Coefficients Representation
Authors: Mohamed Al Mahdi Eshtawie, Masuri Bin Othman
Abstract:
Finite impulse response (FIR) filters have the advantages of linear phase, guaranteed stability, fewer finite precision errors, and efficient implementation. In contrast, they have the major disadvantage of requiring a higher order (more coefficients) than their IIR counterparts for comparable performance. The high order demand imposes more hardware requirements, arithmetic operations, area usage, and power consumption when designing and fabricating the filter. Therefore, minimizing or reducing these parameters is a major goal in the digital filter design task. This paper presents an algorithm for modifying the values and the number of non-zero coefficients used to represent the FIR digital pulse shaping filter response. With this algorithm, the FIR filter frequency and phase response can be represented with a minimum number of non-zero coefficients, thereby reducing the arithmetic complexity needed to compute the filter output. Consequently, the system characteristics, i.e. power consumption, area usage, and processing time, are also reduced. The proposed algorithm is more powerful when integrated with multiplierless techniques such as distributed arithmetic (DA) in designing high-order digital FIR filters. Here, the use of DA eliminates the need for multipliers when implementing the multiply-and-accumulate unit (MAC), and the proposed algorithm reduces the number of adders and addition operations needed to compute the filter output by minimizing the non-zero coefficients.
Keywords: Pulse shaping Filter, Distributed Arithmetic, Optimization algorithm.
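While the paper's exact algorithm is not reproduced here, the core idea of representing a response with fewer non-zero coefficients can be illustrated by thresholding small taps and checking the frequency-response penalty; the filter specification (63 taps, 0.25 normalized cutoff) and the 1% threshold are assumptions.

```python
import numpy as np
from scipy.signal import firwin, freqz

# Assumed pulse-shaping-like design: 63-tap low-pass FIR filter.
taps = firwin(numtaps=63, cutoff=0.25)

# Zero out taps below a threshold to cut adders/MAC operations.
threshold = 0.01 * np.max(np.abs(taps))
sparse_taps = np.where(np.abs(taps) >= threshold, taps, 0.0)

# Compare the full and sparsified frequency responses.
w, h_full = freqz(taps)
_, h_sparse = freqz(sparse_taps)
err_db = 20 * np.log10(np.max(np.abs(h_full - h_sparse)))

kept = np.count_nonzero(sparse_taps)
print(f"non-zero taps: {kept}/{taps.size}, "
      f"max response deviation: {err_db:.1f} dB")
```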
199 Optimization of the Dental Direct Digital Imaging by Applying the Self-Recognition Technology
Authors: Mina Dabirinezhad, Mohsen Bayat Pour, Amin Dabirinejad
Abstract:
This paper introduces a technology intended to solve some of the deficiencies of direct digital radiology. Digital radiology is the latest progression in dental imaging and has become an essential part of dentistry. Direct digital radiology comprises two main parts: an intraoral X-ray machine and a sensor (digital image receptor). Dentists and dental nurses experience difficulties during the image-taking process with a direct digital X-ray machine. For instance, they sometimes need to readjust the sensor in the patient's mouth and take the X-ray image again because of its low quality. Another problem is that the sensor may move in the patient's mouth, yielding an unsuitable image for the dentists; this makes the process time-consuming for dentists and dental nurses. On the other hand, taking several X-ray images causes problems for the patient, such as harm to their health and pain in the mouth due to the pressure of the sensor on the jaw. The author provides a technology to solve the above-mentioned issues, called "Self-Recognition Direct Digital Radiology" (SDDR). This technology is based on the principle that the intraoral X-ray machine is capable of detecting the location of the sensor in the patient's mouth automatically. In addition to solving the aforementioned problems, SDDR technology has fewer environmental impacts in comparison to the previous version.
Keywords: Dental direct digital imaging, digital image receptor, digital x-ray machine, and environmental impacts.
198 A Qualitative Evidence of the Markedness of Code Switching during Commercial Bank Service Encounters in Ìbàdàn Metropolis
Authors: A. Robbin
Abstract:
In a multilingual setting like Nigeria, the success of service encounters is enhanced by the use of a language that meets the linguistic and persuasive demands of the interlocutors. This study examined motivations for code switching as a negotiation strategy in bank-hall desk service encounters in the Ìbàdàn metropolis, using Myers-Scotton's work on markedness in language use. The data consisted of transcribed audio recordings of bank-hall service encounters and direct observation of bank interactions in two purposively sampled commercial banks in the Ìbàdàn metropolis. The data were subjected to descriptive linguistic analysis using Myers-Scotton's Markedness Model. Findings reveal that code switching is frequently employed during different stages of the service encounter (greeting, transaction and closing) to fulfil relational, bargaining and referential functions. Bank staff and customers code switch to make unmarked, marked and explanatory choices: as a strategy to identify with a customer's cultural affiliation, close the status gap, or appeal to a begrudged customer; or as an explanatory choice with non-literate customers for ease of communication. Bankers select English to maintain customers' perceptions of prestige, which is retained or diverged from depending on the customer's linguistic preference or ability. Yoruba is seen as an efficient negotiation strategy by both bankers and their customers, who make choices within the conversation to achieve the desired conversational and functional aims.
Keywords: Markedness, bilingualism, code switching, service encounter, banking.
197 Comparison of the Music Sound System between Thailand and Vietnam
Authors: Sansanee Jasuwan
Abstract:
Thai and Vietnamese music have both been influenced and inspired by traditional Chinese music, yet the differences in their tuning systems and musical modes are clearly recognizable. The research examined the character of the musical instruments, songs and culture of Thailand and Vietnam. Songs and modes were analyzed, and tone vibration and timbre were studied in detail. This qualitative research is based on documentary and song analysis, field study, interviews and focus group discussions with Thai and Vietnamese masters. The research aims to examine the musical instruments and songs of both Thailand and Vietnam, and to compare the two countries' sound systems. The findings reveal similarities in certain kinds of instruments but differences in the sound systems, songs and scales of Thailand and Vietnam. The musical instruments of both cultures are diverse and synthetic, combining native and foreign inspirations. Vietnamese music has been strongly shaped by Chinese musical conventions; Korean, Mongolian and Japanese music have also exerted an effective influence, owing to geographical proximity. Thailand, in turn, has been influenced by Chinese and Indian traditional music. Both Thai and Vietnamese musical instruments can be divided into four groups: plucked strings, bowed strings, winds and percussion. Songs from both countries have their own characteristics: they touch people's hearts in ceremonies and social functions, and they are an essential element of the native performing arts. Vietnamese melodies have been influenced by Chinese music and take on the same character as Chinese songs, whereas Thai songs have a specific identity and variety shown in their unique melodies. Pentatonic scales have been used effectively in composing both Thai and Vietnamese songs, but with different implementation concepts.
Keywords: Music sound system, Thailand, Vietnam.
196 ZMP Based Reference Generation for Biped Walking Robots
Authors: Kemalettin Erbatur, Özer Koca, Evrim Taşkıran, Metin Yılmaz, Utku Seven
Abstract:
The past fifteen years have witnessed fast improvements in the field of humanoid robotics. The human-like robot structure is more suitable for human environments, with superior obstacle avoidance properties compared with wheeled service robots. However, walking control for bipedal robots is a challenging task due to their complex dynamics. Stable reference generation plays a very important role in control. The Linear Inverted Pendulum Model (LIPM) and the Zero Moment Point (ZMP) criterion are applied in a number of studies for stable walking reference generation of biped walking robots. This paper follows this main approach too. We propose a natural and continuous ZMP reference trajectory for a stable and human-like walk. The ZMP reference trajectories move forward under the sole of the support foot when the robot body is supported by a single leg. The robot center of mass trajectory is obtained from the predefined ZMP reference trajectories by a Fourier series approximation method. The Gibbs phenomenon problem, common with Fourier approximations of discontinuous functions, is avoided by employing continuous ZMP references. Also, these ZMP reference trajectories possess pre-assigned single and double support phases, which are very useful in experimental tuning work. The ZMP-based reference generation strategy is tested via three-dimensional full-dynamics simulations of a 12-degrees-of-freedom biped robot model. Simulation results indicate that the proposed reference trajectory generation technique is successful.
Keywords: Biped robot, Linear Inverted Pendulum Model, Zero Moment Point, Fourier series approximation.
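Under the LIPM, the ZMP p(t) and CoM x(t) are linked by p = x − (z_c/g)·ẍ, so a sinusoidal ZMP component of frequency ω maps to a CoM component scaled by 1/(1 + (z_c/g)·ω²). The sketch below applies this term by term to a Fourier series of an assumed ZMP reference; the CoM height, gait period, and series coefficients are illustrative, not the paper's values.

```python
import numpy as np

# LIPM relation: p(t) = x(t) - (z_c / g) * x''(t).
# For x_k(t) = A_k sin(w_k t): p_k(t) = A_k (1 + (z_c/g) w_k^2) sin(w_k t),
# so each CoM coefficient is the ZMP one divided by that factor.
z_c, g = 0.8, 9.81          # assumed CoM height (m), gravity (m/s^2)
T = 1.6                     # assumed gait period (s)
n_terms = 15

t = np.linspace(0.0, 2 * T, 800)

# Assumed forward ZMP reference built from a truncated sine series.
P = np.array([0.3 / k if k % 2 else 0.05 / k for k in range(1, n_terms + 1)])

x_com = np.zeros_like(t)
zmp = np.zeros_like(t)
for k, P_k in enumerate(P, start=1):
    w_k = 2 * np.pi * k / T
    A_k = P_k / (1 + (z_c / g) * w_k**2)   # CoM coefficient from ZMP one
    zmp += P_k * np.sin(w_k * t)
    x_com += A_k * np.sin(w_k * t)

print("peak ZMP excursion :", zmp.max().round(3), "m")
print("peak CoM excursion :", x_com.max().round(3), "m")
```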
195 Author Profiling: Prediction of Learners’ Gender on a MOOC Platform Based on Learners’ Comments
Authors: Tahani Aljohani, Jialin Yu, Alexandra. I. Cristea
Abstract:
The more an educational system knows about a learner, the more personalised interaction it can provide, which leads to better learning. However, asking a learner directly is potentially disruptive and often ignored by learners. Especially in the booming realm of Massive Open Online Course (MOOC) platforms, only a very low percentage of users disclose demographic information about themselves. Thus, in this paper, we aim to predict learners' demographic characteristics by proposing an approach using linguistically motivated deep learning architectures for learner profiling, particularly targeting gender prediction on a FutureLearn MOOC platform. Additionally, we tackle the difficult problem of predicting the gender of learners based on their comments only, which are often available across MOOCs. The most common current approaches to text classification use the Long Short-Term Memory (LSTM) model, considering sentences as sequences. However, human language also has structure. In this research, rather than considering sentences as plain sequences, we hypothesise that higher semantic- and syntactic-level sentence processing based on linguistics will render a richer representation. We thus evaluate the traditional LSTM against bleeding-edge models that take syntactic structure into account, such as the tree-structured LSTM, the Stack-augmented Parser-Interpreter Neural Network (SPINN) and the Structure-Aware Tag Augmented model (SATA). Additionally, we explore using different word-level encoding functions. We have implemented these methods on our MOOC dataset, on which they perform best, and on a public sentiment analysis dataset that is further used to cross-examine the models' results.
Keywords: Deep learning, data mining, gender prediction, MOOCs.
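As a baseline reference for the sequence-LSTM variant only (the tree-structured, SPINN, and SATA models are not sketched), a plain LSTM gender classifier might look as follows in Keras; the vocabulary size, sequence length, and training data are placeholders, not the paper's setup.

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Embedding, LSTM

# Placeholder data: 200 comments, each padded/truncated to 50 token ids
# from a 5000-word vocabulary, with binary gender labels.
rng = np.random.default_rng(6)
X = rng.integers(1, 5000, size=(200, 50))
y = rng.integers(0, 2, size=200)

# Plain sequence LSTM baseline (cf. the tree-LSTM/SPINN/SATA variants).
model = Sequential([
    Embedding(input_dim=5000, output_dim=64),
    LSTM(64),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
model.summary()
```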
194 Mechanical Behavior of Recycled Mortars Manufactured from Moisture Correction Using the Halogen Light Thermogravimetric Balance as an Alternative to the Traditional ASTM C 128 Method
Authors: Diana Gómez-Cano, J. C. Ochoa-Botero, Roberto Bernal Correa, Yhan Paul Arias
Abstract:
To obtain high mechanical performance, the fresh conditions of a mortar are decisive. Measuring the absorption of the aggregates used in mortar mixes is a fundamental requirement for the proper design of the mixes prior to their placement on construction sites. In this sense, absorption is a determining factor in the design of a mix because it conditions the amount of water, which in turn affects the water/cement ratio and the final porosity of the mortar. Thus, this work focuses on the mechanical behavior of recycled mortars manufactured from moisture correction using the Thermogravimetric Balancing Halogen Light (TBHL) technique, in comparison with the traditional ASTM C 128 International Standard method. The advantages of using the TBHL technique are reduced consumption of resources such as materials, energy and time. The results show that, in contrast to the ASTM C 128 method, the alternative TBHL technique yields higher precision in the absorption values of recycled aggregates, which is reflected not only in a more efficient and sustainable characterization process for construction materials, but also in the mechanical performance of the recycled mortars.
Keywords: Alternative raw materials, halogen light, recycled mortar, resources optimization, water absorption.
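Whichever instrument supplies the masses, water absorption is computed the same way: absorption (%) = (m_SSD − m_dry)/m_dry × 100, where m_SSD is the saturated-surface-dry mass and m_dry the dried mass. The values in the sketch below are invented for illustration.

```python
def water_absorption(m_ssd: float, m_dry: float) -> float:
    """Water absorption (%) of an aggregate sample.

    m_ssd: saturated-surface-dry mass (g)
    m_dry: oven- or halogen-dried mass (g)
    """
    return (m_ssd - m_dry) / m_dry * 100.0

# Invented masses for a recycled-aggregate sample.
print(f"absorption = {water_absorption(512.4, 480.1):.2f} %")
```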
193 Estimating Saturated Hydraulic Conductivity from Soil Physical Properties using Neural Networks Model
Authors: B. Ghanbarian-Alavijeh, A.M. Liaghat, S. Sohrabi
Abstract:
Saturated hydraulic conductivity is one of the soil hydraulic properties widely used in environmental studies, especially of subsurface groundwater. Since its direct measurement is time-consuming and therefore costly, indirect methods such as pedotransfer functions have been developed, based on multiple linear regression equations and neural network models, to estimate saturated hydraulic conductivity from readily available soil properties, e.g. sand, silt, and clay contents, bulk density, and organic matter. The objective of this study was to develop a neural networks (NNs) model to estimate saturated hydraulic conductivity from available parameters such as sand and clay contents, bulk density, van Genuchten retention model parameters (i.e. θ_r, α, and n), as well as effective porosity. We used two methods to calculate effective porosity: (1) φ_eff = θ_s − θ_FC, and (2) φ_eff = θ_s − θ_inf, in which θ_s is the saturated water content, θ_FC is the water content retained at −33 kPa matric potential, and θ_inf is the water content at the inflection point. A total of 311 soil samples from the UNSODA database was divided into three groups: 187 for training, 62 for validation (to avoid overtraining), and 62 for testing of the NNs model. A commercial neural network toolbox of the MATLAB software, with a multi-layer perceptron model and the back propagation algorithm, was used for the training procedure. Statistical parameters such as the correlation coefficient (R²) and the mean square error (MSE) were used to evaluate the developed NNs model. The best numbers of neurons in the middle layer of the NNs model for methods (1) and (2) were found to be 44 and 6, respectively. The R² and MSE values of the test phase were determined to be 0.94 and 0.0016 for method (1), and 0.98 and 0.00065 for method (2), respectively, which shows that method (2) estimates saturated hydraulic conductivity better than method (1).
Keywords: Neural network, saturated hydraulic conductivity, soil physical properties.
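A minimal stand-in for the MATLAB multi-layer perceptron described above, using scikit-learn with invented feature values; the feature set follows the paper's method (2) inputs (sand, clay, bulk density, θ_r, α, n, φ_eff) and the 6-neuron hidden layer found best for that method, but the data and target relation are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)

# Invented training data: rows of [sand %, clay %, bulk density,
# theta_r, alpha, n, phi_eff]; target is log10(Ksat) (synthetic).
X = rng.uniform([5, 5, 1.1, 0.01, 0.005, 1.1, 0.05],
                [90, 60, 1.8, 0.15, 0.150, 3.0, 0.45], size=(187, 7))
y = 0.05 * X[:, 0] - 0.08 * X[:, 1] + rng.normal(0, 0.3, 187)

# Single hidden layer of 6 neurons, as found best for method (2).
model = MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000,
                     random_state=0)
model.fit(X, y)
print("R^2 on training data:", round(model.score(X, y), 3))
```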
192 Integrated Wastewater Reuse Project of the Faculty of Sciences Ain Chock, Morocco
Authors: Nihad Chakri, Btissam El Amrani, Faouzi Berrada, Fouad Amraoui
Abstract:
In Morocco, water scarcity requires the exploitation of non-conventional resources. Rural areas are under-equipped with sanitation infrastructure, unlike urban areas. Decentralized and low-cost solutions could improve the quality of life of the population and the environment. In this context, the Faculty of Sciences Ain Chock (FSAC) has undertaken an integrated project to treat part of its wastewater using a decentralized compact system. The project proposes alternative solutions that are inexpensive and adapted to the context of peri-urban and rural areas, in order to treat the wastewater generated and to use it for irrigation, watering and cleaning. For this purpose, several tests were carried out in the laboratory in order to develop a liquid waste treatment system optimized for local conditions. Based on the laboratory-scale results of the different proposed scenarios, we designed and implemented a prototype of a mini wastewater treatment plant for the faculty. In this article, we outline the steps of dimensioning, construction and monitoring of the mini-station at our faculty.
Keywords: Wastewater, purification, response surface methodology optimization, vertical filter, Moving Bed Biofilm Reactor (MBBR) process, sizing, prototype, Faculty of Sciences Ain Chock, decentralized approach, mini wastewater treatment plant, reuse of treated wastewater, irrigation, sustainable development.
191 Comparison of Two Maintenance Policies for a Two-Unit Series System Considering General Repair
Authors: Seyedvahid Najafi, Viliam Makis
Abstract:
In recent years, maintenance optimization has attracted special attention due to the growing complexity of industrial systems. Maintenance costs are high for many systems, and preventive maintenance is effective when it increases operational reliability and safety at a reduced cost. The novelty of this research is to consider general repair in the modeling of multi-unit series systems and to solve the maintenance problem for such systems using the semi-Markov decision process (SMDP) framework. We propose an opportunistic maintenance policy for a series system composed of two main units. Unit 1, which is more expensive than unit 2, is subjected to condition monitoring, and its deterioration is modeled using a gamma process. The unit 1 hazard rate is estimated by the proportional hazards model (PHM), and two hazard rate control limits are considered as the thresholds of maintenance interventions for unit 1. Maintenance is performed on unit 2 considering an age control limit. The objective is to find the optimal control limits and minimize the long-run expected average cost per unit time. The proposed algorithm is applied to a numerical example to compare the effectiveness of the proposed policy (policy Ⅰ) with policy Ⅱ, which is similar to policy Ⅰ except that replacement is performed instead of general repair. Results show that policy Ⅰ leads to a lower average cost compared with policy Ⅱ.
Keywords: Condition-based maintenance, proportional hazards model, semi-Markov decision process, two-unit series systems.
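The PHM hazard that drives the control limits has the standard form h(t, Z_t) = h₀(t)·exp(γ·Z_t). The sketch below simulates gamma-process deterioration for unit 1 and flags the moment the hazard crosses a control limit; the Weibull-type baseline, gamma-process parameters, and limit value are all assumed, and the SMDP optimization itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(8)

# PHM hazard: h(t, Z) = h0(t) * exp(gamma * Z), with a Weibull-type
# baseline h0(t) = (beta/eta) * (t/eta)**(beta - 1). Values assumed.
beta, eta, gamma = 2.0, 1000.0, 0.04

def hazard(t, z):
    return (beta / eta) * (t / eta) ** (beta - 1) * np.exp(gamma * z)

# Gamma-process deterioration of unit 1, sampled every 10 hours.
dt, shape, scale = 10.0, 0.5, 1.5
limit = 5e-3                      # assumed hazard-rate control limit
t, z = 0.0, 0.0
while True:
    t += dt
    z += rng.gamma(shape, scale)  # monotone increasing degradation
    if hazard(t, z) > limit:
        print(f"maintenance triggered at t={t:.0f} h, Z={z:.1f}, "
              f"h={hazard(t, z):.2e}")
        break
```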