Search results for: battery grading algorithm
2006 Performance Comparison of Non-Binary RA and QC-LDPC Codes
Abstract:
Repeat-Accumulate (RA) codes are a subclass of LDPC codes with fast encoder structures. In this paper, we consider a non-binary extension of binary LDPC codes over GF(q) and construct a non-binary RA code and a non-binary QC-LDPC code over GF(2^4): the non-binary RA codes are constructed with a linear encoding method, and the non-binary QC-LDPC codes with an algebraic construction method. The BER performance of the RA and QC-LDPC codes over GF(q) is then compared under BP decoding by simulation over Additive White Gaussian Noise (AWGN) channels.
Keywords: non-binary RA codes, QC-LDPC codes, performance comparison, BP algorithm
Procedia PDF Downloads 376
2005 Adaptive Routing in NoC-Based Heterogeneous MPSoCs
Authors: M. K. Benhaoua, A. E. H. Benyamina, T. Djeradi, P. Boulet
Abstract:
In this paper, we propose adaptive routing that considers the routing of communications in order to optimize the overall performance. The routing technique uses a newly proposed algorithm to route communications between the tasks. The proposed routing of the communications leads to better optimization of several performance metrics (time and energy consumption). Experimental results show that the proposed routing approach provides significant performance improvements compared to static routing.
Keywords: multi-processor systems-on-chip (MPSoCs), network-on-chip (NoC), heterogeneous architectures, adaptive routing
Procedia PDF Downloads 375
2004 Ultracapacitor State-of-Energy Monitoring System with On-Line Parameter Identification
Authors: N. Reichbach, A. Kuperman
Abstract:
The paper describes the design of a monitoring system for supercapacitor packs in propulsion systems, allowing the instantaneous energy capacity to be determined under power loading. The system contains a real-time recursive-least-squares identification mechanism estimating the values of pack capacitance and equivalent series resistance. These values are required for accurate calculation of the state-of-energy.
Keywords: real-time monitoring, RLS identification algorithm, state-of-energy, supercapacitor
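As a rough illustration of the recursive-least-squares idea described in this abstract, the following Python sketch estimates the capacitance and equivalent series resistance of a simple series R-C pack model from synthetic current/voltage samples. The model structure, parameter values, sampling period and forgetting factor are all assumptions made for the example; they are not taken from the paper.

```python
import numpy as np

# Synthetic "measurements" from a simple series R-C ultracapacitor model
# (hypothetical pack values; the paper's parameters are not given).
dt, C_true, R_true = 0.1, 350.0, 0.012           # s, F, ohm
rng = np.random.default_rng(0)
i = 20.0 * np.sin(0.05 * np.arange(3000)) + rng.normal(0, 0.5, 3000)   # load current [A]
v = np.zeros(3000)
q = 0.0
for k in range(1, 3000):
    q += i[k] * dt
    v[k] = q / C_true + R_true * i[k] + rng.normal(0, 1e-3)

# Recursive least squares on dv(k) = (dt/C)*i(k) + R*(i(k) - i(k-1))
theta = np.array([1e-3, 1e-3])                   # [dt/C, R] initial guess
P = np.eye(2) * 1e3
lam = 0.999                                      # forgetting factor
for k in range(2, 3000):
    phi = np.array([i[k], i[k] - i[k - 1]])
    y = v[k] - v[k - 1]
    K = P @ phi / (lam + phi @ P @ phi)
    theta = theta + K * (y - phi @ theta)
    P = (P - np.outer(K, phi) @ P) / lam

print("estimated C   =", dt / theta[0], "F   (true", C_true, ")")
print("estimated ESR =", theta[1], "ohm (true", R_true, ")")
```

With sufficiently rich load current, the two parameters converge to the values used to generate the data; the state-of-energy can then be computed from the estimated capacitance.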
Procedia PDF Downloads 535
2003 Performance Evaluation of Packet Scheduling with Channel Conditioning Aware Based on Wimax Networks
Authors: Elmabruk Laias, Abdalla M. Hanashi, Mohammed Alnas
Abstract:
Worldwide Interoperability for Microwave Access (WiMAX) has become one of the most challenging issues, since it is responsible for distributing the available resources of the network among all users; this has led to the demand for constructing and designing highly efficient scheduling algorithms in order to improve network utilization, increase network throughput, and minimize the end-to-end delay. In this study, the proposed algorithm focuses on an efficient mechanism to serve non-real-time traffic in congested networks by considering channel status.
Keywords: WiMAX, Quality of Service (QoS), OPNET, Diff-Serv (DS)
Procedia PDF Downloads 286
2002 Development of a Quick On-Site Pass/Fail Test for the Evaluation of Fresh Concrete Destined for Application as Exposed Concrete
Authors: Laura Kupers, Julie Piérard, Niki Cauberg
Abstract:
The use of exposed concrete (sometimes referred to as architectural concrete) keeps gaining popularity. Exposed concrete has the advantage of combining the structural properties of concrete with an aesthetic finish. However, for a successful aesthetic finish, much attention needs to be paid to the execution (formwork, release agent, curing, weather conditions…), the concrete composition (choice of the raw materials and mix proportions) as well as to its fresh properties. For the latter, a simple on-site pass/fail test could halt the casting of concrete not suitable for architectural concrete and thus avoid expensive repairs later. When architects opt for exposed concrete, they usually want a smooth, uniform and nearly blemish-free surface. For this choice, a standard ‘construction’ concrete does not suffice. An aesthetic surface finish requires the concrete to contain a minimum content of fines to minimize the risk of segregation and to allow complete filling of more complex shaped formworks. The concrete may not be too viscous either, as this makes it more difficult to compact and increases the risk of blow holes blemishing the surface. On the other hand, too much bleeding may cause color differences on the concrete surface. An easy pass/fail test, which can be performed on the site just before the casting, could avoid these problems. If the fresh concrete fails the test, it can be rejected; only if it passes the test would the concrete be cast. The pass/fail tests are intended for a concrete with a consistency class S4. Five tests were selected as possible on-site pass/fail tests. Two of these tests already exist: the K-slump test (ASTM C1362) and the Bauer Filter Press Test. The remaining three tests were developed by the BBRI in order to test the segregation resistance of fresh concrete on site: the ‘dynamic sieve stability test’, the ‘inverted cone test’ and an adapted ‘visual stability index’ (VSI) for the slump and flow test. These tests were inspired by existing tests for self-compacting concrete, for which the segregation resistance is of great importance. The suitability of the fresh concrete mixtures was also tested by means of a laboratory reference test (resistance to segregation) and by visual inspection (blow holes, structure…) of small test walls. More than fifteen concrete mixtures of different quality were tested. The results of the pass/fail tests were compared with the results of this laboratory reference test and the test walls. The preliminary laboratory results indicate that concrete mixtures ‘suitable’ for placing as exposed concrete (containing sufficient fines, a balanced grading curve, etc.) can be distinguished from ‘inferior’ concrete mixtures. Additional laboratory tests, as well as tests on site, will be conducted to confirm these preliminary results and to set appropriate pass/fail values.
Keywords: exposed concrete, testing fresh concrete, segregation resistance, bleeding, consistency
Procedia PDF Downloads 423
2001 Adaptive Beamforming with Steering Error and Mutual Coupling between Antenna Sensors
Authors: Ju-Hong Lee, Ching-Wei Liao
Abstract:
Owing to the close spacing between antenna sensors within a compact space, part of the data in one antenna sensor leaks into the other antenna sensors when the sensors in an antenna array operate simultaneously. This phenomenon is called the mutual coupling effect (MCE). It has been shown that the performance of antenna array systems can be degraded when the antenna sensors are in close proximity. In particular, in systems equipped with massive numbers of antenna sensors, degradation of the beamforming performance due to the MCE is practically inevitable. Moreover, it has been shown that even a small angle error between the true direction angle of the desired signal and the steering angle deteriorates the effectiveness of an array beamforming system. However, the true direction vector of the desired signal may not be exactly known in some applications, e.g., in land mobile-cellular wireless systems. Therefore, it is worth developing robust techniques to deal with the problems due to the MCE and steering angle error in array beamforming systems. In this paper, we present an efficient technique for performing adaptive beamforming with robust capabilities against the MCE and the steering angle error. Only the data vector received by an antenna array is required by the proposed technique. By using the received array data vector, a correlation matrix is constructed to replace the original correlation matrix associated with the received array data vector. Then, the mutual coupling matrix due to the MCE on the antenna array is estimated through a recursive algorithm. An appropriate estimate of the direction angle of the desired signal can also be obtained during the recursive process. Based on the estimated mutual coupling matrix, the estimated direction angle, and the reconstructed correlation matrix, the proposed technique can effectively cure the performance degradation due to steering angle error and the MCE. The novelty of the proposed technique is that the implementation procedure is very simple and the resulting adaptive beamforming performance is satisfactory. Simulation results show that the proposed technique provides much better beamforming performance without requiring high computational complexity as compared with the existing robust techniques.
Keywords: adaptive beamforming, mutual coupling effect, recursive algorithm, steering angle error
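The robust recursive technique itself is not detailed in the abstract, so the sketch below only shows the conventional baseline it builds on: a diagonally loaded MVDR (Capon) beamformer computed from the sample correlation matrix of simulated snapshots, with a deliberate 2-degree steering error. Array geometry, signal powers and the loading factor are assumptions made for illustration, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, d = 8, 500, 0.5             # sensors, snapshots, element spacing in wavelengths
theta_s, theta_i = 10.0, -40.0    # assumed desired-signal and interferer angles (deg)

def steering(theta_deg):
    k = np.arange(M)
    return np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

# Simulated snapshots: desired signal + strong interferer + white noise
sig = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
jam = 10 * (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N)))
X = np.outer(steering(theta_s), sig) + np.outer(steering(theta_i), jam) + noise

R = X @ X.conj().T / N                                # sample correlation matrix
R += 1e-2 * np.trace(R).real / M * np.eye(M)          # diagonal loading for robustness
a = steering(theta_s + 2.0)                           # steering vector with a 2-degree error
w = np.linalg.solve(R, a)
w /= a.conj() @ w                                     # MVDR (Capon) weight normalization

gain = lambda th: 20 * np.log10(abs(w.conj() @ steering(th)) + 1e-12)
print("gain toward desired signal : %6.2f dB" % gain(theta_s))
print("gain toward interferer     : %6.2f dB" % gain(theta_i))
```

The paper's contribution replaces the fixed loading with a recursive estimation of the mutual coupling matrix and the direction angle; the baseline above only illustrates the quantities (steering vectors, correlation matrix, weights) that such a scheme manipulates.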
Procedia PDF Downloads 321
2000 Numerical Analysis of the Response of Thin Flexible Membranes to Free Surface Water Flow
Authors: Mahtab Makaremi Masouleh, Günter Wozniak
Abstract:
This work is part of a major research project concerning the design of a light, temporarily installable textile flood control structure. The motivation for this work is the great need for applying light structures for the protection of coastal areas from the detrimental effects of rapid water runoff. The prime objective of the study is the numerical analysis of the interaction between free surface water flow and slender, pliable structures, which plays a key role in the safety performance of the intended system. First, the behavior of a down-scaled membrane is examined under hydrostatic pressure with the Abaqus explicit solver, which is part of the commercially available, finite-element-based SIMULIA software. Then the procedure to achieve a stable and convergent solution for strongly coupled media including fluids and structures is explained. A partitioned strategy is imposed so that both structures and fluids are discretized and solved with appropriate formulations and solvers. In this regard, the finite element method is again selected to analyze the structural domain. Moreover, computational fluid dynamics algorithms are introduced for the solution of the flow domains by means of the commercial package Star-CCM+. Likewise, the SIMULIA co-simulation engine and an implicit coupling algorithm, which are available communication tools in Star-CCM+, enable powerful transmission of data between the two applied codes. This approach is discussed for two different cases and compared with available experimental records. In one case, the down-scaled membrane interacts with an open-channel flow whose velocity increases with time. The second case illustrates how the full-scale flexible flood barrier behaves when massive flotsam is accelerated towards it.
Keywords: finite element formulation, finite volume algorithm, fluid-structure interaction, light pliable structure, VOF multiphase model
Procedia PDF Downloads 186
1999 Segmenting 3D Optical Coherence Tomography Images Using a Kalman Filter
Authors: Deniz Guven, Wil Ward, Jinming Duan, Li Bai
Abstract:
Over the past two decades or so, Optical Coherence Tomography (OCT) has been used to diagnose retina and optic nerve diseases. The retinal nerve fibre layer, for example, is a powerful diagnostic marker for detecting and staging glaucoma. With the advances in optical imaging hardware, the adoption of OCT is now commonplace in clinics. More and more OCT images are being generated, and for these OCT images to have clinical applicability, accurate automated OCT image segmentation software is needed. OCT image segmentation is still an active research area, as OCT images are inherently noisy due to multiplicative speckle noise. Simple edge detection algorithms are unsuitable for detecting retinal layer boundaries in OCT images. Intensity fluctuations, motion artefacts, and the presence of blood vessels further decrease OCT image quality. In this paper, we introduce a new method for segmenting three-dimensional (3D) OCT images. This involves the use of a Kalman filter, which is commonly used in computer vision for object tracking. The Kalman filter is applied to the 3D OCT image volume to track the retinal layer boundaries through the slices within the volume and thus segment the 3D image. Specifically, after some pre-processing of the OCT images, points on the retinal layer boundaries in the first image are identified, and curve fitting is applied to them so that the layer boundaries can be represented by the coefficients of the curve equations. These coefficients then form the state space for the Kalman filter. The filter then produces an optimal estimate of the current state of the system by updating its previous state using the measurements available, in the form of a feedback control loop. The results show that the algorithm can be used to segment the retinal layers in OCT images. One of the limitations of the current algorithm is that the curve representation of the retinal layer boundary does not work well when the layer boundary splits into two, e.g., at the optic nerve. This may be resolved by using a different approach to representing the boundaries, such as B-splines or level sets. The use of a Kalman filter shows promise for developing accurate and effective 3D OCT segmentation methods.
Keywords: optical coherence tomography, image segmentation, Kalman filter, object tracking
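A minimal sketch of the tracking idea follows, assuming each layer boundary is represented by a quadratic curve whose coefficients form the Kalman state and are updated slice by slice. The synthetic boundary data, noise covariances and random-walk state model are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
width, n_slices = 200, 40
x = np.arange(width)
A = np.vstack([np.ones(width), x, x**2]).T          # design matrix for y = c0 + c1*x + c2*x^2

# Synthetic "boundary detections": a slowly drifting parabola plus detection noise
true_c = np.array([80.0, 0.05, -2e-4])
meas_coeffs = []
for s in range(n_slices):
    true_c = true_c + np.array([0.3, 0.0, 0.0]) * rng.normal()   # drift across slices
    y = A @ true_c + rng.normal(0, 2.0, width)                   # noisy boundary points
    c_fit, *_ = np.linalg.lstsq(A, y, rcond=None)                # per-slice curve-fit measurement
    meas_coeffs.append(c_fit)

# Kalman filter over the curve coefficients (random-walk state model)
nc = 3
x_hat = meas_coeffs[0].copy()
P = np.eye(nc)
Q = np.diag([0.2, 1e-6, 1e-10])     # assumed slice-to-slice process noise
Rm = np.diag([0.5, 1e-4, 1e-8])     # assumed measurement noise of the fit
H = np.eye(nc)
tracked = []
for z in meas_coeffs:
    P = P + Q                                    # predict (identity state transition)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rm)
    x_hat = x_hat + K @ (z - H @ x_hat)          # update with the fitted coefficients
    P = (np.eye(nc) - K @ H) @ P
    tracked.append(x_hat.copy())

print("smoothed boundary offset in last slice:", tracked[-1][0])
```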
Procedia PDF Downloads 482
1998 Low-Cost, Portable Optical Sensor with Regression Algorithm Models for Accurate Monitoring of Nitrites in Environments
Authors: David X. Dong, Qingming Zhang, Meng Lu
Abstract:
Nitrites enter waterways as runoff from croplands and are discharged from many industrial sites. Excessive nitrite inputs to water bodies lead to eutrophication. On-site rapid detection of nitrite is of increasing interest for managing fertilizer application and monitoring water source quality. Existing methods for detecting nitrites use spectrophotometry, ion chromatography, electrochemical sensors, ion-selective electrodes, chemiluminescence, and colorimetric methods. However, these methods either suffer from high cost or provide low measurement accuracy due to their poor selectivity to nitrites. Therefore, it is desirable to develop an accurate and economical method to monitor nitrites in the environment. We report a low-cost optical sensor, in conjunction with a machine learning (ML) approach, to enable high-accuracy detection of nitrites in water sources. The sensor works on the principle of measuring the molecular absorption of nitrites at three narrowband wavelengths (295 nm, 310 nm, and 357 nm) in the ultraviolet (UV) region. These wavelengths are chosen because they have relatively high sensitivity to nitrites; low-cost light-emitting diodes (LEDs) and photodetectors are also available at these wavelengths. A regression model is built, trained, and utilized to minimize the cross-sensitivities of these wavelengths to the same analyte, thus achieving precise and reliable measurements in the presence of various interfering ions. The measured absorbance data is input to the trained model, which provides a nitrite concentration prediction for the sample. The sensor is built with i) a miniature quartz cuvette as the test cell that contains the liquid sample under test, ii) three low-cost UV LEDs placed on one side of the cell as light sources, with each LED providing a narrowband light, and iii) a photodetector with a built-in amplifier and an analog-to-digital converter placed on the other side of the test cell to measure the power of the transmitted light. This simple optical design allows measuring the absorbance of the sample at the three wavelengths. To train the regression model, the absorbances of nitrite ions and their combinations with various interfering ions are first obtained at the three UV wavelengths using a conventional spectrophotometer. Then, the spectrophotometric data are input to different regression algorithm models for training and for evaluating high-accuracy nitrite concentration prediction. Our experimental results show that the proposed approach enables instantaneous nitrite detection within several seconds. The sensor hardware costs about one hundred dollars, which is much cheaper than a commercial spectrophotometer. The ML algorithm helps to reduce the average relative errors to below 3.5% over a concentration range from 0.1 ppm to 100 ppm of nitrites. The sensor has been validated by measuring nitrites at three sites in Ames, Iowa, USA. This work demonstrates an economical and effective approach to the rapid, reagent-free determination of nitrites with high accuracy. The integration of the low-cost optical sensor and ML data processing can find a wide range of applications in environmental monitoring and management.
Keywords: optical sensor, regression model, nitrites, water quality
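A hedged sketch of the regression step is shown below. It uses made-up absorptivity coefficients to generate a synthetic three-wavelength calibration set and a ridge regression model in place of whatever regression algorithm performed best in the study; only the overall workflow (absorbances in, concentration out) mirrors the abstract.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic calibration set: absorbances at 295, 310 and 357 nm.
# Coefficients are invented for illustration; real ones come from spectrophotometer data.
n = 400
conc_no2 = rng.uniform(0.1, 100, n)          # nitrite concentration, ppm
conc_int = rng.uniform(0, 50, (n, 2))        # two interfering ions, ppm
eps_no2 = np.array([0.020, 0.035, 0.012])    # assumed sensitivity of nitrite at each wavelength
eps_int = np.array([[0.004, 0.001, 0.006],
                    [0.002, 0.005, 0.001]])  # assumed sensitivities of the interferers
A = np.outer(conc_no2, eps_no2) + conc_int @ eps_int
A += rng.normal(0, 0.002, A.shape)           # detector noise

X_tr, X_te, y_tr, y_te = train_test_split(A, conc_no2, test_size=0.25, random_state=0)
model = Ridge(alpha=1e-3).fit(X_tr, y_tr)    # regression model over the 3 wavelengths
pred = model.predict(X_te)
rel_err = np.abs(pred - y_te) / y_te
print("mean relative error: %.2f %%" % (100 * rel_err.mean()))
```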
Procedia PDF Downloads 72
1997 Structure Conduct and Performance of Rice Milling Industry in Sri Lanka
Authors: W. A. Nalaka Wijesooriya
Abstract:
The increasing paddy production, the stabilization of domestic rice consumption and the increasing dynamism of rice processing and domestic markets call for a rethinking of the general direction of the rice milling industry in Sri Lanka. The main purpose of the study was to explore the levels of concentration in the rice milling industry in Polonnaruwa and Hambanthota, which are the country's major hubs for rice milling. Concentration indices reveal that the rice milling industry operates as a weak oligopsony in Polonnaruwa and is highly competitive in Hambanthota. According to the actual quantity of paddy milled per day, 47% of mills process less than 8 Mt/day, 34% process 8-20 Mt/day, and the rest (19%) process more than 20 Mt/day. In Hambanthota, nearly 50% of the mills belong to the 8-20 Mt/day range. Lack of experience in the milling industry, poor knowledge of milling technology, lack of capital and difficulty in finding an output market are the major entry barriers to the industry. The major problems faced by all the rice millers are the lack of a uniform electricity supply and low-quality paddy. Many of the millers emphasized that the rice ceiling price is a constraint on producing quality rice. More than 80% of the millers in Polonnaruwa, which is the major parboiled rice producing area, have mechanical dryers. Nearly 22% of millers have modern machinery such as color sorters and water jet polishers. The major paddy purchasing method of large-scale millers in Polonnaruwa is through brokers, whereas in Hambanthota the major channel is millers purchasing directly from paddy farmers. Millers in both districts have their major rice selling markets in Colombo and its suburbs. Huge variation can be observed in the amount of pledge (paddy storage) loans. There is a strong relationship among the storage ability, credit affordability and scale of operation of rice millers. The inter-annual price fluctuation ranged from 30% to 35%. Analysis of market margins using a series of secondary data shows that the farmers' share of the rice consumer price is stable or slightly increasing in both districts; in Hambanthota a greater share goes to the farmer. Only four mills which have obtained the Good Manufacturing Practices (GMP) certification from the Sri Lanka Standards Institution can be found, and all of them are small-quantity rice exporters. Priority should be given to small and medium scale millers in the distribution of storage paddy of the PMB during the off season. The industry needs a proper rice grading system, and it is recommended to introduce a ceiling price based on graded rice according to the standards. Both husk and rice bran were underutilized. Encouraging investment for establishing a rice oil manufacturing plant in the Polonnaruwa area is highly recommended. The current taxation procedure needs to be restructured in order to ensure the sustainability of the industry.
Keywords: conduct, performance, structure (SCP), rice millers
Procedia PDF Downloads 328
1996 Stressful Life Events and Their Influence on Childhood Obesity and Emotional Well-Being: Cross-Sectional Study
Authors: M. Rojo, M. Blanco, T. Lacruz, S. Solano, L. Beltran, M. Graell, A. R. Sepulveda.
Abstract:
There is an association between an early accumulation of Stressful Life Events (SLE) during childhood and various physical and psychological health complications. However, there are only a few studies on this topic in children and adolescents with overweight or obesity. The general aim of the study was to evaluate the accumulation and type of SLE in 200 children from 8 to 12 years old and to analyze the relationship with their emotional well-being and weight status (obesity, overweight and normal weight). The children and their families completed an interview. The variables evaluated in this study are sociodemographic measures, medical/psychological history, anthropometric measures (BMI, z-BMI), and psychological variables (the children's clinical interview K-SADS-PL (Schedule for Affective Disorders and Schizophrenia for School-Age Children, Present and Lifetime Version) and a battery of questionnaires). Results: Children with overweight and obesity accumulate more stressful events from an early age and have a significantly higher percentage of psychiatric diagnoses compared to their peers with normal weight. Presenting a childhood psychiatric disorder is related to a greater z-BMI and a higher total number of SLE (p < 0.001). A higher z-BMI is also related to a greater number of stressful events during childhood. There is also a positive and significant relationship between the total number of SLE and worse emotional well-being (higher levels of anxious and depressive symptoms and lower self-esteem in children) (p < 0.01). Conclusion: Children with overweight and obesity grow up in family, school, and social contexts where more stressors accumulate. This is also directly associated with worse emotional well-being. It is necessary to implement multidisciplinary prevention and intervention strategies in different settings (school, family, and health). This study is included in a project funded by the Ministry of Innovation and Science (PSI2011-23127).
Keywords: childhood obesity, emotional well-being, psychopathology, stressful life events
Procedia PDF Downloads 127
1995 A Case Study on Quantitatively and Qualitatively Increasing Student Output by Using Available Word Processing Applications to Teach Reluctant Elementary School-Age Writers
Authors: Vivienne Cameron
Abstract:
Background: Between 2010 and 2017, teachers in a suburban public school district struggled to get students to consistently produce adequate writing samples as measured by the Pennsylvania state writing rubric for measuring focus, content, organization, style, and conventions. A common thread in all of the data was the need to develop stamina in the student writers. Method: All of the teachers used the traditional writing process model (prewrite, draft, revise, edit, final copy) during writing instruction. One teacher taught the writing process using word processing and incentivized students with publication instead of the traditional pencil/paper/grading method. Students did not have instruction in typing/keyboarding. The teacher submitted the resulting student work to real-life contests, magazines, and publishers. Results: Students in the test group increased both the quantity and quality of their writing over a seven-month period as measured by the Pennsylvania state writing rubric. Reluctant writers, as well as students with autism spectrum disorder, benefited from this approach. This outcome was repeated consistently over a five-year period. Interpretation: Removing the burden of pencil and paper allowed students to participate in the writing process more fully. Writing with pencil and paper is physically tiring. Students are discouraged when they submit a draft and are instructed to use the Add, Remove, Move, Substitute (ARMS) method to revise their papers: each successive version becomes shorter. Allowing students to type their papers frees them to quickly and easily make changes. The result is longer writing pieces in shorter time frames, allowing the teacher to spend more time working on individual needs. With this additional time, the teacher can concentrate on teaching focus, content, organization, style, conventions, and audience. S/he also has a larger body of work from which to draw for whole-group instruction, such as developing effective leads. The teacher submitted the resulting student work to contests, magazines, and publishers. Although time-consuming, the submission process was an invaluable lesson for teaching about audience and tone. All students in the test sample had work accepted for publication. Students became highly motivated to succeed when their work was accepted for publication. This motivation applied to special needs students, regular education students, and gifted students.
Keywords: elementary-age students, reluctant writers, teaching strategies, writing process
Procedia PDF Downloads 175
1994 A Wearable Device to Overcome Post-Stroke Learned Non-Use; The Rehabilitation Gaming System for Wearables: Methodology, Design and Usability
Authors: Javier De La Torre Costa, Belen Rubio Ballester, Martina Maier, Paul F. M. J. Verschure
Abstract:
After a stroke, a great number of patients experience persistent motor impairments such as hemiparesis or weakness in one entire side of the body. As a result, the lack of use of the paretic limb might be one of the main contributors to functional loss after clinical discharge. We aim to reverse this cycle by promoting the use of the paretic limb during activities of daily living (ADLs). To do so, we describe the key components of a system that is composed of a wearable bracelet (i.e., a smartwatch) and a mobile phone, designed to bring a set of neurorehabilitation principles that promote acquisition, retention and generalization of skills to the home of the patient. A fundamental question is whether the loss in motor function derived from learned non-use may emerge as a consequence of decision-making processes for motor optimization. Our system is based on well-established rehabilitation strategies that aim to reverse this behaviour by increasing the reward associated with action execution as well as implicitly reducing the expected cost associated with the use of the paretic limb, following the notion of reinforcement-induced movement therapy (RIMT). Here we validate an accelerometer-based measure of arm use, and its capacity to discriminate different activities that require increasing movement of the arm. We also show how the system can act as a personalized assistant by providing specific goals and adjusting them depending on the performance of the patients. The usability and acceptance of the device as a rehabilitation tool is tested using a battery of self-reported and objective measurements obtained from acute/subacute patients and healthy controls. We believe that an extension of these technologies will allow for the deployment of unsupervised rehabilitation paradigms during and beyond the hospitalization time.
Keywords: stroke, wearables, learned non-use, hemiparesis, ADLs
Procedia PDF Downloads 217
1993 The Role of Cultural Expectations in Emotion Regulation among Nepali Adolescents
Authors: Martha Berg, Megan Ramaiya, Andi Schmidt, Susanna Sharma, Brandon Kohrt
Abstract:
Nepali adolescents report tension and negative emotion due to perceived expectations of both academic and social achievement. These societal goals, which are internalized through early-life socialization, drive the development of self-regulatory processes such as emotion regulation. Emotion dysregulation is linked with adverse psychological outcomes such as depression, self-harm, and suicide, which are public health concerns for organizations working with Nepali adolescents. This study examined the relation among socialization, internalized cultural goals, and emotion regulation to inform interventions for reducing depression and suicide in this population. Participants included 102 students in grades 7 through 9 in a post-earthquake school setting in rural Kathmandu Valley. All participants completed a tablet-based battery of quantitative measures, comprising transculturally adapted assessments of emotion regulation, depression, and self-harm/suicide ideation and behavior. Qualitative measures included two focus groups and semi-structured interviews with 22 students and 3 parents. A notable proportion of the sample reported depression symptoms in the past 2 weeks (68%), lifetime self-harm ideation (28%), and lifetime suicide attempts (13%). Students who lived with their nuclear family reported lower levels of difficulty than those who lived with more distant relatives (z=2.16, p=.03), which suggests a link between family environment and adolescent emotion regulation, potentially mediated by socialization and internalization of cultural goals. These findings call for further research into the aspects of nuclear versus extended family environments that shape the development of emotion regulation.
Keywords: adolescent mental health, emotion regulation, Nepal, socialization
Procedia PDF Downloads 272
1992 Price Prediction Line, Investment Signals and Limit Conditions Applied for the German Financial Market
Authors: Cristian Păuna
Abstract:
In the first decades of the 21st century, in the electronic trading environment, algorithmic capital investments became the primary tool for making a profit by speculation in financial markets. A significant number of traders, private and institutional investors, participate in the capital markets every day using automated algorithms. Autonomous trading software is today a considerable part of the business intelligence system of any modern financial activity. The trading decisions and orders are made automatically by computers using different mathematical models. This paper presents one of these models, called the Price Prediction Line. A mathematical algorithm is revealed to build a reliable trend line, which is the base for limit conditions and automated investment signals, the core of a computerized investment system. The paper shows how to apply these tools to generate entry and exit investment signals, how to use limit conditions to build a mathematical filter for investment opportunities, and the methodology to integrate all of these into automated investment software. The paper also presents trading results obtained for the leading German financial market index with the presented methods, in order to analyze and compare different automated investment algorithms. It was found that a specific mathematical algorithm can be optimized and integrated into an automated trading system with good and sustained results for the leading German market. Investment results are compared in order to qualify the presented model. In conclusion, a risk-to-reward ratio of 1:6.12 was obtained by applying the trigonometric method to the DAX Deutscher Aktienindex over a 24-month investment period. These results are superior to those obtained with other similar models, as this paper reveals. The general idea sustained by this paper is that the Price Prediction Line model presented is a reliable capital investment methodology that can be successfully applied to build an automated investment system with excellent results.
Keywords: algorithmic trading, automated trading systems, high-frequency trading, DAX Deutscher Aktienindex
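The exact Price Prediction Line formula is not given in the abstract, so the following sketch only illustrates the general mechanism with a generic least-squares trend line over a rolling window, plus band-type limit conditions that gate entry and exit signals. The synthetic price series, window length and band width are assumptions for illustration and are unrelated to the DAX results reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic daily closes standing in for a market index (the DAX series itself is not reproduced).
price = 10000 + np.cumsum(rng.normal(5, 60, 500))

window, band = 50, 1.5        # look-back length and limit-condition width (assumptions)
signals = []
for t in range(window, len(price)):
    y = price[t - window:t]
    x = np.arange(window)
    slope, intercept = np.polyfit(x, y, 1)        # least-squares trend line over the window
    line_now = intercept + slope * window         # trend value projected to "now"
    sigma = np.std(y - (intercept + slope * x))   # dispersion around the line
    if price[t] < line_now - band * sigma and slope > 0:
        signals.append((t, "buy"))                # price dipped below an up-trend: entry
    elif price[t] > line_now + band * sigma and slope < 0:
        signals.append((t, "sell"))               # price spiked above a down-trend: exit/short

print("number of generated signals:", len(signals))
print("first few:", signals[:5])
```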
Procedia PDF Downloads 130
1991 Hydraulic Characteristics of Mine Tailings by Metaheuristics Approach
Authors: Akhila Vasudev, Himanshu Kaushik, Tadikonda Venkata Bharat
Abstract:
A large number of mine tailings are produced every year as part of the extraction process of phosphates, gold, copper, and other materials. Mine tailings have a high water content and very slow dewatering behavior. The efficient design of tailings dams and the economical disposal of these slurries require knowledge of the tailings' consolidation behavior. The large-strain consolidation theory closely predicts the self-weight consolidation of these slurries, as it enforces the conservation of mass and momentum and treats the hydraulic conductivity as a function of the void ratio. Classical laboratory techniques, such as the settling column test, the seepage consolidation test, etc., are expensive and time-consuming for estimating the variation of hydraulic conductivity with void ratio. Inverse estimation of the constitutive relationships from measured settlement versus time curves is explored. In this work, inverse analysis based on metaheuristic techniques is explored for predicting the hydraulic conductivity parameters of mine tailings from the base excess pore water pressure dissipation curve and the initial conditions of the mine tailings. The proposed inverse model uses the particle swarm optimization (PSO) algorithm, which is based on the social behavior of animals searching for food sources. The finite-difference numerical solution of the forward analytical model is integrated with the PSO algorithm to solve the inverse problem. The method is tested on synthetic data of base excess pore pressure dissipation curves generated using the finite difference method. The effectiveness of the method is verified using a base excess pore pressure dissipation curve obtained from a settling column experiment and further ensured through comparison with available predicted hydraulic conductivity parameters.
Keywords: base excess pore pressure, hydraulic conductivity, large strain consolidation, mine tailings
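A compact sketch of the PSO-based inverse analysis follows, in which a simple exponential dissipation curve stands in for the finite-difference forward model of large-strain consolidation (the real forward model is far more involved). The swarm size, parameter bounds and PSO coefficients are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-in forward model: the real one is a finite-difference solution of the
# large-strain consolidation equations; here an exponential dissipation curve
# with two unknown parameters plays its role purely for illustration.
def forward(params, t):
    u0, tau = params
    return u0 * np.exp(-t / tau)

t = np.linspace(0, 100, 60)                                       # elapsed time [days]
u_obs = forward((12.0, 35.0), t) + rng.normal(0, 0.1, t.size)     # "measured" base excess pore pressure

def misfit(params):
    return np.sum((forward(params, t) - u_obs) ** 2)

# Basic particle swarm optimization of the misfit
n_particles, n_iter = 30, 200
lb, ub = np.array([1.0, 5.0]), np.array([30.0, 100.0])
pos = rng.uniform(lb, ub, (n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([misfit(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()
w, c1, c2 = 0.7, 1.5, 1.5
for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lb, ub)
    val = np.array([misfit(p) for p in pos])
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("recovered parameters:", gbest)   # should be close to (12, 35)
```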
Procedia PDF Downloads 136
1990 An Iberian Study about Location of Parking Areas for Dangerous Goods
Authors: María Dolores Caro, Eugenio M. Fedriani, Ángel F. Tenorio
Abstract:
When lorries transport dangerous goods, there are legal stipulations in the European Union for ensuring the security of the other road users as well as of the goods being transported. In this respect, lorry drivers cannot park in usual parking areas, because they must use parking areas with special conditions, including permanent supervision by security personnel. Moreover, drivers are compelled to satisfy additional regulations on resting and driving times, which constrain the practical possibility of reaching suitable parking areas within these time parameters. The “European Agreement concerning the International Carriage of Dangerous Goods by Road” (ADR) is the basic regulation on the transportation of dangerous goods imposed under the recommendations of the United Nations Economic Commission for Europe. Indeed, nowadays there are not enough parking areas adapted for dangerous goods, and no complete study has suggested the best locations to build new areas or to adapt existing ones so as to provide the areas necessary for lorry drivers to follow all the regulations. The goal of this paper is to show how many additional parking areas should be built in the Iberian Peninsula to allow lorry drivers to park in such areas under their restrictions on resting and driving time. To do so, we have modeled the problem via graph theory and applied a new efficient algorithm which determines an optimal solution for the problem of locating new parking areas to complement those already existing in the ADR for the Iberian Peninsula. The solution can be considered minimal, since the number of additional parking areas returned by the algorithm is minimal in quantity. Obviously, graph theory is a natural way to model and solve the proposed problem, because we have considered as nodes the already-existing parking areas, the loading-and-unloading locations and the bifurcations of roads, while each edge between two nodes represents the existence of a road between both nodes (the distance between the nodes is the edge's weight). Except for bifurcations, all the nodes correspond to parking areas already existing and, hence, the problem corresponds to determining the additional nodes in the graph such that there are at most 100 km (the maximal distance allowed by the European regulations) between two nodes representing parking areas.
Keywords: dangerous goods, parking areas, Iberian Peninsula, graph-based modeling
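A toy sketch of the graph formulation is given below on an invented road network, using a simple spacing heuristic (one extra area per started 100 km between adjacent parking nodes) rather than the authors' optimal algorithm; node names, distances and the counting rule are all assumptions.

```python
import math
import networkx as nx

# Toy road graph: nodes are parking areas (P*) or road bifurcations (B*),
# edge weights are road distances in km (all values invented for illustration).
G = nx.Graph()
edges = [("P1", "B1", 60), ("B1", "P2", 55), ("P2", "B2", 90),
         ("B2", "P3", 130), ("P1", "B2", 95), ("B1", "B2", 70)]
G.add_weighted_edges_from(edges)
parking = {n for n in G if n.startswith("P")}

LIMIT = 100.0   # km, maximal spacing allowed between consecutive parking areas

# For every pair of "adjacent" parking areas (shortest route not passing through
# another parking area), count how many extra areas the heuristic would add.
extra = 0
for a in parking:
    for b in parking:
        if a >= b:
            continue
        d = nx.shortest_path_length(G, a, b, weight="weight")
        path = nx.shortest_path(G, a, b, weight="weight")
        if any(n in parking for n in path[1:-1]):
            continue                      # another parking area already lies in between
        if d > LIMIT:
            extra += math.ceil(d / LIMIT) - 1

print("additional parking areas suggested by the heuristic:", extra)
```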
Procedia PDF Downloads 580
1989 Kalman Filter for Bilinear Systems with Application
Authors: Abdullah E. Al-Mazrooei
Abstract:
In this paper, we present a new kind of bilinear system in the form of a state-space model. The evolution of this system depends on the product of the state vector with itself. The well-known Lotka-Volterra and Lorenz models are special cases of this new model. We also present a generalization of the Kalman filter which is suitable for working with the new bilinear model. An application to real measurements is introduced to illustrate the efficiency of the proposed algorithm.
Keywords: bilinear systems, state space model, Kalman filter, application, models
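Since the paper's generalized filter is not spelled out in the abstract, the sketch below applies a standard extended Kalman filter to a discretized Lotka-Volterra model, i.e., one member of the bilinear family mentioned above; the coefficients, noise covariances and measurement model are assumptions, and the authors' own filter may differ.

```python
import numpy as np

rng = np.random.default_rng(6)
dt = 0.01
a, b, c, d = 1.1, 0.4, 0.4, 0.1   # Lotka-Volterra coefficients (illustrative values)

def f(x):
    # One Euler step of a Lotka-Volterra model: the state evolution involves
    # products of state components, i.e. a bilinear-type term.
    return np.array([x[0] + dt * (a * x[0] - b * x[0] * x[1]),
                     x[1] + dt * (d * x[0] * x[1] - c * x[1])])

def F_jac(x):
    return np.array([[1 + dt * (a - b * x[1]), -dt * b * x[0]],
                     [dt * d * x[1],            1 + dt * (d * x[0] - c)]])

H = np.array([[1.0, 0.0]])                 # only the first population is measured
Q, R = 1e-5 * np.eye(2), np.array([[0.05]])

x_true = np.array([10.0, 5.0])
x_hat, P = np.array([8.0, 4.0]), np.eye(2)
for k in range(2000):
    # Simulate the "true" system and a noisy measurement
    x_true = f(x_true) + rng.multivariate_normal(np.zeros(2), Q)
    z = H @ x_true + rng.normal(0, np.sqrt(R[0, 0]), 1)

    # Extended Kalman filter: linearize the bilinear dynamics at the estimate
    Fk = F_jac(x_hat)
    x_hat = f(x_hat)
    P = Fk @ P @ Fk.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (z - H @ x_hat)
    P = (np.eye(2) - K @ H) @ P

print("final estimate:", x_hat, " true state:", x_true)
```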
Procedia PDF Downloads 440
1988 Deterministic and Stochastic Modeling of a Micro-Grid Management for Optimal Power Self-Consumption
Authors: D. Calogine, O. Chau, S. Dotti, O. Ramiarinjanahary, P. Rasoavonjy, F. Tovondahiniriko
Abstract:
Mafate is a natural circus in the north-western part of Reunion Island, without an electrical grid or road network. A micro-grid concept is being experimented with in this area, composed of photovoltaic production combined with electrochemical batteries, in order to meet the local population's electricity demands through self-consumption. This work develops a discrete model as well as a stochastic model in order to reach an optimal equilibrium between production and consumption for a cluster of houses. The management of the energy flows leads to a large linearized programming system, where the time interval of interest is 24 hours. The experimental data are the solar production, the stored energy, and the parameters of the different electrical devices and batteries. The unknown variables to evaluate are the consumptions of the various electrical services, the energy drawn from and stored in the batteries, and the inhabitants' planning wishes. The objective is to fit the solar production to the electrical consumption of the inhabitants, with an optimal use of the energy in the batteries, while satisfying as widely as possible the users' planning requirements. In the discrete model, the different parameters and solutions of the linear programming system are deterministic scalars, whereas in the stochastic approach, the data parameters and the linear programming solutions become random variables, the distributions of which can be imposed or established by estimation from samples of real observations or from samples of optimal discrete equilibrium solutions.
Keywords: photovoltaic production, power consumption, battery storage resources, random variables, stochastic modeling, estimation of probability distributions, mixed integer linear programming, smart micro-grid, self-consumption of electricity
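A minimal sketch of the kind of linear program involved is shown below for a single day and a single aggregated load, built with scipy's linprog. It is a continuous relaxation (the real formulation is mixed-integer and much larger), and the production and demand profiles, battery parameters and penalty weights are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

T = 24
hours = np.arange(T)
# Illustrative daily profiles (kWh per hour); real inputs would be the measured
# solar production and the inhabitants' appliance demands.
pv = np.clip(4.0 * np.sin(np.pi * (hours - 6) / 12), 0, None)     # daylight bell shape
demand = 1.0 + 0.8 * np.exp(-((hours - 19) ** 2) / 8.0)           # evening peak

cap, c_max, g_max = 10.0, 3.0, 3.0       # battery size and power limits (assumed)
eta_c, eta_d, s0 = 0.95, 0.95, 5.0       # efficiencies and initial state of charge

# Decision vector x = [charge(24), discharge(24), unserved(24), curtailed(24)]
nv = 4 * T
cost = np.concatenate([1e-3 * np.ones(T), 1e-3 * np.ones(T), np.ones(T), np.zeros(T)])

# Hourly energy balance: pv + discharge + unserved - charge - curtailed = demand
A_eq = np.zeros((T, nv))
for t in range(T):
    A_eq[t, t] = -1.0          # charge
    A_eq[t, T + t] = 1.0       # discharge
    A_eq[t, 2 * T + t] = 1.0   # unserved demand
    A_eq[t, 3 * T + t] = -1.0  # curtailed surplus
b_eq = demand - pv

# State of charge must stay within [0, cap]: s_t = s0 + cumsum(eta_c*c - g/eta_d)
L = np.tril(np.ones((T, T)))
S = np.hstack([eta_c * L, -L / eta_d, np.zeros((T, 2 * T))])
A_ub = np.vstack([S, -S])
b_ub = np.concatenate([(cap - s0) * np.ones(T), s0 * np.ones(T)])

bounds = [(0, c_max)] * T + [(0, g_max)] * T + [(0, None)] * (2 * T)
res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
print("unserved energy over the day: %.2f kWh" % res.x[2 * T:3 * T].sum())
```

In the stochastic version described above, pv and demand would become samples of random variables and the same program would be solved over the sampled scenarios.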
Procedia PDF Downloads 110
1987 Hybrid Energy System for the German Mining Industry: An Optimized Model
Authors: Kateryna Zharan, Jan C. Bongaerts
Abstract:
In recent years, the economic attractiveness of renewable energy (RE) for the mining industry, especially for off-grid mines, and the negative environmental impact of fossil energy have been stimulating the use of RE for mining needs. Since remote-area mines have higher energy expenses than mines connected to a grid, the integration of RE may give a mine economic benefits. According to the literature review, there is a lack of business models for the adoption of RE at mines. The main aim of this paper is to develop an optimized model of RE integration into the German mining industry (GMI). With around 800 million tonnes of resources extracted annually, Germany is included in the list of the 15 major mining countries in the world. Accordingly, the mining potential of Germany is evaluated in this paper as a prospective market for RE implementation. The GMI has been classified in order to find out the location of resources, the quantity and types of the mines, the amount of extracted resources, and the access of the mines to energy resources. Additionally, weather conditions have been analyzed in order to figure out where wind and solar generation technologies can be integrated into a mine with the highest efficiency. Despite the fact that the electricity demand of the GMI is almost completely covered by a grid connection, a hybrid energy system (HES) based on a mix of RE and fossil energy is developed in order to show environmental and economic benefits. The HES for the GMI combines wind turbine, solar PV, battery and diesel generation. The model has been calculated using the HOMER software. Furthermore, the demonstrated HES contains a forecasting model that predicts solar and wind generation in advance. The main result of the HES, namely the CO2 emission reduction, is estimated in order to make mining processing more environmentally friendly.
Keywords: diesel generation, German mining industry, hybrid energy system, hybrid optimization model for electric renewables, optimized model, renewable energy
Procedia PDF Downloads 343
1986 Automatic Identification of Pectoral Muscle
Authors: Ana L. M. Pavan, Guilherme Giacomini, Allan F. F. Alves, Marcela De Oliveira, Fernando A. B. Neto, Maria E. D. Rosa, Andre P. Trindade, Diana R. De Pina
Abstract:
Mammography is an image modality used worldwide to diagnose breast cancer, even in asymptomatic women. Due to its wide availability, mammograms can be used to measure breast density and to predict cancer development. Women with increased mammographic density have a four- to sixfold increase in their risk of developing breast cancer. Therefore, studies have been made to accurately quantify mammographic breast density. In clinical routine, radiologists perform image evaluations through the BIRADS (Breast Imaging Reporting and Data System) assessment. However, this method has inter- and intra-individual variability. An automatic, objective method to measure breast density could relieve the radiologist's workload by providing a first opinion. However, the pectoral muscle is a high-density tissue with characteristics similar to fibroglandular tissue, and it is consequently hard to automatically quantify mammographic breast density. Therefore, a pre-processing step is needed to segment the pectoral muscle, which may otherwise erroneously be quantified as fibroglandular tissue. The aim of this work was to develop an automatic algorithm to segment and extract the pectoral muscle in digital mammograms. The database consisted of thirty medio-lateral oblique digital mammograms from São Paulo Medical School. This study was developed with ethical approval from the authors' institutions and national review panels under protocol number 3720-2010. An algorithm was developed, on the Matlab® platform, for the pre-processing of the images. The algorithm uses image processing tools to automatically segment and extract the pectoral muscle of mammograms. Firstly, a thresholding technique was applied to remove non-biological information from the image. Then, the Hough transform is applied to find the boundary of the pectoral muscle, followed by the active contour method, whose seed is placed at the pectoral muscle boundary found by the Hough transform. An experienced radiologist also manually performed the pectoral muscle segmentation. Both methods, manual and automatic, were compared using the Jaccard index and Bland-Altman statistics. The comparison between the manual and the developed automatic method presented a Jaccard similarity coefficient greater than 90% for all analyzed images, showing the efficiency and accuracy of the proposed segmentation method. The Bland-Altman statistics compared both methods in relation to the area (mm²) of the segmented pectoral muscle and showed data within the 95% confidence interval, confirming the accuracy of the segmentation compared to the manual method. Thus, the method proved to be accurate and robust, segmenting rapidly and free from intra- and inter-observer variability. It is concluded that the proposed method may be used reliably to segment the pectoral muscle in digital mammography in clinical routine. The segmentation of the pectoral muscle is very important for further quantification of the fibroglandular tissue volume present in the breast.
Keywords: active contour, fibroglandular tissue, Hough transform, pectoral muscle
Procedia PDF Downloads 350
1985 Two-Level Separation of High Air Conditioner Consumers and Demand Response Potential Estimation Based on Set Point Change
Authors: Mehdi Naserian, Mohammad Jooshaki, Mahmud Fotuhi-Firuzabad, Mohammad Hossein Mohammadi Sanjani, Ashknaz Oraee
Abstract:
In recent years, the development of communication infrastructure and smart meters has facilitated the utilization of demand-side resources, which can enhance the stability and economic efficiency of power systems. Direct load control programs can play an important role in the utilization of demand-side resources in the residential sector. However, the investments required for installing control equipment can be a limiting factor in the development of such demand response programs. Thus, the selection of consumers with higher potential is crucial to the success of a direct load control program. Heating, ventilation, and air conditioning (HVAC) systems, which due to the heat capacity of buildings feature relatively high flexibility, make up a major part of household consumption. Considering that the consumption of HVAC systems depends highly on the ambient temperature, and bearing in mind the high investments required for control systems enabling direct load control demand response programs, in this paper a recent solution is presented to uncover consumers with high air conditioner demand among a large number of consumers and to measure the demand response potential of such consumers. This can pave the way for estimating the investments needed for the implementation of direct load control programs for residential HVAC systems and for estimating the demand response potentials in a distribution system. In doing so, we first cluster consumers into several groups based on the correlation coefficients between hourly consumption data and hourly temperature data using the K-means algorithm. Then, by applying a recent algorithm to the hourly consumption and temperature data, consumers with high air conditioner consumption are identified. Finally, the demand response potential of such consumers is estimated based on the equivalent desired temperature setpoint changes.
Keywords: communication infrastructure, smart meters, power systems, HVAC system, residential HVAC systems
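A hedged sketch of the two steps on synthetic data follows: consumers are clustered by the correlation between hourly load and temperature with K-means, and a rough demand-response figure is then derived from each high-AC consumer's temperature-sensitive slope for an assumed 2 degC setpoint change. The consumer model and the potential formula are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
hours = 24 * 30                                     # one month of hourly data
temp = 28 + 6 * np.sin(2 * np.pi * np.arange(hours) / 24) + rng.normal(0, 1, hours)

# Synthetic consumers: some with strongly temperature-driven (AC) load, some without.
n_cons = 200
ac_share = rng.uniform(0, 1, n_cons)                # unknown AC intensity per consumer
base = rng.uniform(0.3, 1.0, (n_cons, 1))
load = base + ac_share[:, None] * 0.12 * np.clip(temp - 24, 0, None) \
       + rng.normal(0, 0.05, (n_cons, hours))

# Step 1: cluster consumers by the correlation between consumption and temperature
corr = np.array([np.corrcoef(load[i], temp)[0, 1] for i in range(n_cons)])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(corr.reshape(-1, 1))
hot_cluster = np.argmax([corr[labels == k].mean() for k in range(3)])
high_ac = np.where(labels == hot_cluster)[0]

# Step 2: rough demand-response potential for a 2 degC setpoint increase, assuming the
# shed load scales with each consumer's temperature-sensitive slope (kW per degC).
slopes = np.array([np.polyfit(np.clip(temp - 24, 0, None), load[i], 1)[0] for i in high_ac])
potential_kw = (slopes * 2.0).sum()
print("high-AC consumers identified:", len(high_ac))
print("estimated DR potential: %.1f kW" % potential_kw)
```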
Procedia PDF Downloads 67
1984 Failure Analysis of the Gasoline Engines Injection System
Authors: Jozef Jurcik, Miroslav Gutten, Milan Sebok, Daniel Korenciak, Jerzy Roj
Abstract:
The paper presents the research results on an electronic fuel injection system, which can be used for the diagnostics of automotive systems. The construction and operation of a typical fuel injection system are described and its electronic part is analyzed. A method is also proposed for the detection of injector malfunction, based on the analysis of differential current or voltage characteristics. In order to detect the fault state, a self-learning process based on an appropriate self-learning algorithm is needed.
Keywords: electronic fuel injector, diagnostics, measurement, testing device
Procedia PDF Downloads 552
1983 Impact of Charging PHEV at Different Penetration Levels on Power System Network
Authors: M. R. Ahmad, I. Musirin, M. M. Othman, N. A. Rahmat
Abstract:
Plug-in Hybrid-Electric Vehicles (PHEVs) have gained immense popularity in recent years. The PHEV offers numerous advantages compared to the conventional internal-combustion engine (ICE) vehicle. Millions of PHEVs are estimated to be on the road in the USA by 2020. Uncoordinated PHEV charging is believed to cause severe impacts on the power grid, i.e., feeder, line and transformer overloads and voltage drops. Nevertheless, the improper PHEV data models used in such studies may render their findings inappropriate. Although smart charging has become more attractive to researchers in recent years, its implementation is not yet attainable on the street due to its requirement for physical infrastructure readiness and technology advancement. As a first step, it is best to study the impact of charging PHEVs based on real vehicle travel data from the National Household Travel Survey (NHTS) and at the present charging rate. Due to the lack of charging stations on the street at the moment, charging PHEVs at home is the best option and has been considered in this work. This paper proposes a technique that comprehensively presents the impact of charging PHEVs on power system networks, considering a huge number of PHEV samples with their travel data patterns. A Vehicles Charging Load Profile (VCLP) is developed and implemented in the IEEE 30-bus test system that represents a portion of the American Electric Power System (Midwestern US). A normalization technique is used to correspond to real-time loads at all buses. Results from the study indicate that charging PHEVs using opportunity charging will have significant impacts on power system networks, especially when bigger battery capacities (kWh) are used and at higher penetration levels.
Keywords: plug-in hybrid electric vehicle, transportation electrification, impact of charging PHEV, electricity demand profile, load profile
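A sketch of how an uncoordinated home-charging load profile (a VCLP-like curve) can be built by Monte Carlo sampling of arrival times and daily mileage is given below; the distributions, charger rating and battery cap are stand-in assumptions rather than the NHTS-derived data used in the paper.

```python
import numpy as np

rng = np.random.default_rng(8)

# Aggregated uncoordinated home-charging profile for a fleet of PHEVs.
n_vehicles = 1000
arrival = np.clip(rng.normal(18, 2, n_vehicles), 0, 23).astype(int)   # home-arrival hour
miles = rng.lognormal(mean=3.2, sigma=0.5, size=n_vehicles)           # daily miles driven

kwh_per_mile, charger_kw, eff = 0.3, 3.3, 0.9     # home-charging assumptions
energy = np.minimum(miles * kwh_per_mile, 16.0)   # recharge need, capped by battery size

profile = np.zeros(24)
for arr, e in zip(arrival, energy):
    hours_needed = e / (charger_kw * eff)
    h = arr
    while hours_needed > 0:
        step = min(1.0, hours_needed)
        profile[h % 24] += charger_kw * step      # uncoordinated: plug in on arrival
        hours_needed -= step
        h += 1

peak_hour = int(np.argmax(profile))
print("aggregate charging peak: %.0f kW at hour %d" % (profile.max(), peak_hour))
```

Such a profile, scaled to the chosen penetration level, is what gets added to the bus loads of a test system before running the power-flow impact study.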
Procedia PDF Downloads 287
1982 DQN for Navigation in Gazebo Simulator
Authors: Xabier Olaz Moratinos
Abstract:
Drone navigation is critical, particularly during the initial phases, such as the initial ascent, where pilots may fail due to strong external interferences that could potentially lead to a crash. In this ongoing work, a drone has been successfully trained to perform an ascent of up to 6 meters under external disturbances pushing it at speeds of up to 24 mph, with the DQN algorithm managing the external forces affecting the system. It has been demonstrated that the system can control its height, position, and stability in all three axes (roll, pitch, and yaw) throughout the process. The learning process is carried out in the Gazebo simulator, which emulates the interferences, while ROS is used to communicate with the agent.
Keywords: machine learning, DQN, Gazebo, navigation
Procedia PDF Downloads 113
1981 Optimization of the Numerical Fracture Mechanics
Authors: H. Hentati, R. Abdelmoula, Li Jia, A. Maalej
Abstract:
In this work, we present numerical simulations of quasi-static crack propagation based on the variational approach. We perform numerical simulations of a piece of brittle material without an initial crack. An alternate minimization algorithm is used. Based on these numerical results, we determine the influence of the numerical parameters on the location of the crack. We show the importance of trying to optimize the time of the numerical computation and present a first attempt to develop a simple numerical method to optimize this time.
Keywords: fracture mechanics, optimization, variational approach, mechanics
Procedia PDF Downloads 606
1980 Towards Learning Query Expansion
Authors: Ahlem Bouziri, Chiraz Latiri, Eric Gaussier
Abstract:
The steady growth in the size of textual document collections is a key progress driver for modern information retrieval techniques, whose effectiveness and efficiency are constantly challenged. Given a user query, the number of retrieved documents can be overwhelmingly large, hampering their efficient exploitation by the user. In addition, retaining only relevant documents in a query answer is of paramount importance for effectively meeting the user's needs. In this situation, the query expansion technique offers an interesting solution for obtaining a complete answer while preserving the quality of the retained documents. This mainly relies on an accurate choice of the terms added to an initial query. Interestingly enough, query expansion takes advantage of large text volumes by extracting statistical information about index term co-occurrences and using it to make user queries better fit the real information needs. In this respect, a promising track consists in the application of data mining methods to extract dependencies between terms, namely a generic basis of association rules between terms. The key feature of our approach is a better trade-off between the size of the mining result and the conveyed knowledge. Thus, faced with the huge number of derived association rules, and in order to select the optimal combination of query terms from the generic basis, we propose to model the problem as a classification problem and solve it using a supervised learning algorithm such as SVM or k-means. For this purpose, we first generate a training set using a genetic-algorithm-based approach that explores the association rules space in order to find an optimal set of expansion terms, improving the MAP of the search results. The experiments were performed on the SDA 95 collection, a data collection for information retrieval. It was found that the results were better in terms of both MAP and NDCG. The main observation is that the hybridization of text mining techniques and query expansion in an intelligent way allows us to incorporate the good features of all of them. As this is a preliminary attempt in this direction, there is a large scope for enhancing the proposed method.
Keywords: supervised learning, classification, query expansion, association rules
Procedia PDF Downloads 325
1979 Automatic Vowel and Consonant's Target Formant Frequency Detection
Authors: Othmane Bouferroum, Malika Boudraa
Abstract:
In this study, a dual exponential model for CV formant transitions is derived from the locus theory of speech perception. Then, an algorithm for automatic vowel and consonant target formant frequency detection is developed and tested on real speech. The results show that vowels and consonants are detected through transitions rather than through their small stable portions. Also, vowel reduction is clearly observed in our data. These results are confirmed by the observations made in perceptual experiments in the literature.
Keywords: acoustic invariance, coarticulation, formant transition, locus equation
Procedia PDF Downloads 270
1978 Assessment of Mortgage Applications Using Fuzzy Logic
Authors: Swathi Sampath, V. Kalaichelvi
Abstract:
The assessment of the risk posed by a borrower to a lender is one of the common problems that financial institutions have to deal with. Consumers vying for a mortgage are generally compared to each other by the use of a number called the Credit Score, which is generated by applying a mathematical algorithm to information in the applicant's credit report. The higher the credit score, the lower the risk posed by the candidate, and the more suitable the candidate is for the lender to take on. The objective of the present work is to use fuzzy logic and linguistic rules to create a model that generates Credit Scores.
Keywords: credit scoring, fuzzy logic, mortgage, risk assessment
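A small sketch of fuzzy scoring is given below, with hand-rolled triangular membership functions and a weighted-average (Sugeno-style) defuzzification; the input variables, membership ranges and rule consequents are invented for illustration and differ from whatever rule base the authors actually use.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def credit_score(income_k, debt_ratio):
    # Fuzzify the inputs (ranges and shapes are illustrative assumptions).
    income = {"low": tri(income_k, 0, 0, 40),
              "medium": tri(income_k, 20, 60, 100),
              "high": tri(income_k, 70, 150, 150)}
    debt = {"low": tri(debt_ratio, 0, 0, 0.25),
            "medium": tri(debt_ratio, 0.15, 0.35, 0.55),
            "high": tri(debt_ratio, 0.45, 1.0, 1.0)}

    # Linguistic rules, each mapped to a crisp consequent score.
    rules = [
        (min(income["high"], debt["low"]), 800),    # high income, low debt -> excellent
        (min(income["medium"], debt["low"]), 720),
        (min(income["medium"], debt["medium"]), 650),
        (min(income["low"], debt["medium"]), 580),
        (debt["high"], 500),                        # high debt dominates -> poor
    ]
    w = np.array([r[0] for r in rules])
    s = np.array([r[1] for r in rules])
    return float((w * s).sum() / (w.sum() + 1e-9))  # weighted-average defuzzification

print("applicant A:", round(credit_score(income_k=85, debt_ratio=0.10)))
print("applicant B:", round(credit_score(income_k=30, debt_ratio=0.60)))
```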
Procedia PDF Downloads 405
1977 Limit-Cycles Method for the Navigation and Avoidance of Any Form of Obstacles for Mobile Robots in Cluttered Environment
Authors: F. Boufera, F. Debbat
Abstract:
This paper deals with an approach based on the limit-cycles method for the problem of obstacle avoidance by mobile robots in unknown environments, for any form of obstacle. The purpose of this approach is the improvement of the limit-cycles method in order to obtain safe and flexible navigation. The proposed algorithm has been successfully tested in different configurations in simulation.
Keywords: mobile robot, navigation, avoidance of obstacles, limit-cycles method
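A minimal 2D sketch of the limit-cycle principle for a single circular avoidance region is shown below: the robot heads straight for the goal until the obstacle blocks the way, then follows a limit-cycle vector field that circulates around a circle enclosing the obstacle, and resumes goal attraction once past it. The switching test, geometry and gains are assumptions made for the example; the paper's improved method for arbitrary obstacle shapes is not reproduced here.

```python
import numpy as np

goal = np.array([10.0, 0.0])
obstacle, r_avoid = np.array([5.0, 0.3]), 1.5      # obstacle centre and avoidance radius

def blocks(p):
    """Obstacle lies ahead, near the straight line to the goal, and close enough."""
    to_goal, to_obs = goal - p, obstacle - p
    ahead = to_obs @ to_goal > 0
    perp = abs(to_goal[0] * to_obs[1] - to_goal[1] * to_obs[0]) / (np.linalg.norm(to_goal) + 1e-9)
    return ahead and perp < r_avoid and np.linalg.norm(to_obs) < r_avoid + 1.0

def velocity(p, direction=1.0):
    if blocks(p):
        xo, yo = p - obstacle
        k = r_avoid**2 - xo**2 - yo**2
        v = np.array([ direction * yo + xo * k,     # limit-cycle field: converges to the
                      -direction * xo + yo * k])    # circle of radius r_avoid and circulates
    else:
        v = goal - p                                # plain attraction to the goal
    n = np.linalg.norm(v)
    return v / n if n > 1e-9 else v

p, dt, path = np.array([0.0, 0.0]), 0.05, []
for _ in range(600):
    path.append(p.copy())
    if np.linalg.norm(goal - p) < 0.1:
        break
    p = p + dt * velocity(p)

print("steps taken:", len(path))
print("closest approach to the obstacle: %.2f" % min(np.linalg.norm(q - obstacle) for q in path))
```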
Procedia PDF Downloads 429