Search results for: subsidiary performance
7675 Towards a Simulation Model to Ensure the Availability of Machines in Maintenance Activities
Authors: Maryam Gallab, Hafida Bouloiz, Youness Chater, Mohamed Tkiouat
Abstract:
The aim of this paper is to present a model based on multi-agent systems to manage maintenance activities and to ensure the reliability and availability of machines with only the required resources (operators, tools). The interest of simulation is to cope with the complexity of the system and to obtain results without cost or wasted time. An implementation of the model is carried out on the AnyLogic platform to display the defined performance indicators.
Keywords: maintenance, complexity, simulation, multi-agent systems, AnyLogic platform
Procedia PDF Downloads 305
7674 Construction of the Large Scale Biological Networks from Microarrays
Authors: Fadhl Alakwaa
Abstract:
One of the long-standing goals of systems biology is understanding gene-gene interactions. Hence, gene regulatory networks (GRN) need to be constructed to understand disease ontology and to reduce the cost of drug development. To construct gene regulatory networks from gene expression data, we need to overcome many challenges, such as data denoising and dimensionality. In this paper, we develop an integrated system to reduce the data dimension and remove the noise. The network generated by our system was validated against available interaction databases and compared to previous methods. The results revealed the performance of our proposed method.
Keywords: gene regulatory network, biclustering, denoising, system biology
Procedia PDF Downloads 239
7673 Permeable Reactive Pavement for Controlling the Transport of Benzene, Toluene, Ethyl-Benzene, and Xylene (BTEX) Contaminants
Authors: Shengyi Huang, Chenju Liang
Abstract:
Volatile organic compounds such as benzene, toluene, ethyl-benzene, and xylene (BTEX) are common contaminants in the environment, which can originate from asphalt concrete or vehicle exhaust emissions. BTEX may invade the subsurface environment via wet and dry atmospheric deposition. If no means are available for controlling the contaminants' fate and transport, they can extensively harm the natural environment. In the 1st phase of this study, various adsorbents were screened to select a suitable additive for the porous asphalt mixture. In the 2nd phase, the selected adsorbent was incorporated into the design of porous asphalt concrete (PAC) to produce the permeable reactive pavement (PRP), which was subsequently tested, in the 3rd phase, for its potential to adsorb aqueous BTEX as compared to the PAC. The PRP was prepared according to the following steps: firstly, the suitable adsorbent was chosen based on the analytical results of specific surface area analysis, thermogravimetric analysis, adsorption kinetics and isotherms, and thermodynamic analysis; secondly, the coarse aggregate, fine aggregate, filler, asphalt, and fiber were tested to meet regulated specifications (e.g., water absorption, soundness, viscosity, etc.) for preparing the PRP; thirdly, the amount of adsorbent additive in the PRP was determined; fourthly, the prepared PAC and PRP were examined for their physical properties (e.g., abrasion loss, drain-down loss, Marshall stability, Marshall flow, dynamic stability, etc.). The PRP showed better physical performance than the traditional PAC. Finally, Marshall specimen column tests were conducted to explore the adsorption capacities of the PAC and PRPs. The BTEX adsorption capacities of the PRPs were higher than those of the traditional PAC.
In summary, the PRPs showed superior physical performance and adsorption capacities, which exhibits the potential of the PRP to be applied as a replacement for PAC to better control the transport of non-point source pollutants.
Keywords: porous asphalt concrete, volatile organic compounds, permeable reactive pavement, non-point source pollution
Procedia PDF Downloads 211
7672 Cognitive Relaying in Interference Limited Spectrum Sharing Environment: Outage Probability and Outage Capacity
Authors: Md Fazlul Kader, Soo Young Shin
Abstract:
In this paper, we consider a cognitive relay network (CRN) in which the primary receiver (PR) is protected by a peak transmit power $\bar{P}_{ST}$ and/or a peak interference power Q constraint. In addition, the interference from the primary transmitter (PT) is considered to show its impact on the performance of the CRN. We investigate the outage probability (OP) and outage capacity (OC) of the CRN by deriving closed-form expressions over the Rayleigh fading channel. Results show that both the OP and OC improve as the number of cooperative relay nodes increases, as well as when the PT is far away from the secondary receiver (SR).
Keywords: cognitive relay, outage, interference limited, decode-and-forward (DF)
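The outage event behind the OP metric is the instantaneous channel capacity dropping below a target rate. As a minimal illustration of this idea (a single Rayleigh-faded link, not the paper's closed-form CRN expressions), the probability can be estimated by Monte Carlo and checked against the textbook closed form:

```python
import random, math

def outage_probability(snr_db, rate=1.0, trials=200_000, seed=1):
    """Monte Carlo estimate of P[log2(1 + SNR*|h|^2) < rate] over a
    Rayleigh fading channel, where |h|^2 ~ Exponential(1)."""
    random.seed(seed)
    snr = 10 ** (snr_db / 10)
    threshold = 2 ** rate - 1          # outage when SNR*|h|^2 < 2^rate - 1
    outages = sum(1 for _ in range(trials)
                  if snr * random.expovariate(1.0) < threshold)
    return outages / trials

# For this single-link case the closed form is 1 - exp(-threshold/snr),
# which the estimate should match closely.
op_10db = outage_probability(10)
```

A higher SNR (e.g., the relay being closer, or more relays combining) drives the estimate down, mirroring the trend reported in the abstract.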
Procedia PDF Downloads 512
7671 Performance Analysis of Hierarchical Agglomerative Clustering in a Wireless Sensor Network Using Quantitative Data
Authors: Tapan Jain, Davender Singh Saini
Abstract:
Clustering is a useful mechanism in wireless sensor networks that helps to cope with scalability and data transmission problems. The basic aim of our research work is to provide efficient clustering using hierarchical agglomerative clustering (HAC). If the distance between the sensing nodes is calculated using their locations, then it is quantitative HAC. This paper compares the various agglomerative clustering techniques applied in a wireless sensor network using quantitative data. The simulations are done in MATLAB, and comparisons are made between the different protocols using dendrograms.
Keywords: routing, hierarchical clustering, agglomerative, quantitative, wireless sensor network
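The quantitative HAC described above can be sketched directly on node coordinates. The following single-linkage merge loop (a naive pure-Python illustration, not the paper's MATLAB simulation) produces the merge history from which a dendrogram is drawn:

```python
import math

def single_linkage(points):
    """Agglomerative clustering (single linkage) on 2-D sensor coordinates.
    Returns the merge history: one (cluster_a, cluster_b, distance) per step."""
    clusters = {i: [p] for i, p in enumerate(points)}
    merges = []
    next_id = len(points)
    while len(clusters) > 1:
        best = None
        for a in clusters:
            for b in clusters:
                if a < b:
                    # single linkage: closest pair across the two clusters
                    d = min(math.dist(p, q)
                            for p in clusters[a] for q in clusters[b])
                    if best is None or d < best[2]:
                        best = (a, b, d)
        a, b, d = best
        merges.append(best)
        clusters[next_id] = clusters.pop(a) + clusters.pop(b)
        next_id += 1
    return merges

# Two tight pairs of nodes far from each other
history = single_linkage([(0, 0), (0, 1), (5, 5), (5, 6)])
```

Each tuple in the history is one horizontal bar of the dendrogram; comparing these merge heights across protocols is what the abstract's comparison amounts to.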
Procedia PDF Downloads 615
7670 Mechanical Properties of Hybrid Cement Based Mortars Containing Two Biopolymers
Authors: Z. Abdollahnejad, M. Kheradmand, F. Pacheco-Torgal
Abstract:
The use of bio-based admixtures in construction materials is a recent trend that is gaining momentum. However, to our knowledge, no studies have been reported concerning the use of biopolymers in hybrid cement based mortars. This paper reports experimental results on the influence of the mix design of 43 hybrid cement mortars containing two different biopolymers on their mechanical performance. The results show that the biopolymer carrageenan is much more effective than the biopolymer xanthan in increasing compressive strength. An optimum biopolymer content was found.
Keywords: waste reuse, fly ash, waste glass, hybrid cement, biopolymers, mechanical strength
Procedia PDF Downloads 302
7669 Connected Objects with Optical Rectenna for Wireless Information Systems
Authors: Chayma Bahar, Chokri Baccouch, Hedi Sakli, Nizar Sakli
Abstract:
Harvesting and transporting optical and radiofrequency signals is a topical subject with multiple challenges. In this paper, we present an optical rectenna system. We propose a hybrid solar cell antenna system for 5G mobile communications networks, together with a rectifying circuit. A parametric study is done to follow the influence of load resistance and input power on the optical rectenna system performance. Thus, we propose a solar cell antenna structure in the 2.45 GHz frequency band of the future 5G standard.
Keywords: antenna, IoT, optical rectenna, solar cell
Procedia PDF Downloads 178
7668 Synthesis of High-Pressure Performance Adsorbent from Coconut Shells Polyetheretherketone for Methane Adsorption
Authors: Umar Hayatu Sidik
Abstract:
The use of liquid petroleum fuels (petrol and diesel) for transportation causes emissions of greenhouse gases (GHGs), whereas natural gas (NG) reduces these emissions. At present, compression and liquefaction are the most mature technologies used for transportation. For transportation use, compression requires high pressure (200–300 bar), while liquefaction is impractical. A relatively low pressure of 30–40 bar is achievable by adsorbed natural gas (ANG) to store nearly as much gas as compressed natural gas (CNG). In this study, adsorbents for high-pressure adsorption of methane (CH₄) were prepared from coconut shells and polyetheretherketone (PEEK) using potassium hydroxide (KOH) and microwave-assisted activation. Design-Expert software version 7.1.6 was used for the optimization and prediction of the preparation conditions of the adsorbents for CH₄ adsorption. The effects of microwave power, activation time, and quantity of PEEK on the adsorbents' performance toward CH₄ adsorption were investigated. The adsorbents were characterized by Fourier transform infrared spectroscopy (FTIR), thermogravimetric (TG) and derivative thermogravimetric (DTG) analysis, and scanning electron microscopy (SEM). The CH₄ adsorption capacities of the adsorbents were determined using the volumetric method at pressures of 5, 17, and 35 bar, at ambient temperature and at 5 °C, respectively. Isotherm and kinetics models were used to validate the experimental results. The optimum preparation conditions were found to be 15 wt% PEEK, 3 minutes activation time, and 300 W microwave power. The highest CH₄ uptake of 9.7045 mmol CH₄ adsorbed/g adsorbent was recorded by M33P15 (300 W microwave power, 3 min activation time, and 15 wt% PEEK) among the sorbents at ambient temperature and 35 bar. The CH₄ equilibrium data are well correlated with the Sips, Toth, Freundlich, and Langmuir models.
The isotherm study revealed that the Sips isotherm has the best fit, while the kinetics study revealed that the pseudo-second-order kinetic model best describes the adsorption process. In all scenarios studied, a decrease in temperature led to an increase in adsorption. The adsorbent (M33P15) maintained its stability even after seven adsorption/desorption cycles. The findings revealed the potential of coconut shell-PEEK as a CH₄ adsorbent.
Keywords: adsorption, desorption, activated carbon, coconut shells, polyetheretherketone
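The isotherm fitting mentioned above can be illustrated with the Langmuir model, whose linearized form P/q = 1/(qm·b) + P/qm reduces the fit to ordinary least squares. The sketch below uses synthetic data (the qm and b values are illustrative, not the measured coconut shell-PEEK parameters):

```python
def fit_langmuir(pressures, uptakes):
    """Fit the Langmuir isotherm q = qm*b*P / (1 + b*P) via its linearized
    form P/q = 1/(qm*b) + P/qm, using ordinary least squares."""
    xs = pressures
    ys = [p / q for p, q in zip(pressures, uptakes)]   # P/q versus P is linear
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    qm = 1 / slope            # saturation capacity
    b = slope / intercept     # affinity constant
    return qm, b

# Synthetic equilibrium data generated from qm=10, b=0.2 at the paper's pressures
P = [5, 17, 35]
q = [10 * 0.2 * p / (1 + 0.2 * p) for p in P]
qm, b = fit_langmuir(P, q)
```

With noiseless synthetic data the fit recovers qm and b exactly; with real isotherm data the residuals of fits like this are what ranks Sips above Langmuir.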
Procedia PDF Downloads 67
7667 Recommendations for Data Quality Filtering of Opportunistic Species Occurrence Data
Authors: Camille Van Eupen, Dirk Maes, Marc Herremans, Kristijn R. R. Swinnen, Ben Somers, Stijn Luca
Abstract:
In ecology, species distribution models are commonly implemented to study species-environment relationships. These models increasingly rely on opportunistic citizen science data when high-quality species records collected through standardized recording protocols are unavailable. While these opportunistic data are abundant, uncertainty is usually high, e.g., due to observer effects or a lack of metadata. Data quality filtering is often used to reduce these types of uncertainty in an attempt to increase the value of studies relying on opportunistic data. However, filtering should not be performed blindly. In this study, recommendations are built for data quality filtering of opportunistic species occurrence data that are used as input for species distribution models. Using an extensive database of 5.7 million citizen science records from 255 species in Flanders, the impact on model performance was quantified by applying three data quality filters, and these results were linked to species traits. More specifically, presence records were filtered based on record attributes that provide information on the observation process or post-entry data validation, and changes in the area under the receiver operating characteristic (AUC), sensitivity, and specificity were analyzed using the Maxent algorithm with and without filtering. Controlling for sample size enabled us to study the combined impact of data quality filtering, i.e., the simultaneous impact of an increase in data quality and a decrease in sample size. Further, the variation among species in their response to data quality filtering was explored by clustering species based on four traits often related to data quality: commonness, popularity, difficulty, and body size. 
Findings show that model performance is affected by i) the quality of the filtered data, ii) the proportional reduction in sample size caused by filtering and the remaining absolute sample size, and iii) a species 'quality profile', resulting from a species classification based on the four traits related to data quality. The findings resulted in recommendations on when and how to filter volunteer-generated and opportunistically collected data. This study confirms that correctly processed citizen science data can make a valuable contribution to ecological research and species conservation.
Keywords: citizen science, data quality filtering, species distribution models, trait profiles
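The AUC used to quantify filtering impact is the probability that a randomly chosen presence record is scored above a randomly chosen background point. A minimal rank-based computation (illustrative of the metric only, not the Maxent pipeline):

```python
def auc(scores, labels):
    """Rank-based AUC: the probability that a random positive outscores a
    random negative, counting ties as half a win."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Computing this before and after applying a quality filter, at a fixed sample size, is the shape of the comparison the abstract describes.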
Procedia PDF Downloads 203
7666 Thermal Evaluation of Printed Circuit Board Design Options and Voids in Solder Interface by a Simulation Tool
Authors: B. Arzhanov, A. Correia, P. Delgado, J. Meireles
Abstract:
Quad Flat No-Lead (QFN) packages have become very popular for tuners, converters, and audio amplifiers, among other applications, needing efficient power dissipation in small footprints. The semiconductor junction temperature (TJ) is a critical parameter for product quality, and to ensure that the die temperature does not exceed the maximum allowable TJ, a thermal analysis conducted in an early development phase is essential to avoid repeated re-design processes, with their huge losses in cost and time. A simulation tool capable of estimating the die temperature of components with a QFN package was developed. It allows establishing a non-empirical way to define an acceptance criterion for the amount of voids in the solder interface between the exposed pad and the Printed Circuit Board (PCB), to be applied during the industrialization process, and evaluating the impact of PCB design parameters. Targeting the PCB layout designer as the end user of the application, a user-friendly graphical interface (GUI) was implemented, allowing the user to introduce design parameters in a convenient and secure way while hiding all the complexity of the finite element simulation process. This cost-effective tool makes the simulation process transparent and provides useful outputs in acceptable time, which can be adopted by PCB designers, preventing potential risks during the design stage and making the product economically efficient by not oversizing it. This article gathers relevant information related to the design and implementation of the developed tool, presenting a parametric study conducted with it. The simulation tool was experimentally validated using a Thermal Test Chip (TTC) in a QFN open cavity, in order to measure the junction temperature (TJ) directly on the die under controlled and known conditions. We provide a short overview of standard thermal solutions and their impacts in exposed-pad packages (i.e., QFN), accurately describe the methods and techniques that the system designer should use to achieve optimum thermal performance, and demonstrate the effect of system-level constraints on the thermal performance of the design.
Keywords: QFN packages, exposed pads, junction temperature, thermal management and measurements
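The acceptance criterion rests on keeping TJ below its maximum allowable value. A first-order estimate behind such checks is the standard package relation TJ = TA + P·θJA (a datasheet-level approximation, not the tool's finite element model; the numbers are illustrative):

```python
def junction_temperature(ambient_c, power_w, theta_ja_c_per_w):
    """First-order die temperature estimate: TJ = TA + P * theta_JA,
    where theta_JA is the junction-to-ambient thermal resistance."""
    return ambient_c + power_w * theta_ja_c_per_w

# Illustrative values: 25 °C ambient, 2 W dissipated, theta_JA = 30 °C/W
tj = junction_temperature(25.0, 2.0, 30.0)
```

Solder voids under the exposed pad effectively raise θJA, which is why the tool ties a void-area criterion back to the TJ limit.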
Procedia PDF Downloads 256
7665 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System
Authors: Ben Soltane Cheima, Ittansa Yonas Kelbesa
Abstract:
Speaker identification (SI) is the task of establishing the identity of an individual based on his/her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still need for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using the Mel Frequency Cepstrum Coefficient (MFCC) for feature extraction and a suitable combination of vector quantization (VQ) and the Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of feature extraction yields a better and more robust automatic speaker identification system. An investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for the initialization of the GMM, for estimating the underlying parameters in the EM step, also improved the convergence rate and system performance. The system further uses a relative index as a confidence measure in case of contradiction between the GMM and VQ identification results.
Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
Keywords: feature extraction, speaker modeling, feature matching, Mel frequency cepstrum coefficient (MFCC), Gaussian mixture model (GMM), vector quantization (VQ), Linde-Buzo-Gray (LBG), expectation maximization (EM), pre-processing, voice activity detection (VAD), short time energy (STE), background noise statistical modeling, closed-set text-independent speaker identification system (CISI)
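The LBG initialization mentioned above grows a codebook by repeatedly splitting centroids and refining them with Lloyd iterations. A minimal scalar sketch (illustrative only; real MFCC codebooks are vector-valued):

```python
def lbg(data, codebook_size, eps=0.01, iters=20):
    """Linde-Buzo-Gray: grow a scalar codebook by splitting each centroid
    into (1+eps)c and (1-eps)c, then refine with Lloyd iterations."""
    codebook = [sum(data) / len(data)]           # start from the global mean
    while len(codebook) < codebook_size:
        codebook = [c * (1 + s) for c in codebook for s in (eps, -eps)]
        for _ in range(iters):                   # Lloyd refinement
            cells = {i: [] for i in range(len(codebook))}
            for x in data:
                i = min(range(len(codebook)), key=lambda k: abs(x - codebook[k]))
                cells[i].append(x)
            codebook = [sum(v) / len(v) if v else codebook[i]
                        for i, v in cells.items()]
    return sorted(codebook)

# Two well-separated groups of feature values collapse to two centroids
cb = lbg([1.0, 1.1, 0.9, 5.0, 5.1, 4.9], 2)
```

Using such centroids as the initial GMM means is the initialization strategy whose convergence benefit the abstract reports.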
Procedia PDF Downloads 309
7664 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks
Authors: Fazıl Gökgöz, Fahrettin Filiz
Abstract:
Load forecasting has become crucial in recent years and popular in the forecasting area. Many different power forecasting models have been tried out for this purpose. Electricity load forecasting is necessary for energy policies and healthy, reliable grid systems. Effective power forecasting of renewable energy load leads decision makers to minimize the costs of electric utilities and power plants. Forecasting tools are required that can predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. We present models for predicting renewable energy loads based on deep neural networks, especially Long Short-Term Memory (LSTM) algorithms. Deep learning allows multiple layers of models to learn representations of data, and LSTM algorithms are able to store information for long periods of time. Deep learning models have recently been used to forecast renewable energy sources, such as predicting wind and solar power. Historical load and weather information represent the most important input variables for power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution. The models use publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies have been carried out with these data via a deep neural network approach, including the LSTM technique, for Turkish electricity markets. 432 different models were created by changing the layers, cell count, and dropout. The adaptive moment estimation (ADAM) algorithm was used for training as a gradient-based optimizer instead of stochastic gradient descent (SGD). ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared according to MAE (Mean Absolute Error) and MSE (Mean Squared Error).
The best MAE results among the 432 tested models are 0.66, 0.74, 0.85, and 1.09. The forecasting performance of the proposed LSTM models gives successful results compared to the literature.
Keywords: deep learning, long short-term memory, energy, renewable energy load forecasting
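The MAE and MSE used to rank the 432 models are straightforward to compute; a sketch with hypothetical hourly loads (not the Turkish market data):

```python
def mae(actual, predicted):
    """Mean Absolute Error over paired observations."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mse(actual, predicted):
    """Mean Squared Error over paired observations."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical hourly loads and forecasts from two candidate models
actual  = [10.0, 12.0, 11.0, 13.0]
model_a = [10.5, 11.5, 11.5, 12.5]
model_b = [11.0, 13.0, 10.0, 14.0]
assert mae(actual, model_a) < mae(actual, model_b)   # model A would be kept
```

MSE penalizes large misses more heavily than MAE, which is why the abstract reports both when ranking models.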
Procedia PDF Downloads 266
7663 On Coverage Probability of Confidence Intervals for the Normal Mean with Known Coefficient of Variation
Authors: Suparat Niwitpong, Sa-aat Niwitpong
Abstract:
Statistical inference for the normal mean with known coefficient of variation has been investigated recently. This situation occurs normally in environmental and agricultural experiments, when the scientist knows the coefficient of variation of the experiment. In this paper, we construct new confidence intervals for the normal population mean with known coefficient of variation. We also derive analytic expressions for the coverage probability of each confidence interval. To confirm our theoretical results, a Monte Carlo simulation is used to assess the performance of these intervals based on their coverage probabilities.
Keywords: confidence interval, coverage probability, expected length, known coefficient of variation
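Coverage probability can itself be estimated by Monte Carlo, as the abstract proposes. The sketch below checks the standard z-interval with sigma = cv·mu treated as known (an illustrative baseline, not the paper's new intervals):

```python
import random, statistics, math

def coverage(mu, cv, n=30, trials=20_000, z=1.96, seed=7):
    """Monte Carlo coverage probability of the z-interval
    xbar +/- z * sigma / sqrt(n), with sigma = cv * mu known."""
    random.seed(seed)
    sigma = cv * mu
    half = z * sigma / math.sqrt(n)
    hits = 0
    for _ in range(trials):
        xbar = statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
        hits += (xbar - half <= mu <= xbar + half)
    return hits / trials
```

For this baseline the nominal 95% level is attained; the paper's contribution is intervals that exploit the known CV while keeping coverage near nominal with shorter expected length.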
Procedia PDF Downloads 392
7662 Emergency Physician Performance for Hydronephrosis Diagnosis and Grading Compared with Radiologist Assessment in Renal Colic: The EPHyDRA Study
Authors: Sameer A. Pathan, Biswadev Mitra, Salman Mirza, Umais Momin, Zahoor Ahmed, Lubna G. Andraous, Dharmesh Shukla, Mohammed Y. Shariff, Magid M. Makki, Tinsy T. George, Saad S. Khan, Stephen H. Thomas, Peter A. Cameron
Abstract:
Study objective: Emergency physicians' (EPs) ability to identify hydronephrosis on point-of-care ultrasound (POCUS) has been assessed in the past using CT scan as the reference standard. We aimed to assess EP interpretation of POCUS to identify and grade hydronephrosis in a direct comparison with the consensus interpretation of POCUS by radiologists, and also to compare EP and radiologist performance using CT scan as the criterion standard. Methods: Using data from a POCUS databank, a prospective interpretation study was conducted at an urban academic emergency department. All POCUS exams were performed on patients presenting with renal colic to the ED. Institutional approval was obtained for conducting this study. All analyses were performed using Stata MP 14.0 (StataCorp, College Station, Texas). Results: A total of 651 patients were included, with paired sets of renal POCUS video clips and CT scans performed at the same ED visit. Hydronephrosis was reported in 69.6% of POCUS exams by radiologists and 72.7% of CT scans (p=0.22). The κ for consensus interpretation of POCUS between the radiologists to detect hydronephrosis was 0.77 (0.72 to 0.82), and the weighted κ for grading hydronephrosis was 0.82 (0.72 to 0.90), interpreted as good to very good. Using CT scan findings as the criterion standard, EPs had an overall sensitivity of 81.1% (95% CI: 79.6% to 82.5%), specificity of 59.4% (95% CI: 56.4% to 62.5%), PPV of 84.3% (95% CI: 82.9% to 85.7%), and NPV of 53.8% (95% CI: 50.8% to 56.7%); compared to radiologist sensitivity of 85.0% (95% CI: 82.5% to 87.2%), specificity of 79.7% (95% CI: 75.1% to 83.7%), PPV of 91.8% (95% CI: 89.8% to 93.5%), and NPV of 66.5% (95% CI: 61.8% to 71.0%). Testing for a report of moderate or high-grade hydronephrosis, the specificity of EPs was 94.6% (95% CI: 93.7% to 95.4%), rising to 99.2% (95% CI: 98.9% to 99.5%) for identifying severe hydronephrosis alone.
Conclusion: EP POCUS interpretations were comparable to those of the radiologists for identifying moderate to severe hydronephrosis, using CT scan results as the criterion standard. Among patients with moderate or high pre-test probability of ureteric calculi, as calculated by the STONE score, the presence of moderate to severe (+LR 6.3 and -LR 0.69) or severe hydronephrosis (+LR 54.4 and -LR 0.57) was highly diagnostic of stone disease. Low-dose CT is indicated in such patients for evaluation of stone size and location.
Keywords: renal colic, point-of-care, ultrasound, bedside, emergency physician
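The sensitivity, specificity, and likelihood ratios reported above all derive from a 2x2 table. A minimal sketch (the counts are hypothetical, not the EPHyDRA data):

```python
def diagnostics(tp, fp, fn, tn):
    """Sensitivity, specificity, and likelihood ratios from a 2x2 table
    of true/false positives and negatives against a criterion standard."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {"sensitivity": sens, "specificity": spec,
            "LR+": sens / (1 - spec),    # how much a positive raises the odds
            "LR-": (1 - sens) / spec}    # how much a negative lowers them

# Hypothetical counts for illustration
d = diagnostics(tp=80, fp=10, fn=20, tn=90)
```

A large +LR (like the 54.4 reported for severe hydronephrosis) means a positive finding shifts the post-test odds dramatically, which is the basis of the conclusion.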
Procedia PDF Downloads 284
7661 On the Influence of Sleep Habits for Predicting Preterm Births: A Machine Learning Approach
Authors: C. Fernandez-Plaza, I. Abad, E. Diaz, I. Diaz
Abstract:
Births occurring before the 37th week of gestation are considered preterm births. A threat of preterm labor is defined as the beginning of regular uterine contractions, dilation, and cervical effacement between 23 and 36 gestation weeks. To the authors' best knowledge, the factors that determine the onset of birth are not yet completely defined. In particular, the influence of sleep habits on preterm births is weakly studied. The aim of this study is to develop a model to predict the factors affecting premature delivery in pregnancy, based on potential risk factors, including those derived from sleep habits and light exposure at night (introduced as 12 variables obtained by a telephone survey using two questionnaires previously used by other authors). Thus, three groups of variables were included in the study (maternal, fetal, and sleep habits). The study was approved by the Research Ethics Committee of the Principado de Asturias (Spain). An observational, retrospective, and descriptive study was performed on 481 births between January 1, 2015 and May 10, 2016 in the University Central Hospital of Asturias (Spain). A statistical analysis using SPSS was carried out to compare qualitative and quantitative variables between preterm and term deliveries. The chi-square test was applied to qualitative variables and the t-test to quantitative variables. Statistically significant differences (p < 0.05) between preterm and term births were found for primiparity, multiparity, kind of conception, place of residence, premature rupture of membranes, and interruptions during the night. In addition to the statistical analysis, machine learning methods were tested in search of a prediction model. In particular, tree-based models were applied, as their trade-off between performance and interpretability is especially suitable for this study. C5.0, recursive partitioning, random forest, and treebag models were analysed using the caret R package.
Cross-validation with 10 folds and parameter tuning were applied to optimize the methods. In addition, different noise reduction methods were applied to the initial data using the NoiseFiltersR package. The best performance was obtained by the C5.0 method, with accuracy 0.91, sensitivity 0.93, specificity 0.89, and precision 0.91. Some well-known preterm birth factors were identified: cervix dilation, maternal BMI, premature rupture of membranes, and nuchal translucency analysis in the first trimester. The model also identifies other new factors related to sleep habits, such as light through the window, bedtime on working days, usage of electronic devices before sleeping from Mondays to Fridays, or a change of sleeping habits reflected in the number of hours, the depth of sleep, or the lighting of the room. "IF dilation <= 2.95 AND usage of electronic devices before sleeping from Mondays to Fridays = YES AND change of sleeping habits = YES, THEN preterm" is one of the predicting rules obtained by C5.0. In this work, a model for predicting preterm births is developed, based on machine learning together with noise reduction techniques. The method maximizing the performance is the one selected. This model shows the influence of variables related to sleep habits on preterm prediction.
Keywords: machine learning, noise reduction, preterm birth, sleep habit
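The quoted C5.0 rule can be read directly as a predicate; expressed in code (the field names and the unit of dilation are illustrative, the thresholds are those quoted in the abstract):

```python
def preterm_rule(dilation, devices_before_sleep, changed_sleep_habits):
    """One predicting rule extracted by C5.0, as quoted in the abstract:
    IF dilation <= 2.95 AND weekday device use before sleep = YES
    AND change of sleeping habits = YES, THEN preterm."""
    return (dilation <= 2.95
            and devices_before_sleep
            and changed_sleep_habits)

# A case matching all three conditions fires the rule
flagged = preterm_rule(2.5, True, True)
```

This rule form is exactly the interpretability advantage of tree-based models mentioned in the abstract: each prediction can be traced to an explicit conjunction of conditions.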
Procedia PDF Downloads 147
7660 The Effect of Stent Coating on the Stent Flexibility: Comparison of Covered Stent and Bare Metal Stent
Authors: Keping Zuo, Foad Kabinejadian, Gideon Praveen Kumar Vijayakumar, Fangsen Cui, Pei Ho, Hwa Liang Leo
Abstract:
Carotid artery stenting (CAS) is the standard procedure for patients with severe carotid stenosis at high risk for carotid endarterectomy (CEA). A major drawback of CAS is the higher incidence of procedure-related stroke compared with the traditional open surgical treatment for carotid stenosis, CEA, even with the use of embolic protection devices (EPD). As the currently available bare metal stents cannot address this problem, our research group developed a novel preferential covered stent for the carotid artery that aims to prevent friable fragments of atherosclerotic plaques from flowing into the cerebral circulation, while maintaining the flow of the external carotid artery. Preliminary animal studies have demonstrated the potential of this novel covered-stent design for the treatment of carotid atherosclerotic stenosis. The purpose of this study is to evaluate the effect of the membrane coating on the stent flexibility in order to improve the clinical performance of our novel covered stents. A total of 21 stents were evaluated in this study: 15 self-expanding bare nitinol stents and 6 PTFE-covered stents. 10 of the bare stents were coated with 11%, 16%, and 22% polyurethane (PU); 4%, 6.25%, and 11% EE; as well as 22% PU plus 5 μm Parylene. Different laser cutting designs were applied to 4 of the PTFE-covered stents. All the stents, with or without the covering membrane, were subjected to a three-point flexural test. The stents were placed on two supports 30 mm apart, and the actuator applied a force at the exact middle of the two supports with a loading pin of radius 2.5 mm. The loading pin displacement, the force, and the variation in stent shape were recorded for analysis. The flexibility of the stents was evaluated by the lumen area preservation at three bending displacement levels: 5 mm, 7 mm, and 10 mm. The lumen areas of all stents decreased with the increase of the displacement from 0 to 10 mm.
The bare stents were able to maintain 0.864 ± 0.015, 0.740 ± 0.025, and 0.597 ± 0.031 of the original lumen area at 5 mm, 7 mm, and 10 mm displacement, respectively. Among the covered stents, the stents with the EE coating membrane showed the best lumen area preservation (0.839 ± 0.005, 0.7334 ± 0.043, and 0.559 ± 0.014), whereas the values for the stents with PU and Parylene coating were only 0.662, 0.439, and 0.305. Bending stiffness was also calculated and compared. These results provided optimal material information, which is crucial for enhancing the clinical performance of our novel covered stents.
Keywords: carotid artery, covered stent, nonlinear, hyperelastic, stress, strain
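The flexibility metrics above reduce to two simple formulas: lumen area preservation as a ratio to the undeformed area, and flexural rigidity from the standard three-point bending relation EI = F·L³/(48·δ) for the 30 mm span (the input values below are illustrative, not the measured stent data):

```python
def lumen_preservation(original_area, deformed_area):
    """Fraction of the original lumen area retained at a given bending
    displacement; the flexibility metric reported in the study."""
    return deformed_area / original_area

def flexural_rigidity(force_n, deflection_m, span_m=0.030):
    """EI from the standard three-point bending beam formula
    EI = F * L^3 / (48 * delta), with the 30 mm support span as default."""
    return force_n * span_m ** 3 / (48 * deflection_m)

# Illustrative inputs: 10 mm^2 lumen deformed to 8.64 mm^2; 1 N at 5 mm deflection
ratio = lumen_preservation(10.0, 8.64)
ei = flexural_rigidity(1.0, 0.005)
```

Comparing these two quantities across coatings is what separates the EE membrane (best preservation) from the PU plus Parylene coating in the results.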
Procedia PDF Downloads 295
7659 Experimental Investigation on the Lithium-Ion Battery Thermal Management System Based on Micro Heat Pipe Array in High Temperature Environment
Authors: Ruyang Ren, Yaohua Zhao, Yanhua Diao
Abstract:
The intermittent and unstable characteristics of renewable energy such as solar energy can be effectively solved through battery energy storage system. Lithium-ion battery is widely used in battery energy storage system because of its advantages of high energy density, small internal resistance, low self-discharge rate, no memory effect and long service life. However, the performance and service life of lithium-ion battery is seriously affected by its operating temperature. Thus, the safety operation of the lithium-ion battery module is inseparable from an effective thermal management system (TMS). In this study, a new type of TMS based on micro heat pipe array (MHPA) for lithium-ion battery is established, and the TMS is applied to a battery energy storage box that needs to operate at a high temperature environment of 40 °C all year round. MHPA is a flat shape metal body with high thermal conductivity and excellent temperature uniformity. The battery energy storage box is composed of four battery modules, with a nominal voltage of 51.2 V, a nominal capacity of 400 Ah. Through the excellent heat transfer characteristics of the MHPA, the heat generated by the charge and discharge process can be quickly transferred out of the battery module. In addition, if only the MHPA cannot meet the heat dissipation requirements of the battery module, the TMS can automatically control the opening of the external fan outside the battery module according to the temperature of the battery, so as to further enhance the heat dissipation of the battery module. The thermal management performance of lithium-ion battery TMS based on MHPA is studied experimentally under different ambient temperatures and the condition to turn on the fan or not. 
Results show that at an ambient temperature of 40 °C with the fan off throughout charge and discharge, the maximum battery temperature in the energy storage box is 53.1 °C and the maximum temperature difference within a battery module is 2.4 °C. With the fan on throughout charge and discharge, the maximum temperature is reduced to 50.1 °C and the maximum temperature difference to 1.7 °C. The MHPA-based TMS therefore not only keeps the maximum battery temperature below 55 °C but also ensures excellent temperature uniformity across the battery module. In conclusion, the MHPA-based lithium-ion battery TMS can ensure safe and stable operation of the battery energy storage box in a high-temperature environment.
Keywords: heat dissipation, lithium-ion battery thermal management, micro heat pipe array, temperature uniformity
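The temperature-triggered fan control described above can be sketched as a simple hysteresis rule. This is a minimal illustration only; the switching thresholds below are assumed values, not the paper's actual control points.

```python
# Hypothetical sketch of the fan-control rule: the fan switches on above an
# upper temperature threshold and off again below a lower one (hysteresis),
# so it does not chatter near a single set point.
def make_fan_controller(t_on=45.0, t_off=42.0):
    state = {"fan_on": False}

    def update(battery_temp_c):
        if battery_temp_c >= t_on:
            state["fan_on"] = True
        elif battery_temp_c <= t_off:
            state["fan_on"] = False
        return state["fan_on"]

    return update

ctl = make_fan_controller()
ctl(40.0)   # below both thresholds -> fan stays off
ctl(46.0)   # above t_on -> fan turns on
ctl(43.0)   # between thresholds -> fan stays on (hysteresis)
```

The hysteresis band prevents rapid on/off cycling when the battery temperature hovers near a single threshold.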
Procedia PDF Downloads 181
7658 Seamless Mobility in Heterogeneous Mobile Networks
Authors: Mohab Magdy Mostafa Mohamed
Abstract:
The objective of this paper is to introduce a vertical handover (VHO) algorithm between wireless LANs (WLANs) and LTE mobile networks. The proposed algorithm is based on fuzzy control theory and takes into consideration power level, subscriber velocity, and target cell load, instead of power level alone as in traditional algorithms. Simulation results show that network performance, in terms of the number of handovers and the handover occurrence distance, is improved.
Keywords: vertical handover, fuzzy control theory, power level, speed, target cell load
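As an illustration of how the three inputs named above might be combined, the sketch below uses triangular membership functions and a weighted aggregation in place of a full fuzzy rule base. All membership shapes, ranges, and weights are illustrative assumptions, not the authors' design.

```python
# Fuzzy-style VHO preference sketch: three crisp inputs are fuzzified with
# triangular membership functions and blended into one 0-1 score.
def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def wlan_preference(rss_dbm, speed_kmh, wlan_load):
    """Return a 0-1 preference for handing over to the WLAN."""
    good_signal = tri(rss_dbm, -85, -60, -30)     # strong WLAN signal
    low_speed   = tri(speed_kmh, -1, 0, 30)       # WLAN suits slow users
    low_load    = tri(wlan_load, -0.1, 0.0, 0.8)  # lightly loaded target cell
    # A weighted sum stands in for full fuzzy inference and defuzzification.
    return 0.5 * good_signal + 0.3 * low_speed + 0.2 * low_load

wlan_preference(-60, 5, 0.2)   # strong signal, slow user, light load -> high
wlan_preference(-90, 120, 0.9) # weak signal, fast user, busy cell -> 0.0
```

A handover would then be triggered when the preference crosses a chosen threshold, which avoids the ping-pong effect of a pure power-level comparison.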
Procedia PDF Downloads 353
7657 Evaluation of a Surrogate Based Method for Global Optimization
Authors: David Lindström
Abstract:
We evaluate the performance of a numerical method for the global optimization of expensive functions. The method uses a response surface to guide the search for the global optimum. This metamodel can be based on radial basis functions, kriging, or a combination of different models. We discuss how to set the cycling parameters of the optimization method to balance local and global search. We also discuss the potential problem of Runge oscillations in the response surface.
Keywords: expensive function, infill sampling criterion, kriging, global optimization, response surface, Runge phenomenon
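A minimal sketch of the response-surface idea, using a Gaussian radial basis function interpolant in one dimension: fit the surrogate to the expensive samples, then propose the surrogate minimizer as the next evaluation point. The stand-in objective, sample points, and shape parameter are illustrative assumptions, not the evaluated method itself.

```python
import numpy as np

# Gaussian-RBF response surface fitted to a few "expensive" samples,
# then minimized on a dense grid to propose the next infill point.
def rbf_surrogate(x_samp, y_samp, eps=3.0):
    phi = lambda r: np.exp(-(eps * r) ** 2)
    A = phi(np.abs(x_samp[:, None] - x_samp[None, :]))  # interpolation matrix
    w = np.linalg.solve(A, y_samp)                      # RBF weights
    return lambda x: phi(np.abs(x[:, None] - x_samp[None, :])) @ w

expensive_f = lambda x: (x - 0.7) ** 2        # stand-in for a costly function
x_samp = np.array([0.0, 0.3, 0.5, 0.9, 1.0])
surr = rbf_surrogate(x_samp, expensive_f(x_samp))

grid = np.linspace(0.0, 1.0, 1001)
x_next = grid[np.argmin(surr(grid))]          # infill point: surrogate minimum
```

In a full cycling scheme this greedy step would alternate with exploration steps (e.g. maximizing distance to existing samples) to balance local and global search.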
Procedia PDF Downloads 578
7656 Behavior Evaluation of an Anchored Wall
Authors: Polo G. Yohn Edison, Rocha F. Pedricto
Abstract:
This work presents a study of a retaining structure designed for the duplication of the FEPASA railway at the 74th km between Santos and São Paulo. This structure, an anchored retaining wall, was instrumented at the anchor heads with strain gauges in order to monitor its loads. Load measurements were taken during the performance test, at lock-off, and after the works were concluded. A decrease in anchor loads was observed immediately after lock-off, during construction, and after completion of the works, reaching a maximum load loss of 54%.
Keywords: instrumentation, strain gauges, retaining wall, anchors
Procedia PDF Downloads 495
7655 Validation of Two Field-Based Dynamic Balance Tests in the Activation of Selected Hip and Knee Stabilizer Muscles
Authors: Mariam A. Abu-Alim
Abstract:
The purpose of this study was to validate the muscle activation amplitudes of two field-based dynamic balance tests, also used as strengthening and motor control exercises, in the activation of selected hip and knee stabilizer muscles. Methods: Eighteen college-age female students (21±2 years; 65.6±8.7 kg; 169.7±8.1 cm) who participated in at least 30 minutes of physical activity on most days of the week volunteered. The wireless BIOPAC surface electromyography system (MP150, BIOPAC Systems Inc., California, USA) was used to record the activation of the gluteus medius and adductor magnus among the hip stabilizer muscles, and the hamstrings, quadriceps, and gastrocnemius among the knee stabilizer muscles. Surface electrodes (EL 503, BIOPAC Systems Inc.) connected to dual wireless EMG BioNomadix transmitters were placed on the selected muscles of each participant's dominant side. Manual muscle testing was performed to obtain the maximal voluntary isometric contraction (MVIC), to which all muscle activity data collected during the three reaching directions (anterior, posteromedial, posterolateral) of the Star Excursion Balance Test (SEBT) and the Y-Balance Test (YBT) could be normalized. All participants performed three trials for each reaching direction of the SEBT and the YBT. The dominant-leg trial, the dominant leg also being the stance leg, was selected for analysis. Results: The selected hip stabilizer muscles (gluteus medius, adductor magnus) both showed activation greater than 100% MVIC during performance of the SEBT in all three directions, whereas the selected knee stabilizer muscles showed activation greater than 100% MVIC and were significantly more activated during performance of the YBT in all three reaching directions. The results also showed that the posterolateral and posteromedial reaching directions of both dynamic balance tests produced greater activation levels, above 200% MVIC, for all tested muscles except the hamstrings.
Conclusion: The results of this study showed that the SEBT and the YBT elicit high levels of muscular activity in the hip and knee stabilizer muscles, and can therefore be used to improve strength and motor control and to reduce injury rates. This is relevant because injuries to these hip and knee stabilizer muscles represent up to 35% of all athletic injuries, depending on the type of sport.
Keywords: dynamic balance tests, electromyography, hip stabilizer muscles, knee stabilizer muscles
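The MVIC normalization step described above can be sketched as follows. RMS amplitude is used here as the amplitude measure and the signals are synthetic; the study's exact filtering and windowing parameters are not reproduced.

```python
import numpy as np

# Express trial EMG amplitude as a percentage of the amplitude recorded
# during a maximal voluntary isometric contraction (MVIC).
def rms(signal):
    """Root-mean-square amplitude of an EMG segment."""
    return float(np.sqrt(np.mean(np.square(signal))))

def percent_mvic(trial_emg, mvic_emg):
    """Trial activation normalized to the MVIC reference, in %MVIC."""
    return 100.0 * rms(trial_emg) / rms(mvic_emg)

mvic_trial = np.array([1.0, -1.0, 1.0, -1.0])    # reference contraction (synthetic)
reach_trial = np.array([1.5, -1.5, 1.5, -1.5])   # SEBT reach trial (synthetic)
activation = percent_mvic(reach_trial, mvic_trial)  # 150.0 %MVIC
```

Values above 100% MVIC, as reported in the abstract, simply mean the dynamic task drove the muscle harder than the isometric reference contraction.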
Procedia PDF Downloads 151
7654 An Approach on the Design of a Solar Cell Characterization Device
Authors: Christoph Mayer, Dominik Holzmann
Abstract:
This paper presents the development of a compact, portable and easy-to-handle solar cell characterization device. The presented device reduces the effort and cost of characterizing a single solar cell to a minimum. It enables realistic characterization of cells under sunlight within minutes. In the field of photovoltaic research, the common way to characterize a single solar cell or a module is to measure its current-voltage (I-V) curve. From this characteristic, the performance and the degradation rate can be determined, both of which are important to consumers and developers. The paper consists of a description of the system design, a summary of the measurement results, and an outline of further developments.
Keywords: solar cell, photovoltaics, PV, characterization
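As an illustration of what such a device extracts from a measured I-V curve, the sketch below computes the open-circuit voltage, short-circuit current, maximum power, and fill factor from a synthetic sweep. This is not data from, or code for, the device described; the sample curve is an assumed shape.

```python
import numpy as np

def iv_parameters(v, i):
    """Extract Voc, Isc, maximum power and fill factor from an I-V sweep."""
    p = v * i
    k = int(np.argmax(p))                            # maximum power point
    isc = float(np.interp(0.0, v, i))                # current at V = 0
    voc = float(np.interp(0.0, i[::-1], v[::-1]))    # voltage at I = 0
    ff = float(p[k] / (voc * isc))                   # fill factor
    return voc, isc, float(p[k]), ff

# Synthetic sweep standing in for a measured curve (Voc = 0.6 V, Isc = 3 A)
v = np.linspace(0.0, 0.6, 7)
i = 3.0 * (1.0 - (v / 0.6) ** 4)
voc, isc, p_max, ff = iv_parameters(v, i)
```

The fill factor (ratio of maximum power to the Voc x Isc product) is the usual single-number summary of cell quality that a degradation study would track over time.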
Procedia PDF Downloads 421
7653 Duration of Isolated Vowels in Infants with Cochlear Implants
Authors: Paris Binos
Abstract:
The present work investigates developmental aspects of the duration of isolated vowels in infants with normal hearing compared to those who received cochlear implants (CIs) before two years of age. Infants with normal hearing produced shorter vowel durations, a finding related to more mature production abilities. First isolated vowels appear during the protophonic stage as evidence of increasing motor and linguistic control. Vowel duration is a crucial factor in the transition from prelexical speech to normal adult speech. Despite current data for infants with normal hearing, more research is needed to unravel production skills in early-implanted children. Thus, isolated vowel productions by two congenitally hearing-impaired Greek infants (implantation ages 1:4-1:11; post-implant ages 0:6-1:3) were recorded and sampled for six months after implantation with a Nucleus-24. The results were compared with the productions of three normal-hearing infants (chronological ages 0:8-1:1). Vegetative data and vocalizations masked by external noise or sounds were excluded. Participants had no other disabilities, and the etiology of deafness was unknown. Prior to implantation, the infants had an average unaided hearing loss of 95-110 dB HL, while the post-implantation PTA decreased to 10-38 dB HL. The current research offers a methodology for processing prelinguistic productions based on a combination of acoustical and auditory analyses. Within this methodological framework, duration was measured on wideband spectrograms, from the onset of voicing to the end of the vowel. The end was marked by two co-occurring events: 1) the onset of aperiodicity, with a rapid change in amplitude in the waveform, and 2) a loss of formant energy. The cut-off level of significance was set at 0.05 for all tests.
Bonferroni post hoc tests indicated a significant difference between the mean vowel duration of infants wearing CIs and that of their normal-hearing peers: the mean vowel duration of the CI group was longer than that of the normal-hearing group (p = 0.000). These longitudinal findings contribute to the existing data on the performance of children wearing CIs at a very young age and also enrich the data on the Greek language. The weakness in CI users' performance described above is a challenge for future work on speech processing and CI processing strategies.
Keywords: cochlear implant, duration, spectrogram, vowel
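A toy sketch of a duration measurement is given below: onset and offset are approximated with a simple amplitude threshold on a synthetic vowel-like signal. The study itself used spectrogram-based criteria (onset of aperiodicity and loss of formant energy), which this simplification does not reproduce; the threshold value is an assumption.

```python
import numpy as np

# Duration between the first and last samples whose amplitude envelope
# exceeds a fraction of the peak amplitude.
def vowel_duration(signal, fs, thresh=0.1):
    env = np.abs(signal)
    above = np.flatnonzero(env >= thresh * env.max())
    return (above[-1] - above[0]) / fs       # seconds between onset and offset

fs = 1000
t = np.arange(fs) / fs                       # 1 s of samples
# A 100 Hz "vowel" lasting from 0.2 s to 0.5 s, silence elsewhere
sig = np.where((t >= 0.2) & (t < 0.5), np.sin(2 * np.pi * 100 * t), 0.0)
dur = vowel_duration(sig, fs)                # close to 0.3 s
```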
Procedia PDF Downloads 261
7652 Importance of Human Resources Training in an Information Age
Authors: A. Serap Fırat
Abstract:
The aim of this study is to present conceptually the relationship and interaction between human resources training and the information age. A rapid shift from an industrial society to an information society has occurred, and organizations have been seeking ways to cope with this change. Human resources policy and human capital with enhanced competence have a direct impact on work performance; therefore, this paper deals with the increased importance of human resource management, given that it nurtures human capital. Literature review and scanning are used as the method of this study. Both local and foreign literature and expert views are employed, as far as possible, in constructing the theoretical framework of the study.
Keywords: human resources, information age, education, organization, occupation
Procedia PDF Downloads 372
7651 Parallel Multisplitting Methods for DAEs
Authors: Ahmed Machmoum, Malika El Kyal
Abstract:
We consider an iterative parallel multisplitting method for differential algebraic equations. The main feature of the proposed idea is the use of an asynchronous form. We prove that the multisplitting technique can effectively accelerate the convergence of the iterative process. The main characteristic of an asynchronous mode is that a local algorithm does not have to wait for predetermined messages to become available. We allow some processors to communicate more frequently than others, and we allow the communication delays to be substantial and unpredictable. Note that synchronous algorithms, in the computer science sense, are particular cases of our formulation of asynchronous ones.
Keywords: computer, multisplitting methods, asynchronous mode, differential algebraic systems
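The multisplitting idea can be illustrated on a simple linear system, a stand-in for the DAE setting: two splittings of A are iterated and their updates blended with weights. This is a synchronous sketch; the asynchronous variant discussed above would let each splitting proceed with whatever iterate happens to be available, without waiting.

```python
import numpy as np

# Multisplitting iteration for A x = b: two splittings A = M_k - N_k are
# solved independently each sweep and their results recombined by weights.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
M1 = np.diag(np.diag(A))      # Jacobi-style splitting (diagonal part)
M2 = np.tril(A)               # Gauss-Seidel-style splitting (lower triangle)
weights = (0.5, 0.5)          # convex recombination weights

x = np.zeros(2)
for _ in range(60):
    x1 = np.linalg.solve(M1, b - (A - M1) @ x)   # local update, splitting 1
    x2 = np.linalg.solve(M2, b - (A - M2) @ x)   # local update, splitting 2
    x = weights[0] * x1 + weights[1] * x2        # weighted recombination

# x has converged to the true solution of A x = b
```

Each local solve is independent, which is what makes the scheme parallel; asynchrony simply drops the barrier between the two solves and the recombination.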
Procedia PDF Downloads 549
7650 Implications of Circular Economy on Users Data Privacy: A Case Study on Android Smartphones Second-Hand Market
Authors: Mariia Khramova, Sergio Martinez, Duc Nguyen
Abstract:
Modern electronic devices, particularly smartphones, are characterised by an extremely high environmental footprint and a short product lifecycle. Every year manufacturers release new models with ever higher performance, which pushes customers towards new purchases. As a result, millions of devices are accumulating in the urban mine. To tackle these challenges, the concept of the circular economy has been introduced to promote the repair, reuse and recycling of electronics. In this approach, electronic devices that previously ended up in landfills or households get a second life, thereby reducing the demand for new raw materials. Smartphone reuse is gradually gaining wider adoption, partly due to the price increase of flagship models, consequently boosting circular economy implementation. However, along with the reuse of a communication device, the circular economy approach needs to ensure that the data of the previous user are not 'reused' together with the device. This is especially important since modern smartphones are comparable to computers in terms of performance and the amount of data stored. These data range from pictures, videos and call logs to social security numbers, passport and credit card details, and from personal information to corporate confidential data. To assess how well data privacy requirements are followed on the second-hand smartphone market, a sample of 100 Android smartphones was purchased from IT Asset Disposition (ITAD) facilities responsible for data erasure and resale. Although the devices should not have stored any user data by the time they left the ITAD, it was possible to retrieve data from 19% of the sample. The applied techniques varied from manual device inspection to sophisticated equipment and tools. These findings indicate a significant barrier to the implementation of the circular economy and a limitation of smartphone reuse.
Therefore, in order to motivate users to donate or sell their old devices and to make the use of electronics more sustainable, data privacy on the second-hand smartphone market should be significantly improved. The presented research was carried out in the framework of the sustainablySMART project, which is part of the Horizon 2020 EU Framework Programme for Research and Innovation.
Keywords: android, circular economy, data privacy, second-hand phones
Procedia PDF Downloads 128
7649 Criticality Assessment Model for Water Pipelines Using Fuzzy Analytical Network Process
Abstract:
Water networks (WNs) are responsible for providing adequate amounts of safe, high-quality water to the public. Like other critical infrastructure systems, WNs are subject to deterioration, which increases the number of breaks and leaks and lowers water quality. In Canada, 35% of water assets require critical attention, and there is a significant gap between the needed and the implemented investments. Thus, the need for efficient rehabilitation programs is becoming more urgent given the paradigm of aging infrastructure and tight budgets. The first step towards developing such programs is to formulate a performance index that reflects the current condition of water assets along with their criticality. While numerous studies in the literature have focused on various aspects of condition assessment and reliability, limited efforts have investigated the criticality of such components. Critical water mains are those whose failure causes significant economic, environmental or social impacts on a community. Including criticality in the performance index will serve as a prioritizing tool for the optimal allocation of the available resources and budget. In this study, several social, economic, and environmental factors that dictate the criticality of water pipelines were elicited from the literature. Expert opinions were sought to provide pairwise comparisons of the importance of these factors. Subsequently, fuzzy logic along with the Analytical Network Process (ANP) was utilized to calculate the weights of the criteria factors. Multi-Attribute Utility Theory (MAUT) was then employed to integrate these weights with the attribute values of several pipelines in the Montreal WN. The result is a criticality index, from 0 to 1, that quantifies the severity of the consequence of failure of each pipeline.
A novel contribution of this approach is that it accounts for both the interdependency between criteria factors and the inherent uncertainties in calculating the criticality. The practical value of the current study is represented by the automated Excel-MATLAB tool, which can be used by utility managers and decision makers in planning future maintenance and rehabilitation activities where highly efficient use of material and time resources is required.
Keywords: water networks, criticality assessment, asset management, fuzzy analytical network process
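The final MAUT aggregation step can be sketched as a weighted sum of utility scores per pipeline. The factor names and weights below are hypothetical placeholders, not the ANP weights elicited from the experts in the study.

```python
# MAUT-style aggregation: ANP-derived weights (hypothetical values here)
# combine normalized utility scores of each criticality factor into a
# single 0-1 criticality index per pipeline.
weights = {"economic": 0.45, "social": 0.35, "environmental": 0.20}

def criticality_index(utilities):
    """utilities: factor -> utility score in [0, 1]; returns 0-1 index."""
    return sum(weights[f] * u for f, u in utilities.items())

# Two illustrative pipelines with assumed utility scores
pipe_a = {"economic": 0.9, "social": 0.7, "environmental": 0.4}
pipe_b = {"economic": 0.2, "social": 0.3, "environmental": 0.1}
criticality_index(pipe_a)   # 0.45*0.9 + 0.35*0.7 + 0.20*0.4 = 0.73
```

Ranking pipelines by this index is what turns the assessment into the prioritization tool described above.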
Procedia PDF Downloads 147
7648 An Infrared Inorganic Scintillating Detector Applied in Radiation Therapy
Authors: Sree Bash Chandra Debnath, Didier Tonneau, Carole Fauquet, Agnes Tallet, Julien Darreon
Abstract:
Purpose: Inorganic scintillating dosimetry is the most recent promising technique for solving several dosimetric issues and providing quality assurance in radiation therapy. Despite several advantages, the major issue with scintillating detectors is the Cerenkov effect, typically induced in the visible emission range. In this context, the purpose of this work is to evaluate the performance of a novel infrared inorganic scintillator detector (IR-ISD) in radiation therapy treatment, ensuring a Cerenkov-free signal and the best match between delivered and prescribed doses during treatment. Methods: A simple, small-scale infrared inorganic scintillating detector of 100 µm diameter with a sensitive scintillating volume of 2×10⁻⁶ mm³ was developed. A prototype dose verification system was introduced based on the PTIR1470/F material (provided by Phosphor Technology®) used in the proposed IR-ISD. The detector was tested on an Elekta LINAC system tuned at 6 MV/15 MV and on a brachytherapy source (Ir-192) used in patient treatment protocols. The associated dose rate was measured as a count rate (photons/s) using a highly sensitive photon counter (sensitivity ~20 ph/s). All measurements were performed in IBA™ water tank phantoms, following the international Technical Reports Series recommendations (TRS 381) for radiotherapy and the TG-43U1 recommendations for brachytherapy. The performance of the detector was tested through several dosimetric parameters: PDD, beam profiling, Cerenkov measurement, dose linearity, dose rate linearity, repeatability, and scintillator stability. Finally, a comparative study is also shown using a reference microdiamond dosimeter, Monte-Carlo (MC) simulation, and data from recent literature. Results: This study highlights the complete removal of the Cerenkov effect, especially for small-field radiation beam characterization.
The detector provides a fully linear response with dose over the 4 cGy to 800 cGy range, independently of the field size, from 5 × 5 cm² down to 0.5 × 0.5 cm². Excellent repeatability (0.2% variation from the average) with day-to-day reproducibility (0.3% variation) was observed. Measurements demonstrated that the ISD behaves linearly with dose rate (R² = 1) from 50 cGy/s to 1000 cGy/s. PDD profiles obtained in water present identical behavior, with a build-up maximum depth dose at 15 mm for the different small-field irradiations. Field profiles down to 0.5 × 0.5 cm² were characterized, and the field cross-profile presents a Gaussian-like shape. The standard deviation (1σ) of the scintillating signal remains within 0.02%, with a very low convolution effect thanks to the small sensitive volume. Finally, for brachytherapy, a comparison with MC simulations, taking energy dependency into account, shows agreement within 0.8% down to a 0.2 cm source-to-detector distance. Conclusion: The scintillating detector proposed in this study shows no Cerenkov radiation and performs efficiently across several radiation therapy measurement parameters. It is therefore anticipated that the IR-ISD system can proceed to direct clinical validation, such as dose verification and quality control in the treatment planning system (TPS).
Keywords: IR-scintillating detector, dose measurement, micro-scintillators, Cerenkov effect
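The dose-linearity check reported above can be sketched as a straight-line fit of count rate against delivered dose, with fit quality expressed as R². The readings below are synthetic stand-ins, not the paper's measurements.

```python
import numpy as np

# Synthetic calibration data: photon count rate vs delivered dose for an
# ideally linear detector (slope and offset are assumed values).
dose_cgy = np.array([4.0, 50.0, 100.0, 200.0, 400.0, 800.0])
counts_per_s = 120.0 * dose_cgy + 35.0

# Least-squares line and coefficient of determination
slope, intercept = np.polyfit(dose_cgy, counts_per_s, 1)
pred = slope * dose_cgy + intercept
ss_res = np.sum((counts_per_s - pred) ** 2)
ss_tot = np.sum((counts_per_s - counts_per_s.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot    # 1.0 for a perfectly linear response
```

In practice the fitted slope is the calibration factor converting count rate to dose rate, and any drop of R² below ~1 flags a nonlinearity to investigate.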
Procedia PDF Downloads 182
7647 Finite Sample Inferences for Weak Instrument Models
Authors: Gubhinder Kundhi, Paul Rilstone
Abstract:
It is well established that instrumental variable (IV) estimators in the presence of weak instruments can be poorly behaved and, in particular, quite biased in finite samples. Finite sample approximations to the distributions of these estimators are obtained using Edgeworth and saddlepoint expansions. Departures from normality of the distributions of these estimators are analyzed using higher-order analytical corrections in these expansions. In a Monte-Carlo experiment, the performance of these expansions is compared to the first-order approximation and to other methods commonly used in finite samples, such as the bootstrap.
Keywords: bootstrap, instrumental variable, Edgeworth expansions, saddlepoint expansions
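A small Monte-Carlo illustration of the finite-sample problem (the design choices below are assumptions, not the authors' experiment): with a weak first stage, the just-identified IV estimator is pulled toward the inconsistent OLS value.

```python
import numpy as np

# Simulate y = beta*x + u with x endogenous (corr(u, v) > 0) and a weak
# instrument z (small first-stage coefficient pi), then look at the
# median bias of the IV estimate across replications.
rng = np.random.default_rng(0)
beta, pi, n, reps = 1.0, 0.05, 100, 2000     # small pi => weak instrument
est = []
for _ in range(reps):
    z = rng.normal(size=n)                   # instrument
    u = rng.normal(size=n)                   # structural error
    v = 0.8 * u + 0.6 * rng.normal(size=n)   # first-stage error, var(v) = 1
    x = pi * z + v                           # endogenous regressor
    y = beta * x + u
    est.append((z @ y) / (z @ x))            # just-identified IV estimate
median_bias = float(np.median(est)) - beta   # positive: pulled toward OLS
```

The median is used instead of the mean because the just-identified IV estimator has no finite moments, so individual replications can be arbitrarily wild; this heavy-tailed, non-normal behavior is exactly what Edgeworth and saddlepoint corrections aim to capture.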
Procedia PDF Downloads 310
7646 Best-Performing Color Space for Land-Sea Segmentation Using Wavelet Transform Color-Texture Features and Fusion of over Segmentation
Authors: Seynabou Toure, Oumar Diop, Kidiyo Kpalma, Amadou S. Maiga
Abstract:
Color and texture are the two most determinant elements in the perception and recognition of objects in an image. For this reason, color and texture analysis find a wide field of application, for example in image classification and segmentation. However, the pioneering work in texture analysis was conducted on grayscale images, thus discarding color information. Many grey-level texture descriptors have been proposed and successfully used in numerous domains for image classification: face recognition, industrial inspection, food science, and medical imaging, among others. Taking color into account in the definition of these descriptors makes it possible to characterize images better. Color texture is thus the subject of recent work, and the analysis of color texture images is increasingly attracting interest in the scientific community. In optical remote sensing systems, sensors measure separately different parts of the electromagnetic spectrum: the visible parts, and even those that are invisible to the human eye. The amounts of light reflected by the earth in each spectral band are then transformed into grayscale images. The primary natural colors Red (R), Green (G) and Blue (B) are then assigned to mixtures of different spectral bands in order to produce RGB images. Thus, good color texture discrimination can be achieved using RGB under controlled illumination conditions. Some previous works have investigated the effect of using different color spaces for color texture classification. However, the selection of the best-performing color space for land-sea segmentation is an open question. Its resolution may bring considerable improvements in certain applications, such as coastline detection, where the detection result strongly depends on the performance of the land-sea segmentation. The aim of this paper is to present the results of a study conducted on different color spaces in order to identify the best-performing color space for land-sea segmentation.
To this end, an experimental analysis is carried out using five different color spaces (RGB, XYZ, Lab, HSV, YCbCr). For each color space, the Haar wavelet decomposition is used to extract different color texture features. These color texture features are then used for Fusion of Over Segmentation (FOOS) based classification; this allows segmentation of the land part from the sea part. By analyzing the results of this study, the HSV color space is found to give the best classification performance when using color and texture features, which is perfectly consistent with the results presented in the literature.
Keywords: classification, coastline, color, sea-land segmentation
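The feature-extraction step can be sketched per color channel with a one-level 2-D Haar decomposition, using sub-band energies as the color-texture descriptor. This is an illustrative reduction of the pipeline (the transform is written out by hand rather than via a wavelet library, and the normalization and feature set are assumptions):

```python
import numpy as np

def haar2d(ch):
    """One-level 2-D Haar decomposition of a single channel (H, W even)."""
    a = (ch[0::2, :] + ch[1::2, :]) / 2.0    # row averages
    d = (ch[0::2, :] - ch[1::2, :]) / 2.0    # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0     # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0     # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0     # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0     # diagonal detail
    return ll, lh, hl, hh

def color_texture_features(img):
    """img: H x W x 3 array in any color space; returns 12 sub-band energies."""
    return [float(np.mean(band ** 2))
            for c in range(img.shape[2])
            for band in haar2d(img[:, :, c])]

img = np.random.default_rng(1).random((8, 8, 3))   # stand-in image patch
feats = color_texture_features(img)                # 3 channels x 4 sub-bands
```

Computing the same 12-dimensional feature vector after converting the patch into each candidate color space (RGB, XYZ, Lab, HSV, YCbCr) is what makes the comparison across color spaces possible.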
Procedia PDF Downloads 247