Search results for: an optimal error bound

1167 Gradient Boosted Trees on Spark Platform for Supervised Learning in Health Care Big Data

Authors: Gayathri Nagarajan, L. D. Dhinesh Babu

Abstract:

Health care is one of the prominent industries that generate voluminous data, creating a need for machine learning techniques combined with big data solutions for efficient processing and prediction. Missing data, incomplete data, real-time streaming data, sensitive data, privacy and heterogeneity are a few of the common challenges to be addressed for efficient processing and mining of health care data. Compared with other applications, accuracy and fast processing are of higher importance for health care applications as they are directly related to human life. Though there are many machine learning techniques and big data solutions used for efficient processing and prediction in health care data, different techniques and frameworks prove effective for different applications, depending largely on the characteristics of the datasets. In this paper, we present a framework that uses the ensemble machine learning technique gradient boosted trees for data classification in health care big data. The framework is built on the Spark platform, which is fast in comparison with other traditional frameworks. Unlike other works that focus on a single technique, our work presents a comparison of six different machine learning techniques along with gradient boosted trees on datasets of different characteristics. Five benchmark health care datasets are considered for experimentation, and the results of the different machine learning techniques are discussed in comparison with gradient boosted trees. The metrics chosen for comparison are the misclassification error rate and the run time of the algorithms. The goals of this paper are to i) compare the performance of gradient boosted trees with other machine learning techniques on the Spark platform specifically for health care big data and ii) discuss the results from the experiments conducted on datasets of different characteristics, thereby drawing inferences and conclusions. The experimental results show that accuracy is largely dependent on the characteristics of the datasets for the other machine learning techniques, whereas gradient boosted trees yield reasonably stable accuracy without depending strongly on the dataset characteristics.
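
As a rough illustration of the kind of comparison the abstract describes, the sketch below trains a gradient boosted tree classifier and one baseline on the Spark platform and reports their misclassification error; the dataset path, column names and parameters are hypothetical stand-ins, not the study's benchmark data.

```python
# Minimal sketch: gradient boosted trees vs. one baseline on Spark (PySpark).
# Assumes a pre-processed CSV with numeric feature columns and a binary "label"
# column; the file path and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import GBTClassifier, LogisticRegression
from pyspark.ml.evaluation import MulticlassClassificationEvaluator

spark = SparkSession.builder.appName("healthcare-gbt").getOrCreate()
df = spark.read.csv("healthcare_dataset.csv", header=True, inferSchema=True)

feature_cols = [c for c in df.columns if c != "label"]
assembler = VectorAssembler(inputCols=feature_cols, outputCol="features")
data = assembler.transform(df).select("features", "label")
train, test = data.randomSplit([0.8, 0.2], seed=42)

evaluator = MulticlassClassificationEvaluator(labelCol="label", metricName="accuracy")
for name, clf in [("GBT", GBTClassifier(labelCol="label", maxIter=50)),
                  ("LogReg", LogisticRegression(labelCol="label"))]:
    model = clf.fit(train)
    acc = evaluator.evaluate(model.transform(test))
    print(f"{name}: misclassification error = {1 - acc:.3f}")
```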

Keywords: big data analytics, ensemble machine learning, gradient boosted trees, Spark platform

Procedia PDF Downloads 230
1166 Development of Methods for Plastic Injection Mold Weight Reduction

Authors: Bita Mohajernia, R. J. Urbanic

Abstract:

Mold making techniques have focused on meeting the customers’ functional and process requirements; however, today, molds are increasing in size and sophistication and are difficult to manufacture, transport, and set up due to their size and mass. Present mold weight saving techniques focus on pockets to reduce the mass of the mold, but the overall size remains large, which introduces costs related to stock material purchase, processing time for process planning, machining and validation, and excess waste material. Reducing the overall size of the mold is desirable for many reasons, but the functional requirements, tool life, and durability cannot be compromised in the process. It is proposed to use Finite Element Analysis simulation tools to model the forces and pressures in order to determine where material can be removed. The potential results of this project will reduce manufacturing costs. In this study, a lightweight structure is defined by an optimal distribution of material to carry external loads. The optimization objective of this research is to determine methods that provide the optimum layout for the mold structure. The topology optimization method is utilized to improve structural stiffness while decreasing the weight, using the OptiStruct software. The optimized CAD model is compared with the primary geometry of the mold from the NX software. Results of the optimization show an 8% weight reduction, while the actual performance of the optimized structure, validated by physical testing, is similar to that of the original structure.

Keywords: finite element analysis, plastic injection molding, topology optimization, weight reduction

Procedia PDF Downloads 280
1165 Reliability-Based Maintenance Management Methodology to Minimise Life Cycle Cost of Water Supply Networks

Authors: Mojtaba Mahmoodian, Joshua Phelan, Mehdi Shahparvari

Abstract:

With a large percentage of countries’ total infrastructure expenditure attributed to water network maintenance, it is essential to optimise maintenance strategies to rehabilitate or replace underground pipes before failure occurs. The aim of this paper is to provide water utility managers with a maintenance management approach for underground water pipes, subject to external loading and material corrosion, that gives the lowest life cycle cost over a predetermined time period. This reliability-based maintenance management methodology identifies the optimal years for intervention and the ideal number of maintenance activities to perform before replacement, and specifies feasible renewal options and intervention prioritisation to minimise the life cycle cost. The study was then extended to include feasible renewal methods by determining the structural condition index and the potential for soil loss, then obtaining the failure impact rating to assist in prioritising pipe replacement. A case study on the optimisation of maintenance plans for the Melbourne water pipe network is considered in this paper to evaluate the practicality of the proposed methodology. The results confirm that the suggested methodology can provide water utility managers with a reliable, systematic approach to determining optimum maintenance plans for pipe networks.
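
As a simplified illustration of the life cycle cost trade-off behind such a methodology, the sketch below compares candidate replacement intervals for a single pipe under an assumed deterioration and cost model; all figures are hypothetical and the paper's reliability analysis is not reproduced.

```python
# Minimal sketch: choose the replacement interval that minimizes discounted life
# cycle cost for one pipe, given assumed (hypothetical) maintenance costs,
# failure probabilities and a discount rate.
def life_cycle_cost(replace_year, horizon=50, rate=0.04,
                    maintenance_cost=2_000, replacement_cost=80_000,
                    failure_cost=150_000, base_failure_prob=0.002, growth=1.12):
    cost, age = 0.0, 0
    for year in range(1, horizon + 1):
        discount = 1.0 / (1.0 + rate) ** year
        if age == replace_year:
            cost += replacement_cost * discount
            age = 0
        else:
            cost += maintenance_cost * discount
        # failure probability grows with pipe age (assumed deterioration model)
        cost += failure_cost * base_failure_prob * growth ** age * discount
        age += 1
    return cost

best = min(range(5, 51, 5), key=life_cycle_cost)
print(f"Lowest life cycle cost with replacement every {best} years: "
      f"{life_cycle_cost(best):,.0f}")
```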

Keywords: water pipe networks, maintenance management, reliability analysis, optimum maintenance plan

Procedia PDF Downloads 142
1164 Technical and Economic Analysis of Smart Micro-Grid Renewable Energy Systems: An Applicable Case Study

Authors: M. A. Fouad, M. A. Badr, Z. S. Abd El-Rehim, Taher Halawa, Mahmoud Bayoumi, M. M. Ibrahim

Abstract:

Renewable energy-based micro-grids are presently attracting significant consideration. The smart grid system is currently considered a reliable solution for the expected deficiency in the power required from future power systems. The purpose of this study is to determine the optimal component sizes of a micro-grid, investigating technical and economic performance along with the environmental impacts. The micro-grid load consists of two small factories; both on-grid and off-grid modes are considered. The micro-grid includes photovoltaic cells, a back-up diesel generator, wind turbines, and a battery bank. The estimated peak load is 76 kW. The system is modeled and simulated with the MATLAB/Simulink tool to identify the technical issues based on renewable power generation units. To evaluate the system economy, two criteria are used: the net present cost and the cost of generated electricity. The most feasible system components for the selected application are obtained, based on the required parameters, using the HOMER simulation package. The results showed that a Wind/Photovoltaic (W/PV) on-grid system is more economical than a Wind/Photovoltaic/Diesel/Battery (W/PV/D/B) off-grid system, as the cost of generated electricity (COE) is 0.266 $/kWh and 0.316 $/kWh, respectively. When the cost of carbon dioxide emissions is considered, the off-grid system becomes competitive with the on-grid system, as the COE is found to be 0.256 $/kWh and 0.266 $/kWh for the on-grid and off-grid systems, respectively.
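
The two economic criteria mentioned, net present cost and cost of generated electricity, can be sketched as follows; the capital, operating and energy figures are hypothetical placeholders rather than the case-study values.

```python
# Minimal sketch: net present cost (NPC) and cost of energy (COE) for a micro-grid
# option, the two criteria reported by tools such as HOMER. All cost and energy
# figures below are hypothetical placeholders.
def crf(rate, years):
    """Capital recovery factor converting a present cost to an annualized cost."""
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

def evaluate(capital, annual_om, annual_fuel, annual_energy_kwh,
             rate=0.06, lifetime=25):
    annualized_capital = capital * crf(rate, lifetime)
    annual_cost = annualized_capital + annual_om + annual_fuel
    npc = annual_cost / crf(rate, lifetime)       # total net present cost
    coe = annual_cost / annual_energy_kwh         # $/kWh of served load
    return npc, coe

npc_on, coe_on = evaluate(capital=250_000, annual_om=6_000, annual_fuel=0,
                          annual_energy_kwh=190_000)
npc_off, coe_off = evaluate(capital=320_000, annual_om=9_000, annual_fuel=12_000,
                            annual_energy_kwh=190_000)
print(f"On-grid W/PV:      NPC={npc_on:,.0f}  COE={coe_on:.3f} $/kWh")
print(f"Off-grid W/PV/D/B: NPC={npc_off:,.0f}  COE={coe_off:.3f} $/kWh")
```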

Keywords: renewable energy sources, micro-grid system, modeling and simulation, on/off grid system, environmental impacts

Procedia PDF Downloads 256
1163 Solving the Economic Load Dispatch Problem Using Differential Evolution

Authors: Alaa Sheta

Abstract:

Economic Load Dispatch (ELD) is one of the vital optimization problems in power system planning. Solving the ELD problem means finding the best mixture of power unit outputs across all members of the power system network such that the total fuel cost is minimized while the operating limits remain satisfied across the entire dispatch horizon. Many optimization techniques have been proposed to solve this problem. A well-known one is Quadratic Programming (QP). QP is a very simple and fast method, but, like other gradient-based methods, it can become trapped in local minima and cannot handle complex nonlinear functions. A number of metaheuristic algorithms have also been used to solve this problem, such as Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). In this paper, another metaheuristic search algorithm, Differential Evolution (DE), is used to solve the ELD problem in power system planning. The practicality of the proposed DE-based algorithm is verified for three- and six-generator test cases. The results are compared to existing results based on QP, GAs and PSO. They show that differential evolution is superior in obtaining a combination of power outputs that fulfills the problem constraints and minimizes the total fuel cost. DE is found to converge quickly to the optimal power generation levels and to handle the nonlinearity of the ELD problem well. The proposed DE solution is able to minimize the cost of generated power, minimize the total power loss in transmission and maximize the reliability of the power provided to the customers.
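
A minimal sketch of differential evolution applied to a small quadratic-cost dispatch problem is given below; the cost coefficients, unit limits and demand are illustrative, not the paper's three- or six-generator test data.

```python
# Minimal sketch: differential evolution for a three-generator economic load
# dispatch with quadratic fuel costs and a power-balance penalty.
import numpy as np

a = np.array([0.008, 0.009, 0.007])      # $/MW^2
b = np.array([7.0, 6.3, 6.8])            # $/MW
c = np.array([200.0, 180.0, 140.0])      # $
p_min = np.array([10.0, 10.0, 10.0])
p_max = np.array([85.0, 80.0, 70.0])
demand = 150.0                            # MW

def total_cost(p):
    fuel = np.sum(a * p**2 + b * p + c)
    penalty = 1e4 * abs(np.sum(p) - demand)   # enforce the power balance constraint
    return fuel + penalty

rng = np.random.default_rng(0)
pop_size, dims, F, CR = 30, 3, 0.7, 0.9
pop = rng.uniform(p_min, p_max, size=(pop_size, dims))
fitness = np.array([total_cost(ind) for ind in pop])

for _ in range(500):
    for i in range(pop_size):
        r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
        mutant = np.clip(pop[r1] + F * (pop[r2] - pop[r3]), p_min, p_max)
        cross = rng.random(dims) < CR
        cross[rng.integers(dims)] = True
        trial = np.where(cross, mutant, pop[i])
        f_trial = total_cost(trial)
        if f_trial < fitness[i]:
            pop[i], fitness[i] = trial, f_trial

best = pop[np.argmin(fitness)]
print("Dispatch (MW):", np.round(best, 2), "cost ($/h):", round(total_cost(best), 2))
```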

Keywords: economic load dispatch, power systems, optimization, differential evolution

Procedia PDF Downloads 273
1162 Computer-Aided Ship Design Approach for Non-Uniform Rational Basis Spline Based Ship Hull Surface Geometry

Authors: Anu S. Nair, V. Anantha Subramanian

Abstract:

This paper presents a surface development and fairing technique combining the features of a modern computer-aided design tool, namely the Non-Uniform Rational Basis Spline (NURBS), with an algorithm to obtain a rapidly faired hull form. Some of the older series-based designs give a sectional area distribution, as in the Wageningen-Lap Series; others, such as FORMDATA, give more comprehensive offset data points. Nevertheless, this basic data still requires fairing to obtain an acceptable faired hull form. This method uses the sectional area distribution as an example input and arrives at the faired form. Characteristic section shapes define any general ship hull form in the entrance, parallel mid-body and run regions. The method defines a minimum number of control points at each section and, using the golden-section search or the bisection method, the section shape converges to the one with the prescribed sectional area with a minimized error in the area fit. The section shapes are then combined into the faired surface evolved by NURBS, typically in about 20 iterations. The advantage of the method is that it is fast and robust and evolves the faired hull form through minimal iterations. The curvature criterion check for the hull lines shows the evolution of the smooth faired surface. The method is applicable to hull forms from any parent series, and the evolved form can be evaluated for hydrodynamic performance as is done in more modern design practice. The method can handle complex shapes such as that of the bulbous bow. The surface patches developed fit together at their common boundaries with curvature continuity and a fairness check. The development is coded in MATLAB, and the example illustrates the development of the method. The most important advantage is the rapid, iterative fairing of the hull form in a short time.
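
The role of the bisection (or golden-section) step, driving a section shape towards a prescribed sectional area, can be sketched as follows; the power-form half-section used here is a hypothetical stand-in for the NURBS section definition.

```python
# Minimal sketch: bisection on a single shape parameter so that a parametric
# section's area converges to a prescribed sectional area. The power-form
# half-section is an assumed stand-in for the NURBS section.
def section_area(beam, draft, n):
    """Area of a half-section y(z) = (beam/2) * (z/draft)**n, integrated over the draft."""
    return (beam / 2.0) * draft / (n + 1.0)

def fit_exponent(target_area, beam=10.0, draft=6.0, tol=1e-6):
    lo, hi = 0.1, 10.0          # bracket for the shape exponent n
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        # the area decreases as n grows, so move the bracket accordingly
        if section_area(beam, draft, mid) > target_area:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n = fit_exponent(target_area=22.0)
print(f"exponent n = {n:.4f}, area error = {abs(section_area(10, 6, n) - 22.0):.2e}")
```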

Keywords: computer-aided design, methodical series, NURBS, ship design

Procedia PDF Downloads 159
1161 Tensile Retention Properties of Thermoplastic Starch Based Biocomposites Modified with Glutaraldehyde

Authors: Jen-Taut Yeh, Yuan-jing Hou, Li Cheng, Ya Zhou Wang, Zhi Yu Zhang

Abstract:

The tensile retention properties of bacterial cellulose (BC) reinforced thermoplastic starch (TPS) resins were successfully improved by reacting them with glutaraldehyde (GA) during their gelatinization processes. Small amounts of poly(lactic acid) (PLA) were blended with the GA-modified TPS resins to improve their processability. As evidenced by the newly developed ether (-C-O-C-) stretching bands in the FT-IR spectra of the TPS100BC0.02GAx series specimens, the hydroxyl groups of the TPS100BC0.02 resins successfully reacted with the aldehyde groups of the GA molecules during the modification process. The retention values of the tensile strengths (σf) of the TPS100BC0.02GAx and (TPS100BC0.02GAx)75PLA25 specimens improved significantly and reached a maximum as the GA content approached an optimal value of 0.5 parts per hundred parts of TPS resin (PHR). With the addition of 0.5 PHR GA in the biocomposite specimens, the initial tensile strength and elongation at break of the (TPS100BC0.02GA0.5)75PLA25 specimen improved to 24.6 MPa and 5.6%, respectively, which are slightly better than those of the (TPS100BC0.02)75PLA25 specimen. Moreover, the tensile strength retention of the (TPS100BC0.02GA0.5)75PLA25 specimen reached around 82.5% after the specimen was kept at 20 °C and 50% relative humidity for 56 days, which is significantly better than that of the (TPS100BC0.02)75PLA25 specimen. In order to understand these interesting tensile retention properties of the (TPS100BC0.02GAx)75PLA25 specimens, thermal analyses of the initial and aged TPS100BC0.02, TPS100BC0.02GAx and (TPS100BC0.02GAx)75PLA25 specimens were also performed in this investigation. Possible reasons accounting for the significantly improved tensile retention properties of the TPS100BC0.02GAx and (TPS100BC0.02GAx)75PLA25 specimens are proposed.

Keywords: biocomposite, strength retention, thermoplastic starch, tensile retention

Procedia PDF Downloads 358
1160 Load Balancing Technique for Energy Efficiency in Cloud Computing

Authors: Rani Danavath, V. B. Narsimha

Abstract:

Cloud computing is emerging as a new paradigm of large-scale distributed computing. Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This cloud model is composed of five essential characteristics, three service models, and four deployment models. Load balancing is one of the main challenges in cloud computing: it is required to distribute the dynamic workload across multiple nodes to ensure that no single node is overloaded. It helps in the optimal utilization of resources, enhancing the performance of the system. The goal of load balancing is to minimize resource consumption and the carbon emission rate, which is a direct need of cloud computing. This motivates new metrics, energy consumption and carbon emission, for energy-efficient load balancing techniques in cloud computing. Existing load balancing techniques mainly focus on reducing overhead and service response time and on improving performance, but none of them have considered energy consumption and carbon emission. Therefore, our proposed work moves towards energy efficiency. In this paper, we introduce an energy-efficient load balancing technique that can be used to improve the performance of cloud computing by balancing the workload across all the nodes in the cloud with minimum resource utilization, in turn reducing energy consumption and carbon emission to an extent that will help to achieve green computing.
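
As a rough sketch of an energy-aware balancing rule in this spirit, the snippet below assigns each task to the node whose estimated marginal power draw is lowest; the linear power model and the task sizes are assumptions, not part of the paper.

```python
# Minimal sketch: energy-aware load balancing. Each incoming task is placed on
# the node with the lowest estimated extra power draw for running it, given the
# node's current utilization. Power model and task sizes are hypothetical.
nodes = [
    {"name": "n1", "idle_w": 90.0, "peak_w": 210.0, "capacity": 100.0, "load": 0.0},
    {"name": "n2", "idle_w": 70.0, "peak_w": 180.0, "capacity": 80.0,  "load": 0.0},
    {"name": "n3", "idle_w": 60.0, "peak_w": 150.0, "capacity": 60.0,  "load": 0.0},
]

def marginal_power(node, task_size):
    """Extra power drawn if the task is placed on this node (linear power model)."""
    util_now = node["load"] / node["capacity"]
    util_new = (node["load"] + task_size) / node["capacity"]
    return (node["peak_w"] - node["idle_w"]) * (util_new - util_now)

tasks = [12.0, 30.0, 7.0, 22.0, 15.0, 9.0]
for size in tasks:
    candidates = [n for n in nodes if n["load"] + size <= n["capacity"]]
    target = min(candidates, key=lambda n: marginal_power(n, size))
    target["load"] += size
    print(f"task {size:>5.1f} -> {target['name']} (load {target['load']:.1f})")
```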

Keywords: cloud computing, distributed computing, energy efficiency, green computing, load balancing, energy consumption, carbon emission

Procedia PDF Downloads 435
1159 Effect of Acids with Different Chain Lengths Modified by Methane Sulfonic Acid and Temperature on the Properties of Thermoplastic Starch/Glycerin Blends

Authors: Chi-Yuan Huang, Mei-Chuan Kuo, Ching-Yi Hsiao

Abstract:

In this study, acids with various chain lengths (C6, C8, C10 and C12), modified by methane sulfonic acid (MSA) and temperature, were used to modify tapioca starch (TPS); glycerol (GA) was then added to the modified starch to prepare new blends. The mechanical, thermal and physical properties of the blends were studied. This investigation was divided into two parts. First, biodegradable materials, namely starch and glycerol, were combined with hexanedioic acid (HA), suberic acid (SBA), sebacic acid (SA) or decanedicarboxylic acid (DA) at different temperatures (90, 110 and 130 °C). The solution was then added to the modified starch to prepare the blends using a single-screw extruder. The FT-IR patterns indicated that the characteristic peak of C=O in ester was observed at 1730 cm-1, proving that the different chain length acids (C6, C8, C10 and C12) reacted with glycerol by esterification, and these products are used to plasticize the blends during extrusion. In addition, the blends showed improved hydrolysis resistance and thermal stability, and the water contact angle increased from 43.0° to 64.0°. Second, the HA (110 °C), SBA (110 °C), SA (110 °C), and DA (130 °C) blends were used in the study because they possessed good mechanical properties, water resistance and thermal stability. In addition, various contents (0, 0.005, 0.010, 0.020 g) of MSA were used to modify the mechanical properties of the blends. When MSA was added to the blends, the FT-IR patterns again showed the C=O ester peak at 1730 cm-1; for this reason, hydrophobic blends were produced. The water contact angle of the MSA blends increased from 55.0° to 71.0°. Although the elongation at break of the MSA blends decreased from the original 220% to 128%, the tensile stress increased from 2.5 MPa to 5.1 MPa. Therefore, the optimal composition of the blends was the DA blend (130 °C) with the addition of MSA (0.005 g).

Keywords: chain length acids, methane sulfonic acid, Tapioca starch (TPS), tensile stress

Procedia PDF Downloads 233
1158 Prediction of Fillet Weight and Fillet Yield from Body Measurements and Genetic Parameters in a Complete Diallel Cross of Three Nile Tilapia (Oreochromis niloticus) Strains

Authors: Kassaye Balkew Workagegn, Gunnar Klemetsdal, Hans Magnus Gjøen

Abstract:

In this study, the first objective was to investigate whether non-lethal, non-invasive methods utilizing body measurements could be used to efficiently predict fillet weight and fillet yield for a complete diallel cross of three Nile tilapia (Oreochromis niloticus) strains collected from three Ethiopian Rift Valley lakes, Lakes Ziway, Koka and Chamo. The second objective was to estimate the heritability of body weight and of the actual and predicted fillet traits, as well as the genetic correlations between these traits. A third goal was to estimate additive, reciprocal, and heterosis effects for body weight and the various fillet traits. As early sexual maturation was widespread in females, only 958 male fish from 81 full-sib families were used, both for the prediction of fillet traits and in the genetic analysis. The prediction equations from body measurements were established by forward regression analysis, choosing models with the lowest predicted residual error sum of squares (PRESS). The results revealed that body measurements on live Nile tilapia are well suited to predict fillet weight but not fillet yield (R² = 0.945 and 0.209, respectively), although both models were seemingly unbiased. The genetic analyses were carried out with bivariate, multibreed models. Body weight, fillet weight, and predicted fillet weight were all estimated with heritabilities ranging from 0.23 to 0.28 and with genetic correlations close to one. In contrast, fillet yield was only marginally heritable (0.05), while predicted fillet yield obtained a heritability of 0.19, being the resultant of two body weight variables known to have high heritability. The latter trait was estimated with genetic correlations to the body weight and fillet weight traits larger than 0.82. No significant differences among strains were found for their additive genetic, reciprocal, or heterosis effects, while total heterosis effects were estimated as positive and significant (P < 0.05). In conclusion, prediction of fillet weight based on body measurements is possible, but prediction of fillet yield is not.
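
The forward regression step that selects body measurements by minimizing PRESS can be sketched as below, computed here from the hat matrix on synthetic data standing in for the real measurements.

```python
# Minimal sketch: forward variable selection that, at each step, adds the body
# measurement giving the lowest PRESS (leave-one-out prediction error).
# The synthetic data are placeholders for the real measurements.
import numpy as np

def press(X, y):
    """PRESS statistic for an OLS fit with intercept, via the hat matrix."""
    Xd = np.column_stack([np.ones(len(y)), X])
    H = Xd @ np.linalg.pinv(Xd.T @ Xd) @ Xd.T
    resid = y - H @ y
    return np.sum((resid / (1.0 - np.diag(H))) ** 2)

rng = np.random.default_rng(1)
n, p = 200, 6
X = rng.normal(size=(n, p))                       # candidate body measurements
y = 2.0 * X[:, 0] + 1.2 * X[:, 2] + rng.normal(scale=0.5, size=n)  # fillet weight

selected, remaining, best_press = [], list(range(p)), np.inf
while remaining:
    scores = {j: press(X[:, selected + [j]], y) for j in remaining}
    j_best = min(scores, key=scores.get)
    if scores[j_best] >= best_press:              # stop when PRESS no longer improves
        break
    best_press = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)

print("selected predictors:", selected, "PRESS:", round(best_press, 2))
```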

Keywords: additive, fillet traits, genetic correlation, heritability, heterosis, prediction, reciprocal

Procedia PDF Downloads 164
1157 Design and Development of Power Sources for Plasma Actuators to Control Flow Separation

Authors: Himanshu J. Bahirat, Apoorva S. Janawlekar

Abstract:

Plasma actuators are essential for aerodynamic flow separation control due to their lack of mechanical parts, light weight, and high response frequency, and they have numerous applications in hypersonic or supersonic aircraft. These actuators work by forming a low-temperature plasma between a pair of parallel electrodes through the application of a high-voltage AC signal across the electrodes, after which air molecules surrounding the electrodes are ionized and accelerated through the electric field. High-frequency operation is required in dielectric barrier discharges to ensure plasma stability. To carry out flow separation control in a hypersonic flow, the design and construction of a power supply to generate dielectric barrier discharges is presented in this paper. A simplified circuit topology is constructed to emulate the dielectric barrier discharge and to study its frequency response. The power supply can generate high-voltage pulses up to 20 kV at repetition frequencies of 20-50 kHz with an input power of 500 W. The power supply has been designed to be short-circuit proof and can endure variable plasma load conditions. Its general outline is to charge a capacitor through a half-bridge converter and then discharge it through a step-up transformer at high frequency in order to generate high-voltage pulses. After simulating the circuit, the PCB design and, eventually, lab tests are carried out to study its effectiveness in controlling flow separation.

Keywords: aircraft propulsion, dielectric barrier discharge, flow separation control, power source

Procedia PDF Downloads 117
1156 An Optimization Model for the Arrangement of Assembly Areas Considering Time Dynamic Area Requirements

Authors: Michael Zenker, Henrik Prinzhorn, Christian Böning, Tom Strating

Abstract:

Large-scale products are often assembled according to the job-site principle, meaning that during assembly the product remains at a fixed position while the area requirements are constantly changing. On the one hand, the product itself grows with each assembly step; on the other, varying areas for storage, machines or workspaces are temporarily required. This is an important factor when arranging the products to be assembled within the factory. Currently, it is common to reserve a fixed area for each product to avoid overlaps or collisions with the other assemblies. Intended to be large enough to include the product and all adjacent areas, this reserved area corresponds to the superposition of the maximum extents of all required areas of the product. In this procedure, the reserved area is usually poorly utilized over the course of the entire assembly process, and a large part of it remains unused. If the available area is a limited resource, a systematic arrangement of the products that complies with the dynamic area requirements will lead to increased area utilization and productivity. This paper presents the results of a study on the arrangement of assembly objects under dynamic, competing area requirements. First, the problem situation is explained in detail, and existing research on associated topics is described and evaluated for possible adaptation. Then, a newly developed mathematical optimization model is introduced. This model allows an optimal arrangement of dynamic areas while considering logical and practical constraints. Finally, in order to quantify the potential of the developed method, test series results are presented, showing the possible increase in area utilization.

Keywords: dynamic area requirements, facility layout problem, optimization model, product assembly

Procedia PDF Downloads 220
1155 A BIM-Based Approach to Assess COVID-19 Risk Management Regarding Indoor Air Ventilation and Pedestrian Dynamics

Authors: T. Delval, C. Sauvage, Q. Jullien, R. Viano, T. Diallo, B. Collignan, G. Picinbono

Abstract:

In the context of the international spread of COVID-19, the Centre Scientifique et Technique du Bâtiment (CSTB) has led joint research with the French local authorities of the Hauts-de-Seine department to analyse the risk in school spaces according to their configuration, ventilation system and spatial segmentation strategy. This paper describes the main results of this joint research. A multidisciplinary team involving experts in indoor air quality/ventilation, pedestrian movement and IT domains was established to develop a COVID risk analysis tool based on a Building Information Model. The work started with a specific analysis of two pilot schools in order to provide the local administration with specifications to minimize the spread of the virus. Different recommendations were published to optimize/validate the use of ventilation systems and the strategy of student occupancy and student flow segmentation within the building. This COVID expertise has been digitized in order to run a quick risk analysis on the entire building that can be used by the public administration through an easy user interface implemented in free BIM management software. One of the most interesting results is the ability to dynamically compare different ventilation system scenarios and space occupation strategies inside the BIM model. This concurrent engineering approach provides users with the optimal solution according to both ventilation and pedestrian flow expertise.

Keywords: BIM, knowledge management, system expert, risk management, indoor ventilation, pedestrian movement, integrated design

Procedia PDF Downloads 97
1154 Audit of Post-Caesarean Section Analgesia

Authors: Rachel Ashwell, Sally Millett

Abstract:

Introduction: Adequate post-operative pain relief is a key priority in the delivery of caesarean sections. This improves patient experience, reduces morbidity and enables optimal mother-infant interaction. Recommendations outlined in the NICE guidelines for caesarean section (CS) include offering peri-operative intrathecal/epidural diamorphine and post-operative opioid analgesics; offering non-steroidal anti-inflammatory drugs (NSAIDs) unless contraindicated and taking hourly observations for 12 hours following intrathecal diamorphine. Method: This audit assessed the provision of post-CS analgesia in 29 women over a two-week period. Indicators used were the use of intrathecal/epidural opioids, use of post-operative opioids and NSAIDs, frequency of observations and patient satisfaction with pain management on post-operative days 1 and 2. Results: All women received intrathecal/epidural diamorphine, 97% were prescribed post-operative opioids and all were prescribed NSAIDs unless contraindicated. Hourly observations were not maintained for 12 hours following intrathecal diamorphine. 97% of women were satisfied with their pain management on post-operative day 1 whereas only 75% were satisfied on day 2. Discussion: This service meets the proposed standards for the provision of post-operative analgesia, achieving high levels of patient satisfaction 1 day after CS. However, patient satisfaction levels are significantly lower on post-operative day 2, which may be due to reduced frequency of observations. The lack of an official audit standard for patient satisfaction on postoperative day 2 may result in reduced incentive to prioritise pain management at this stage.

Keywords: Caesarean section, analgesia, postoperative care, patient satisfaction

Procedia PDF Downloads 376
1153 Dynamic Web-Based 2D Medical Image Visualization and Processing Software

Authors: Abdelhalim. N. Mohammed, Mohammed. Y. Esmail

Abstract:

Over the course of recent decades, medical imaging has been dominated by the use of costly film media for the review and archival of medical investigations; however, due to developments in network technologies and the common acceptance of the Digital Imaging and Communications in Medicine (DICOM) standard, another approach based on the World Wide Web has emerged. Web technologies have been used successfully in telemedicine applications, and here the combination of web technologies with DICOM is used to design a web-based, open-source DICOM viewer. The web server allows the query and retrieval of images, and the images are viewed and manipulated inside a web browser without the need to preinstall any software. The dynamic page for medical image visualization and processing is created using JavaScript and HTML5. The XAMPP Apache server is used to create a local web server for testing and deployment of the dynamic site. The web-based viewer is connected to multiple devices through a local area network (LAN) to distribute the images inside healthcare facilities. The system offers several advantages over ordinary picture archiving and communication systems (PACS): it is easy to install and maintain, it is platform-independent, it allows images to be displayed and manipulated efficiently, and it is user-friendly and easy to integrate with existing systems that already make use of web technologies. A wavelet-based image compression technique is applied, in which the 2-D discrete wavelet transform is used to decompose the image; the wavelet coefficients are thresholded and then transmitted with entropy encoding to decrease transmission time and storage cost. The compression performance was estimated using image quality metrics such as the mean square error (MSE), peak signal-to-noise ratio (PSNR) and compression ratio (CR), which reached 83.86% when the 'coif3' wavelet filter was used.
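
A minimal sketch of the compression stage, 2-D wavelet decomposition with the 'coif3' filter, hard thresholding and the MSE/PSNR metrics, is shown below; the random test image and the 90th-percentile threshold are assumptions standing in for a real DICOM slice and the paper's full encoding chain.

```python
# Minimal sketch: 2-D wavelet compression of a grayscale image with 'coif3',
# hard thresholding of the detail coefficients, and simple quality metrics.
import numpy as np
import pywt

image = np.random.default_rng(0).random((256, 256)) * 255.0   # stands in for a DICOM slice

coeffs = pywt.wavedec2(image, "coif3", level=3)
approx, details = coeffs[0], coeffs[1:]

all_detail = np.concatenate([np.abs(d).ravel() for level in details for d in level])
threshold = np.percentile(all_detail, 90)          # keep the largest 10% of detail coefficients
new_details = [tuple(pywt.threshold(d, threshold, mode="hard") for d in level)
               for level in details]

reconstructed = pywt.waverec2([approx] + new_details, "coif3")
reconstructed = reconstructed[:image.shape[0], :image.shape[1]]

mse = np.mean((image - reconstructed) ** 2)
psnr = 10.0 * np.log10(255.0 ** 2 / mse)
kept = sum(np.count_nonzero(d) for level in new_details for d in level) + approx.size
total = sum(d.size for level in new_details for d in level) + approx.size
discarded = 1.0 - kept / total                     # simple proxy for the compression ratio
print(f"MSE = {mse:.2f}  PSNR = {psnr:.2f} dB  coefficients discarded = {discarded:.2%}")
```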

Keywords: DICOM, discrete wavelet transform, PACS, HIS, LAN

Procedia PDF Downloads 151
1152 Neural Network and Support Vector Machine for Prediction of Foot Disorders Based on Foot Analysis

Authors: Monireh Ahmadi Bani, Adel Khorramrouz, Lalenoor Morvarid, Bagheri Mahtab

Abstract:

Background: Foot disorders are common musculoskeletal problems. Plantar pressure distribution measurement is one of the most important parts of foot disorder diagnosis for quantitative analysis. However, the association between plantar pressure and foot disorders is not clear. With growing datasets and machine learning methods, the relationship between foot disorders and plantar pressures can be detected. Significance of the study: The purpose of this study was to predict the probability of common foot disorders based on the peak plantar pressure distribution and the center of pressure during walking. Methodology: 2323 participants were assessed in a foot therapy clinic between 2015 and 2021. Foot disorders were diagnosed by an experienced physician, and the participants were then asked to walk on a force plate scanner. After data preprocessing, and because of differences in walking time and foot size, we normalized the samples based on time and foot size. Some of the force plate variables were selected as inputs to a deep neural network (DNN), and the probability of each foot disorder was estimated. In the next step, we used a support vector machine (SVM) and ran the dataset for each foot disorder (yes/no classification). We compared the DNN and SVM for foot disorder prediction based on plantar pressure distributions and the center of pressure. Findings: The results demonstrated that the accuracy of the deep learning architecture is sufficient for most clinical and research applications in the study population. In addition, the SVM approach has higher accuracy for prediction, enabling applications for foot disorder diagnosis. The detection accuracy was 71% for the deep learning algorithm and 78% for the SVM algorithm. Moreover, working with the peak plantar pressure distribution was more accurate than using the center of pressure dataset. Conclusion: Both algorithms, deep learning and SVM, will help therapists and patients to improve the data pool and enhance foot disorder prediction with less expense and error once some restrictions are properly removed.
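
A minimal sketch of the SVM branch of this comparison on synthetic, normalized force-plate features is shown below; the feature matrix and labels are placeholders, not the clinic's data.

```python
# Minimal sketch: binary (yes/no) classification of one foot disorder from
# normalized force-plate features with an SVM, reporting detection accuracy.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2323, 40))               # synthetic peak-pressure / CoP features
y = (X[:, :5].sum(axis=1) + rng.normal(scale=1.0, size=2323) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print("SVM detection accuracy:", round(accuracy_score(y_test, clf.predict(X_test)), 3))
```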

Keywords: deep neural network, foot disorder, plantar pressure, support vector machine

Procedia PDF Downloads 332
1151 Does Pakistan Stock Exchange Offer Diversification Benefits to Regional and International Investors: A Time-Frequency (Wavelets) Analysis

Authors: Syed Jawad Hussain Shahzad, Muhammad Zakaria, Mobeen Ur Rehman, Saniya Khaild

Abstract:

This study examines the co-movement between the Pakistani, Indian, S&P 500 and Nikkei 225 stock markets using weekly data from 1998 to 2013. The time-frequency relationship between the selected stock markets is examined using the continuous wavelet power spectrum, the cross-wavelet transform and cross (squared) wavelet coherency. The empirical evidence suggests strong dependence between the Pakistani and Indian stock markets. The co-movement of the Pakistani index with the U.S. and Japanese markets, the developed markets, varies over time and frequency, with the long-run relationship being dominant. The results of the cross-wavelet and wavelet coherence analyses indicate moderate covariance and correlation between the stock indices, and the markets are in phase (i.e. cyclical in nature) over varying durations. The Pakistani stock market lagged the Indian stock market during the entire period, at the 8-32 and then the 64-256 week scales. Similar findings are evident for the S&P 500 and Nikkei 225 indices; however, the relationship occurs during the later period of the study. All three wavelet indicators provide strong evidence of higher co-movement during the 2008-09 global financial crisis. The empirical analysis reveals strong evidence that the portfolio diversification benefits vary across frequencies and time. This analysis is unique and has several practical implications for regional and international investors when assigning optimal weights to different assets in portfolio formation.
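
As a simplified, Fourier-based relative of the wavelet coherence used in the paper, the sketch below computes the magnitude-squared coherence between two synthetic weekly return series and averages it over the 8-32 and 64-256 week bands; it is not the paper's wavelet analysis.

```python
# Minimal sketch: frequency-by-frequency co-movement between two weekly return
# series via magnitude-squared coherence. The return series here are synthetic.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
n = 832                                     # roughly 16 years of weekly observations
common = rng.normal(size=n)
returns_pk = 0.7 * common + 0.7 * rng.normal(size=n)   # stand-in "Pakistan" returns
returns_in = 0.7 * common + 0.7 * rng.normal(size=n)   # stand-in "India" returns

f, cxy = coherence(returns_pk, returns_in, fs=1.0, nperseg=256)  # fs = 1 per week
periods = 1.0 / f[1:]                       # convert frequency to period in weeks
for lo, hi in [(8, 32), (64, 256)]:
    band = (periods >= lo) & (periods <= hi)
    print(f"mean coherence at the {lo}-{hi} week scale: {cxy[1:][band].mean():.2f}")
```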

Keywords: co-movement, Pakistan stock exchange, S&P 500, Nikkei 225, wavelet analysis

Procedia PDF Downloads 350
1150 Task Scheduling and Resource Allocation in the Cloud Based on the AHP Method

Authors: Zahra Ahmadi, Fazlollah Adibnia

Abstract:

The scheduling of tasks and the optimal allocation of resources in the cloud are driven by the dynamic nature of tasks and the heterogeneity of resources. Applications based on scientific workflows are among the most widely used applications in this field and are characterized by high processing power and storage capacity. In order to increase their efficiency, it is necessary to schedule the tasks properly and select the best virtual machine in the cloud. The goals of the system are effective factors in task scheduling and resource selection, which depend on various criteria such as time, cost, current workload and processing power. Multi-criteria decision-making methods are a good choice in this field. In this research, a new method of task scheduling and resource allocation in a heterogeneous environment based on a modified AHP algorithm is proposed. In this method, the scheduling of input tasks is based on two criteria: execution time and size. Resource allocation combines the AHP algorithm with a first-come, first-served policy. Resource prioritization uses the criteria of main memory size, processor speed and bandwidth. To modify the AHP algorithm, the Linear Max-Min and Linear Max normalization methods are considered, as they are the best choice for the mentioned algorithm and have a great impact on the ranking. The simulation results show a decrease in the average response time, turnaround time and execution time of input tasks in the proposed method compared to similar (basic) methods.
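
The AHP prioritization of the three resource criteria can be sketched as below, using a hypothetical pairwise comparison matrix, column normalization and a consistency check.

```python
# Minimal sketch: AHP priority weights for the three resource criteria (memory
# size, processor speed, bandwidth) from a pairwise comparison matrix, plus the
# consistency ratio check. The pairwise judgments are hypothetical.
import numpy as np

criteria = ["memory", "cpu_speed", "bandwidth"]
# A[i, j] = how much more important criterion i is than criterion j (Saaty scale)
A = np.array([[1.0, 1/3, 2.0],
              [3.0, 1.0, 4.0],
              [0.5, 1/4, 1.0]])

normalized = A / A.sum(axis=0)            # linear normalization of each column
weights = normalized.mean(axis=1)         # priority vector (approximate eigenvector)

lambda_max = (A @ weights / weights).mean()
ci = (lambda_max - len(A)) / (len(A) - 1) # consistency index
cr = ci / 0.58                            # random index for n = 3 is 0.58
for name, w in zip(criteria, weights):
    print(f"{name:10s} weight = {w:.3f}")
print(f"consistency ratio = {cr:.3f} (should be < 0.10)")
```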

Keywords: hierarchical analytical process, work prioritization, normalization, heterogeneous resource allocation, scientific workflow

Procedia PDF Downloads 135
1149 Sustainability of Photovoltaic Recycling Planning

Authors: Jun-Ki Choi

Abstract:

The use of valuable resources and the potential for waste generation at the end of the life cycle of photovoltaic (PV) technologies necessitate proactive planning for a PV recycling infrastructure. To ensure the sustainability of PV at large scales of deployment, it is vital to develop and institute low-cost recycling technologies and infrastructure for the emerging PV industry in parallel with the rapid commercialization of these new technologies. There are various issues involved in the economics of PV recycling, and this research examines them at the macro and micro levels, developing a holistic interpretation of the economic viability of PV recycling systems. This study developed mathematical models to analyze the profitability of recycling technologies and to guide tactical decisions for allocating the optimal locations of PV take-back centers (PVTBC), necessary for the collection of end-of-life products. The economic decision is usually based on the marginal capital cost of each PVTBC, the cost of reverse logistics, the distance traveled, and the amount of PV waste collected from various locations. The results illustrated that the reverse logistics costs comprise a major portion of the cost of a PVTBC; PV recycling centers can be constructed at the optimally selected locations to minimize the total reverse logistics cost of transporting PV waste from the various collection facilities to the recycling center. At the micro (process) level, automated recycling processes should be developed to handle the growing amount of PV waste economically. The market prices of the reclaimed materials are important factors in deciding the profitability of the recycling process, which illustrates the importance of recovering the glass and expensive metals from PV modules.
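
A minimal sketch of the take-back centre location decision, choosing the candidate site with the lowest total reverse-logistics cost, is given below; the coordinates, waste quantities and cost rate are hypothetical.

```python
# Minimal sketch: pick the take-back centre (PVTBC) location that minimizes the
# total reverse-logistics cost of hauling PV waste from collection points.
import math

collection_points = [            # (x km, y km, tonnes of PV waste per year)
    (2.0, 14.0, 35.0),
    (18.0, 3.0, 60.0),
    (9.0, 9.0, 20.0),
    (25.0, 20.0, 45.0),
]
candidate_sites = {"site_A": (10.0, 10.0), "site_B": (20.0, 12.0), "site_C": (5.0, 5.0)}
cost_per_tonne_km = 0.35         # assumed $ per tonne-km of reverse logistics

def transport_cost(site):
    sx, sy = candidate_sites[site]
    return sum(w * cost_per_tonne_km * math.hypot(x - sx, y - sy)
               for x, y, w in collection_points)

best = min(candidate_sites, key=transport_cost)
for site in candidate_sites:
    print(f"{site}: annual reverse-logistics cost = ${transport_cost(site):,.0f}")
print("selected PVTBC location:", best)
```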

Keywords: photovoltaic, recycling, mathematical models, sustainability

Procedia PDF Downloads 243
1148 Estimation of Fragility Curves Using Proposed Ground Motion Selection and Scaling Procedure

Authors: Esra Zengin, Sinan Akkar

Abstract:

Reliable and accurate prediction of nonlinear structural response requires the specification of appropriate earthquake ground motions to be used in nonlinear time history analysis. Current research has mainly focused on the selection and manipulation of real earthquake records, which can be seen as the most critical step in the performance-based seismic design and assessment of structures. Utilizing amplitude-scaled ground motions that match target spectra is a commonly used technique for the estimation of nonlinear structural response. Representative ground motion ensembles are selected to match a target spectrum, such as a scenario-based spectrum derived from ground motion prediction equations, a Uniform Hazard Spectrum (UHS), a Conditional Mean Spectrum (CMS) or a Conditional Spectrum (CS). Different sets of criteria exist among the developed methodologies to select and scale ground motions with the objective of obtaining a robust estimation of structural performance. This study presents a ground motion selection and scaling procedure that considers the spectral variability at the target demand together with the level of ground motion dispersion. The proposed methodology provides a set of ground motions whose response spectra match the target median and corresponding variance within a specified period interval. An efficient and simple algorithm is used to assemble the ground motion sets. The scaling stage is based on the minimization of the error between the scaled median and the target spectra, while the dispersion of the earthquake shaking is preserved along the period interval. The impact of spectral variability on the nonlinear response distribution is investigated at the level of inelastic single-degree-of-freedom systems. In order to see the effect of different selection and scaling methodologies on fragility curve estimates, the results are compared with those obtained by a CMS-based scaling methodology. The variability in fragility curves due to the consideration of dispersion in the ground motion selection process is also examined.

Keywords: ground motion selection, scaling, uncertainty, fragility curve

Procedia PDF Downloads 578
1147 Computational Fluid Dynamics Simulations of Air Pollutant Dispersion: Validation of Fire Dynamics Simulator Against the CUTE Experiments of the COST ES1006 Action

Authors: Virginie Hergault, Siham Chebbah, Bertrand Frere

Abstract:

Following in-house objectives, the Central Laboratory of the Paris Police Prefecture conducted a general review of the models and Computational Fluid Dynamics (CFD) codes used to simulate pollutant dispersion in the atmosphere. Starting from that review and considering the main features of Large Eddy Simulation, the Central Laboratory of the Paris Police Prefecture (LCPP) postulated that the Fire Dynamics Simulator (FDS) model, from the National Institute of Standards and Technology (NIST), should be well suited for air pollutant dispersion modeling. This paper focuses on the implementation and the evaluation of FDS in the frame of the European COST ES1006 Action, which aimed at quantifying the performance of modeling approaches. In this paper, the CUTE dataset, carried out in the city of Hamburg, and its mock-up have been used. We have performed a comparison of FDS results with wind tunnel measurements from the CUTE trials on the one hand, and, on the other, with the results of the models involved in the COST Action. The most time-consuming part of creating input data for the simulations is the transfer of the obstacle geometry information to the format required by FDS. Thus, we have developed Python code to automatically convert building and topographic data to the FDS input file. In order to evaluate the predictions of FDS against observations, statistical performance measures have been used. These metrics include the fractional bias (FB), the normalized mean square error (NMSE) and the fraction of predictions within a factor of two of the observations (FAC2). As with the CFD models tested in the COST Action, the FDS results demonstrate good agreement with the measured concentrations. Furthermore, the metrics assessment indicates that FB and NMSE fall within the acceptable tolerances.
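
The three validation metrics named here can be computed as in the sketch below; the observed and predicted concentration arrays are placeholders, not the CUTE measurements.

```python
# Minimal sketch: the three validation metrics used to compare FDS predictions
# with the wind-tunnel concentrations. The arrays below are placeholders.
import numpy as np

obs = np.array([1.2, 0.8, 2.5, 0.4, 1.9, 3.1])    # observed concentrations
pred = np.array([1.0, 1.1, 2.0, 0.5, 2.3, 2.6])   # model-predicted concentrations

fb = 2.0 * np.mean(obs - pred) / (np.mean(obs) + np.mean(pred))      # fractional bias
nmse = np.mean((obs - pred) ** 2) / (np.mean(obs) * np.mean(pred))   # normalized MSE
fac2 = np.mean((pred >= 0.5 * obs) & (pred <= 2.0 * obs))            # fraction within a factor of 2

print(f"FB = {fb:.3f}   NMSE = {nmse:.3f}   FAC2 = {fac2:.2f}")
```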

Keywords: numerical simulations, atmospheric dispersion, COST ES1006 Action, CFD model, CUTE experiments, wind tunnel data, numerical results

Procedia PDF Downloads 124
1146 Study on the Process of Detumbling Space Target by Laser

Authors: Zhang Pinliang, Chen Chuan, Song Guangming, Wu Qiang, Gong Zizheng, Li Ming

Abstract:

The active removal of space debris and asteroid defense are important issues in human space activities. Both require a detumbling process, since almost all space debris and asteroids are in a rotating state, and it is hard and dangerous to capture or remove a target with a relatively high tumbling rate. It is therefore necessary to find a method to reduce the angular rate first. Laser ablation is an efficient way to tackle this detumbling problem, as it is a contactless technique that can work at a safe distance. In existing research, a laser rotational control strategy based on the estimation of the instantaneous angular velocity of the target has been presented. However, the published calculation of the control torque produced by the laser, which is very important in the detumbling operation, is not accurate enough, because the method used is only suitable for planar or regularly shaped targets and does not consider the influence of irregular shapes and the size of the laser spot. In this paper, based on a triangulated reconstruction of the target surface, we propose a new method to calculate the impulse on an irregularly shaped target under both covered irradiation and spot irradiation by the laser, and we verify its accuracy by theoretical formula calculations and impulse measurement experiments. We then use it to study the process of detumbling a cylinder and an asteroid by laser. The results show that the new method is universally applicable and has high precision; it would take more than 13.9 hours to stop the rotation of Bennu with 1E+05 kJ of laser pulse energy; and the speed of the detumbling process depends on the distance between the spot and the centroid of the target, for which an optimal value can be found in each particular case.

Keywords: detumbling, laser ablation drive, space target, space debris removal

Procedia PDF Downloads 72
1145 The Effect of Micro/Nano Structure of Poly (ε-caprolactone) (PCL) Film Using a Two-Step Process (Casting/Plasma) on Cellular Responses

Authors: JaeYoon Lee, Gi-Hoon Yang, JongHan Ha, MyungGu Yeo, SeungHyun Ahn, Hyeongjin Lee, HoJun Jeon, YongBok Kim, Minseong Kim, GeunHyung Kim

Abstract:

One of the important factors in tissue engineering is the design of optimal biomedical scaffolds, which can be governed by topographical surface characteristics such as size, shape, and direction. Of these properties, we focused on the effects of a nano- to micro-sized hierarchical surface. To fabricate the hierarchical surface structure on a poly(ε-caprolactone) (PCL) film, we employed a micro-casting technique, pressing the mold, and a nano-etching technique using a modified plasma process. The micro-sized topography of the PCL film was controlled by the sizes of the microstructures on a lotus leaf, while the nano-sized topography and the hydrophilicity of the PCL film were controlled by the modified plasma process. After the plasma treatment, the surface of the PCL film changed significantly from hydrophobic to hydrophilic, and the nano-sized structure was well developed. The surface properties of the modified PCL film were investigated in terms of initial cell morphology, attachment, and proliferation using osteoblast-like cells (MG63). In particular, initial cell attachment, proliferation and osteogenic differentiation on the hierarchical structure were enhanced dramatically compared to those on the smooth surface. We believe that these results arise from a synergistic effect between the hierarchical structure and the reactive functional groups introduced by the plasma process. Based on the results presented here, we propose a new biomimetic surface model that may be useful for effectively regenerating hard tissues.

Keywords: hierarchical surface, lotus leaf, nano-etching, plasma treatment

Procedia PDF Downloads 365
1144 An Intelligent Transportation System for Safety and Integrated Management of Railway Crossings

Authors: M. Magrini, D. Moroni, G. Palazzese, G. Pieri, D. Azzarelli, A. Spada, L. Fanucci, O. Salvetti

Abstract:

Railway crossings are complex entities whose optimal management cannot be achieved without the help of an intelligent transportation system integrating information on both train and vehicular flows. In this paper, we propose an integrated system named SIMPLE (Railway Safety and Infrastructure for Mobility applied at level crossings) that, while providing unparalleled safety at railway level crossings, collects data on rail and road traffic and provides value-added services to citizens and commuters. Such services include, for example, alerts to drivers via variable message signs and suggestions for alternative routes, towards a more sustainable, eco-friendly and efficient urban mobility. To achieve these goals, SIMPLE is organized as a System of Systems (SoS), with a modular architecture whose components range from specially designed radar sensors for obstacle detection to smart, ETSI M2M-compliant camera networks for urban traffic monitoring. Computational units for performing forecasts according to adaptive models of train and vehicular traffic are also included. The proposed system has been tested and validated during an extensive trial held in the mid-sized Italian town of Montecatini, a paradigmatic case where the rail network is inextricably linked with the fabric of the city. Results of the tests are reported and discussed.

Keywords: Intelligent Transportation Systems (ITS), railway, railroad crossing, smart camera networks, radar obstacle detection, real-time traffic optimization, IoT, ETSI M2M, transport safety

Procedia PDF Downloads 488
1143 Designing Web Application to Simulate Agricultural Management for Smart Farmer: Land Development Department’s Integrated Management Farm

Authors: Panasbodee Thachaopas, Duangdorm Gamnerdsap, Waraporn Inthip, Arissara Pungpa

Abstract:

LDD’s IM Farm, or the Land Development Department’s Integrated Management Farm, is an agricultural simulation application developed by the Land Development Department that relies on actual data to simulate the growing of 12 cash crops: rice, corn, cassava, sugarcane, soybean, rubber tree, oil palm, pineapple, longan, rambutan, durian, and mangosteen. In the simulation game, players can select preferred cropping areas from a base map or an orthophoto map at a scale of 1:4,000. Farm management is simulated from field preparation to harvesting. The system uses soil group and present land use databases to show the player which crops are suitable to grow in each soil group, and it integrates LDD’s data with data from other agencies, covering soil types, soil properties, soil problems, climate, cultivation costs, fertilizer use, fertilizer prices, socio-economic data, plant diseases, weeds, pests, the interest rate for loans from the Bank for Agriculture and Agricultural Cooperatives (BAAC), labor costs and market prices. These data affect the cost and yield of each crop differently. After completing a simulation, the player knows the yield, the income and expenses, and the profit or loss, and can change to other crops that are more suitable for the soil group to obtain optimal yields and profits.

Keywords: agricultural simulation, smart farmer, web application, factors of agricultural production

Procedia PDF Downloads 191
1142 The Prominent Role of hsa-miR-1204 and hsa-miR-639 in Tamoxifen's Molecular Mechanisms on the EMT Phenomenon in Breast Cancer Patients

Authors: Mahsa Taghavi

Abstract:

Tamoxifen is a regularly prescribed medication in the treatment of breast cancer. In this study, the effect of tamoxifen on the EMT pathways of breast cancer patients was examined, to see whether it has any effect on the cancer cells' resistance to tamoxifen and to look for specific miRNAs associated with EMT. We used continuous and integrated bioinformatics analysis to choose the optimal GEO datasets. Once we had sorted the gene expression profiles, we looked at the signaling mechanisms, the gene ontology, and the protein interactions of each gene. We then used the GEPIA database to confirm the candidate genes, after which critical miRNAs related to the candidate genes were investigated. The gene expression profiles were categorized into two distinct groups. The first group was examined using the expression profile of genes that were downregulated in the EMT pathway; the second group represented the polar opposite of the first. A total of 253 genes from the first group and 302 genes from the second group were found to be common. Several genes in the first group were linked to cell death, focal adhesion, and cellular aging, whereas genes in the second group were associated with distinct cell cycle stages. Finally, proteins such as MYLK, SOCS3, and STAT5B from the first group and BIRC5, PLK1, and RAPGAP1 from the second group were selected as potential candidates linked to tamoxifen's influence on the EMT pathway. hsa-miR-1204 and hsa-miR-639 have a very close relationship with the candidate genes according to node degree and the betweenness index. With this, the action of tamoxifen on the EMT pathway is better understood. It is important to learn more about how tamoxifen's target genes and proteins work so that we can better understand the drug.
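
A minimal sketch of ranking miRNAs by node degree and betweenness centrality in a miRNA-gene network is given below; the edge list is a small hypothetical example, not the study's network.

```python
# Minimal sketch: rank miRNAs connected to candidate genes by node degree and
# betweenness centrality in a miRNA-gene interaction network (hypothetical edges).
import networkx as nx

edges = [
    ("hsa-miR-1204", "BIRC5"), ("hsa-miR-1204", "PLK1"), ("hsa-miR-1204", "STAT5B"),
    ("hsa-miR-639", "MYLK"), ("hsa-miR-639", "SOCS3"), ("hsa-miR-639", "BIRC5"),
    ("hsa-miR-100", "SOCS3"),
    ("BIRC5", "PLK1"), ("MYLK", "SOCS3"), ("STAT5B", "SOCS3"),
]
G = nx.Graph(edges)

degree = dict(G.degree())
betweenness = nx.betweenness_centrality(G)

mirnas = [n for n in G if n.startswith("hsa-miR")]
for m in sorted(mirnas, key=lambda n: (degree[n], betweenness[n]), reverse=True):
    print(f"{m:14s} degree={degree[m]}  betweenness={betweenness[m]:.3f}")
```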

Keywords: tamoxifen, breast cancer, bioinformatics analysis, EMT, miRNAs

Procedia PDF Downloads 121
1141 Expert System for Road Bridge Constructions

Authors: Michael Dimmer, Holger Flederer

Abstract:

The basis of realizing a construction project is a technically flawless concept which satisfies conditions regarding the environment and costs, as well as structural requirements. The presented software system actively supports civil engineers during the development of optimal designs by giving advice regarding durability, life-cycle costs, sustainability and much more. A major part of the surrounding conditions of a design process is gathered and assimilated subconsciously by experienced engineers. It is a question of eligible building techniques and their practicability, considering the resulting costs. Planning engineers have acquired much of this experience during their professional lives and use it for their daily work. Occasionally, the planning engineer should step back from this experience in order to be open to new and better solutions that also meet the functional demands. The developed expert system gives planning engineers recommendations for preferred design options for new constructions as well as for existing bridge constructions. It is possible to analyze construction elements and techniques regarding sustainability and life-cycle costs. In this way, the software provides recommendations for future constructions. Furthermore, there is an option to assess existing road bridges, especially for heavy-duty transport. This includes a route planning tool to get quick and reliable information as to whether the bridge support structures of a transport route are dimensioned sufficiently for a certain heavy-duty transport. The use of this expert system in bridge planning companies and building authorities will save costs massively for new and existing bridge constructions. This is achieved by consistently considering parameters like life-cycle costs and sustainability in its planning recommendations.

Keywords: expert system, planning process, road bridges, software system

Procedia PDF Downloads 268
1140 Cryptocurrency as a Payment Method in the Tourism Industry: A Comparison of Volatility, Correlation and Portfolio Performance

Authors: Shu-Han Hsu, Jiho Yoon, Chwen Sheu

Abstract:

With the rapid growth of blockchain technology and cryptocurrency, various industries, including tourism, have added cryptocurrency as a payment method for their transactions. More and more tourism companies accept payments in digital currency for flights, hotel reservations, transportation, and more. For travellers and tourists, using cryptocurrency as a payment method has become a way to circumvent costs and avoid risks. Understanding volatility dynamics and the interdependencies between standard currencies and cryptocurrencies is important for appropriate financial risk management and assists policy-makers and investors in making more informed decisions. The purpose of this paper is to understand and explain the risk spillover effects between six major cryptocurrencies and the ten most traded standard currencies. We use daily closing prices of cryptocurrencies and currency exchange rates from 7 August 2015 to 10 December 2019, giving 1,133 observations. The diagonal BEKK model was used to analyze the co-volatility spillover effects between cryptocurrency returns and exchange rate returns, which measure how shocks to returns in different assets affect each other's subsequent volatility. The empirical results show that there are co-volatility spillover effects between the cryptocurrency returns and the GBP/USD, CNY/USD and MXN/USD exchange rate returns. Therefore, currencies (the British Pound, Chinese Yuan and Mexican Peso) and cryptocurrencies (Bitcoin, Ethereum, Ripple, Tether, Litecoin and Stellar) are suitable for constructing a financial portfolio from an optimal risk management perspective and also for dynamic hedging purposes.
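
The full diagonal BEKK estimation needs a dedicated multivariate GARCH routine and is not reproduced here; as a much simpler look at time-varying co-movement, the sketch below computes daily log returns and a rolling covariance and correlation between a synthetic cryptocurrency price and a synthetic exchange rate.

```python
# Minimal sketch: daily log returns and rolling return covariance/correlation
# between a cryptocurrency and an exchange rate. The price series are synthetic
# and the computation is a simplified stand-in for the diagonal BEKK analysis.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1133
dates = pd.bdate_range("2015-08-07", periods=n)
btc_usd = 400 * np.exp(np.cumsum(rng.normal(0.001, 0.04, n)))   # synthetic BTC price
gbp_usd = 1.50 * np.exp(np.cumsum(rng.normal(0.0, 0.006, n)))   # synthetic GBP/USD rate

prices = pd.DataFrame({"BTC": btc_usd, "GBP": gbp_usd}, index=dates)
returns = np.log(prices).diff().dropna()

rolling_cov = returns["BTC"].rolling(60).cov(returns["GBP"])    # 60-day window
rolling_corr = returns["BTC"].rolling(60).corr(returns["GBP"])
print(rolling_corr.describe())
```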

Keywords: blockchain, co-volatility effects, cryptocurrencies, diagonal BEKK model, exchange rates, risk spillovers

Procedia PDF Downloads 132
1139 Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score

Authors: Jianfeng Hu

Abstract:

Personal authentication based on electroencephalography (EEG) signals is one of the important fields of biometric technology. More and more researchers have used EEG signals as a data source for biometrics; however, biometrics based on EEG signals still have some disadvantages. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE) and spectral entropy (PE), were deployed as the feature set. In a silhouette calculation, the distance from each data point in a cluster to every other point within the same cluster and to all data points in the closest cluster is determined. Thus, silhouettes provide a measure of how well a data point was classified when it was assigned to a cluster, and of the separation between clusters. This renders silhouettes potentially well suited for assessing cluster quality in personal authentication methods. In this study, silhouette scores were used to assess the cluster quality of the k-means clustering algorithm and to compare the performance on each EEG dataset. The main goals of this study are: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, and (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. The results showed that: (1) it is possible to use fewer electrodes (3-4) for personal authentication; (2) there were differences between electrodes for personal authentication (p<0.01); and (3) there was no significant difference in authentication performance among the feature sets (except for PE). Conclusion: the combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
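
A minimal sketch of the k-means plus silhouette-score evaluation on synthetic entropy features is shown below; the feature matrix stands in for the (SE, FE, AE, PE) features extracted from the 22 subjects.

```python
# Minimal sketch: k-means clustering of EEG entropy features with the silhouette
# score used to judge cluster quality. The feature matrix is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# 22 subjects x 20 trials, 4 entropy features each; subjects differ by an offset
features = np.vstack([rng.normal(loc=i * 0.8, scale=0.3, size=(20, 4))
                      for i in range(22)])

kmeans = KMeans(n_clusters=22, n_init=10, random_state=0).fit(features)
score = silhouette_score(features, kmeans.labels_)
print(f"silhouette score for 22 clusters: {score:.3f}")
```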

Keywords: personal authentication, K-mean clustering, electroencephalogram, EEG, silhouettes

Procedia PDF Downloads 269
1138 Optimization of Smart Beta Allocation by Momentum Exposure

Authors: J. B. Frisch, D. Evandiloff, P. Martin, N. Ouizille, F. Pires

Abstract:

Smart Beta strategies intend to be an asset management revolution with reference to classical cap-weighted indices. Indeed, these strategies allow better control of portfolio risk factors and an optimized asset allocation by taking into account specific risks or the wish to generate alpha by outperforming indices called 'Beta'. Among the many strategies used independently, this paper focuses on four of them: the Minimum Variance Portfolio, the Equal Risk Contribution Portfolio, the Maximum Diversification Portfolio, and the Equal-Weighted Portfolio. Their efficiency has been proven under constraints like momentum or market phenomena, suggesting a reconsideration of cap-weighting. To further increase strategy return efficiency, it is proposed here to compare their strengths and weaknesses inside time intervals corresponding to specific, identifiable market phases, in order to define adapted strategies depending on pre-specified situations. Results are presented as performance curves from different combinations compared to a benchmark. If a combination outperforms the applicable benchmark in well-defined actual market conditions, it will be preferred. It is mainly shown that such investment 'rules', based on both historical data and the evolution of Smart Beta strategies, and implemented according to available specific market data, provide very interesting results, with higher return performance and lower risk. Such combinations have not been fully exploited yet, which justifies the present approach aimed at identifying the relevant elements characterizing them.
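
As one example of the building blocks being combined, the sketch below computes closed-form minimum variance weights and equal weights for a small asset universe and compares their portfolio volatility; the covariance matrix is hypothetical.

```python
# Minimal sketch: unconstrained minimum variance weights versus equal weights for
# a small asset universe. The covariance matrix is a hypothetical example.
import numpy as np

cov = np.array([[0.040, 0.006, 0.004],
                [0.006, 0.090, 0.010],
                [0.004, 0.010, 0.020]])
ones = np.ones(len(cov))

inv = np.linalg.inv(cov)
w_minvar = inv @ ones / (ones @ inv @ ones)   # w = Σ^{-1}1 / (1'Σ^{-1}1)
w_equal = ones / len(cov)                     # equal-weighted portfolio

for name, w in [("min variance", w_minvar), ("equal weight", w_equal)]:
    vol = np.sqrt(w @ cov @ w)
    print(f"{name:12s} weights={np.round(w, 3)}  portfolio vol={vol:.3%}")
```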

Keywords: smart beta, minimum variance portfolio, equal risk contribution portfolio, maximum diversification portfolio, equal weighted portfolio, combinations

Procedia PDF Downloads 328