Search results for: hybrid forecasting models
5157 Peril's Environment of Energetic Infrastructure Complex System, Modelling by the Crisis Situation Algorithms
Authors: Jiří F. Urbánek, Alena Oulehlová, Hana Malachová, Jiří J. Urbánek Jr.
Abstract:
Crisis situation investigation and modelling are introduced and carried out within a complex system of energetic critical infrastructure operating in a perilous environment. Every crisis situation and peril originates in the occurrence of an emergency or crisis event, and both require assessment of critical/crisis interfaces. An emergency event may be expected, in which case crisis scenarios for coping with it can be pre-prepared by the pertinent organizational crisis management authorities, or it may be unexpected, with no pre-prepared scenario; both cases nevertheless require operational coping by means of crisis management. The operation, forms, characteristics, behaviour and utilization of crisis management have various qualities, depending on the actual perils facing the critical infrastructure organization and on prevention and training processes. The aim is always better security and continuity of the organization, and achieving it requires finding and investigating critical/crisis zones and functions in models of the critical infrastructure organization operating in the pertinent peril environment. Our DYVELOP (Dynamic Vector Logistics of Processes) method is available for this purpose. It is necessary to derive and create an identification algorithm for critical/crisis interfaces; the locations of these interfaces flag crisis situations in models of the critical infrastructure organization. The model of a crisis situation is then displayed for a real Czech energetic critical infrastructure subject in its real peril environment. Efficient measures of this kind are necessary for infrastructure protection; they are derived for peril mitigation, for coping with crisis situations, and for environmentally friendly organizational survival, continuity and advanced possibilities of sustainable development.
Keywords: algorithms, energetic infrastructure complex system, modelling, peril's environment
Procedia PDF Downloads 402
5156 Comprehensive Machine Learning-Based Glucose Sensing from Near-Infrared Spectra
Authors: Bitewulign Mekonnen
Abstract:
Context: This scientific paper focuses on the use of near-infrared (NIR) spectroscopy to determine glucose concentration in aqueous solutions accurately and rapidly. The study compares six different machine learning methods for predicting glucose concentration and also explores the development of a deep learning model for classifying NIR spectra. The objective is to optimize the detection model and improve the accuracy of glucose prediction. This research is important because it provides a comprehensive analysis of various machine-learning techniques for estimating aqueous glucose concentrations. Research Aim: The aim of this study is to compare and evaluate different machine-learning methods for predicting glucose concentration from NIR spectra. Additionally, the study aims to develop and assess a deep-learning model for classifying NIR spectra. Methodology: The research methodology involves the use of machine learning and deep learning techniques. Six machine learning regression models, including support vector machine regression, partial least squares regression, extra tree regression, random forest regression, extreme gradient boosting, and principal component analysis-neural network, are employed to predict glucose concentration. The NIR spectra data is randomly divided into train and test sets, and the process is repeated ten times to increase generalization ability. In addition, a convolutional neural network is developed for classifying NIR spectra. Findings: The study reveals that the SVMR, ETR, and PCA-NN models exhibit excellent performance in predicting glucose concentration, with correlation coefficients (R) > 0.99 and determination coefficients (R²) > 0.985. The deep learning model achieves high macro-averaging scores for precision, recall, and F1-measure. These findings demonstrate the effectiveness of machine learning and deep learning methods in optimizing the detection model and improving glucose prediction accuracy. Theoretical Importance: This research contributes to the field by providing a comprehensive analysis of various machine-learning techniques for estimating glucose concentrations from NIR spectra. It also explores the use of deep learning for the classification of indistinguishable NIR spectra. The findings highlight the potential of machine learning and deep learning in enhancing the prediction accuracy of glucose-relevant features. Data Collection and Analysis Procedures: The NIR spectra and corresponding references for glucose concentration are measured in increments of 20 mg/dl. The data is randomly divided into train and test sets, and the models are evaluated using regression analysis and classification metrics. The performance of each model is assessed based on correlation coefficients, determination coefficients, precision, recall, and F1-measure. Question Addressed: The study addresses the question of whether machine learning and deep learning methods can optimize the detection model and improve the accuracy of glucose prediction from NIR spectra. Conclusion: The research demonstrates that machine learning and deep learning methods can effectively predict glucose concentration from NIR spectra. The SVMR, ETR, and PCA-NN models exhibit superior performance, while the deep learning model achieves high classification scores. These findings suggest that machine learning and deep learning techniques can be used to improve the prediction accuracy of glucose-relevant features.
Further research is needed to explore their clinical utility in analyzing complex matrices, such as blood glucose levels.
Keywords: machine learning, signal processing, near-infrared spectroscopy, support vector machine, neural network
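The comparison described above can be sketched with scikit-learn. The snippet below is a minimal illustration under assumed data, not the authors' code: the spectra are synthetic stand-ins, GradientBoostingRegressor stands in for extreme gradient boosting, and an MLP over principal components approximates the PCA-NN model; the ten repeated random splits and the R/R² metrics follow the abstract.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVR
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor, GradientBoostingRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score
from scipy.stats import pearsonr

# Stand-in for NIR spectra: 600 absorbance channels, glucose as the target.
X, y = make_regression(n_samples=300, n_features=600, n_informative=40,
                       noise=5.0, random_state=0)

models = {
    "SVMR":   make_pipeline(StandardScaler(), SVR(C=10.0)),
    "PLSR":   PLSRegression(n_components=10),
    "ETR":    ExtraTreesRegressor(n_estimators=200, random_state=0),
    "RFR":    RandomForestRegressor(n_estimators=200, random_state=0),
    "GBR":    GradientBoostingRegressor(random_state=0),
    "PCA-NN": make_pipeline(StandardScaler(), PCA(n_components=20),
                            MLPRegressor(hidden_layer_sizes=(32,),
                                         max_iter=2000, random_state=0)),
}

# Ten random train/test splits, as in the abstract, to gauge generalization.
for name, model in models.items():
    r_vals, r2_vals = [], []
    for seed in range(10):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                                  random_state=seed)
        y_hat = model.fit(X_tr, y_tr).predict(X_te).ravel()
        r_vals.append(pearsonr(y_te, y_hat)[0])
        r2_vals.append(r2_score(y_te, y_hat))
    print(f"{name:7s}  R={np.mean(r_vals):.3f}  R2={np.mean(r2_vals):.3f}")
```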
Procedia PDF Downloads 94
5155 Evaluation of the Effect of Turbulence Caused by the Oscillation Grid on Oil Spill in Water Column
Authors: Mohammad Ghiasvand, Babak Khorsandi, Morteza Kolahdoozan
Abstract:
Under the influence of waves, oil in the sea is subject to vertical dispersion in the water column. Scientists' knowledge of how oil disperses in the water column is among the least developed of all the processes affecting oil in the marine environment, which highlights the need for research in this field. This study therefore investigates the distribution of oil in the water column in a turbulent environment with zero mean velocity. The lack of laboratory results with which to analyze the distribution of petroleum pollutants in deep water, both for understanding the physics of the phenomenon and for calibrating numerical models, motivated the development of the laboratory models used in this research. In line with the aim of the present study, which is to investigate the distribution of oil in the homogeneous and isotropic turbulence generated by an oscillating grid, crude oil was poured onto the water surface once the ideal conditions had been reached, and its distribution into deep water due to turbulence was investigated. All of the experimental processes in this study were implemented and used for the first time in Iran, and the study of oil diffusion in the water column was considered one of the key aspects of pollutant diffusion in an oscillating-grid environment. Finally, the required oscillation velocities were measured at depths of 10, 15, 20, and 25 cm from the water surface and used in the analysis of oil diffusion due to the turbulence parameters. The results showed that, with the characteristics of the present system in the two modes, static and grid motion at a frequency of 0.8 Hz, oil diffusion at the four mentioned depths was 26.18, 31.57, 37.5, and 50% greater, from top to bottom, at 0.8 Hz than in the static mode. Also, 2.5 minutes after the oil spill at a frequency of 0.8 Hz, oil distribution at the mentioned depths increased by 49, 61.5, 85, and 146.1%, respectively, compared to the base (static) state.
Keywords: homogeneous and isotropic turbulence, oil distribution, oscillating grid, oil spill
Procedia PDF Downloads 75
5154 Lateral Torsional Buckling Investigation on Welded Q460GJ Structural Steel Unrestrained Beams under a Point Load
Authors: Yue Zhang, Bo Yang, Gang Xiong, Mohamed Elchalakanic, Shidong Nie
Abstract:
This study aims to investigate the lateral torsional buckling of I-shaped cross-section beams fabricated from Q460GJ structural steel plates. Both experimental and numerical simulation results are presented in this paper. A total of eight specimens were tested under three-point bending, and the corresponding numerical models were established to conduct parametric studies. The effects of some key parameters, such as the non-dimensional member slenderness and the height-to-width ratio, were investigated based on the verified numerical models. Also, the results obtained from the parametric studies were compared with the predictions calculated by different design codes, including the Chinese design code (GB50017-2003, 2003), the new draft version of the Chinese design code (GB50017-201X, 2012), Eurocode 3 (EC3, 2005) and the North American design code (ANSI/AISC360-10, 2010). These comparisons indicated that the sectional height-to-width ratio does not play an important role in influencing the overall stability load-carrying capacity of Q460GJ structural steel beams with welded I-shaped cross-sections. It was also found that the design methods in GB50017-2003 and ANSI/AISC360-10 overestimate the overall stability and load-carrying capacity of Q460GJ welded I-shaped cross-section beams.
Keywords: experimental study, finite element analysis, global stability, lateral torsional buckling, Q460GJ structural steel
Procedia PDF Downloads 328
5153 The Electric Car Wheel Hub Motor Work Analysis with the Use of 2D FEM Electromagnetic Method and 3D CFD Thermal Simulations
Authors: Piotr Dukalski, Bartlomiej Bedkowski, Tomasz Jarek, Tomasz Wolnik
Abstract:
The article is concerned with the design of an electric in-wheel hub motor installed in an electric car with two-wheel drive. It presents the construction of the motor on a 3D cross-section model. The authors consider a work simulation of the motor (applied to a Fiat Panda car) under selected driving conditions, such as driving on a road with a slope of 20%, driving at maximum speed, and maximum acceleration of the car from 0 to 100 km/h. The demand for drive power, taking into account the resistance to motion, was determined for the selected driving conditions. The parameters of the motor operation and the power losses in its individual elements, calculated using the 2D FEM method, are presented for the selected car driving parameters. The calculated power losses are used in 3D models for thermal calculations using the CFD method. The detailed construction of the thermal models, with material data, boundary conditions and the losses calculated using the 2D FEM method, is presented in the article. The article presents and describes the calculated temperature distributions in individual motor components such as the winding, permanent magnets, magnetic core, body, and cooling system components. The losses generated in individual motor components and their impact on the limitation of its operating parameters are described by the authors. Attention is paid to the losses generated in the permanent magnets, which are a source of heat that is difficult to remove from inside the motor. The presented calculation results show how the individual motor power losses, generated under different load conditions while driving, affect its thermal state.
Keywords: electric car, electric drive, electric motor, thermal calculations, wheel hub motor
Procedia PDF Downloads 175
5152 An Experimental (Wind Tunnel) and Numerical (CFD) Study on the Flow over Hills
Authors: Tanit Daniel Jodar Vecina, Adriane Prisco Petry
Abstract:
The shape of the wind velocity profile changes according to local features of terrain shape and roughness, which are the parameters responsible for defining the Atmospheric Boundary Layer (ABL) profile. Air flow characteristics over and around landforms, such as hills, are of considerable importance for applications related to wind farm and turbine engineering. The air flow is accelerated on top of hills, which can be a decisive factor in wind turbine placement choices. The present work focuses on the study of ABL behavior as a function of the slope and surface roughness of hill-shaped landforms, using Computational Fluid Dynamics (CFD) to build wind velocity and turbulent intensity profiles. The Reynolds-Averaged Navier-Stokes (RANS) equations are closed using the SST k-ω turbulence model; numerical results are compared to experimental data measured in a wind tunnel over scale models of the hills under consideration. Eight hill models with slopes varying from 25° to 68° were tested for two types of terrain categories in 2D and 3D, and two analytical codes were used to represent the inlet velocity profiles. Numerical results for the velocity profiles show differences under 4% when compared to their respective experimental data. Turbulent intensity profiles show maximum differences of around 7% when compared to experimental data; this can be explained by the fact that it was not possible to insert inlet turbulent intensity profiles in the simulations. Alternatively, constant values based on the averages of the turbulent intensity at the wind tunnel inlet were used.
Keywords: Atmospheric Boundary Layer, Computational Fluid Dynamic (CFD), Numerical Modeling, Wind Tunnel
Procedia PDF Downloads 380
5151 Comparison of Methods for Detecting and Quantifying Amplitude Modulation of Wind Farm Noise
Authors: Phuc D. Nguyen, Kristy L. Hansen, Branko Zajamsek
Abstract:
The existence of special characteristics of wind farm noise such as amplitude modulation (AM) contributes significantly to annoyance, which could ultimately result in sleep disturbance and other adverse health effects for residents living near wind farms. In order to detect and quantify this phenomenon, several methods have been developed which can be separated into three types: time-domain, frequency-domain and hybrid methods. However, due to a lack of systematic validation of these methods, it is still difficult to select the best method for identifying AM. Furthermore, previous comparisons between AM methods have been predominantly qualitative or based on synthesised signals, which are not representative of the actual noise. In this study, a comparison between methods for detecting and quantifying AM has been carried out. The results are based on analysis of real noise data which were measured at a wind farm in South Australia. In order to evaluate the performance of these methods in terms of detecting AM, an approach has been developed to select the most successful method of AM detection. This approach uses a receiver operating characteristic (ROC) curve which is based on detection of AM in audio files by experts.
Keywords: amplitude modulation, wind farm noise, ROC curve
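A minimal sketch of how such a comparison can be scored is shown below; it is not the authors' method. It assumes a simple time-domain detector (the prominence of the Hilbert-envelope spectrum in an assumed 0.4-2 Hz AM band) applied to synthetic noise segments, with the ROC curve computed against stand-in expert labels, mirroring the selection approach described in the abstract.

```python
import numpy as np
from scipy.signal import hilbert
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
fs = 2000                                     # sample rate (Hz), assumed
t = np.arange(int(fs * 10.0)) / fs            # 10 s segments

def segment(am: bool) -> np.ndarray:
    """Synthetic noise segment, optionally amplitude-modulated at ~0.8 Hz."""
    carrier = rng.normal(size=t.size)
    depth = rng.uniform(0.4, 0.9) if am else rng.uniform(0.0, 0.1)
    return (1.0 + depth * np.sin(2 * np.pi * 0.8 * t)) * carrier

def am_score(x: np.ndarray) -> float:
    """Time-domain AM metric: spectral peak of the Hilbert envelope in 0.4-2 Hz."""
    env = np.abs(hilbert(x))
    env = env - env.mean()
    spec = np.abs(np.fft.rfft(env)) / env.size
    f = np.fft.rfftfreq(env.size, 1 / fs)
    band = (f >= 0.4) & (f <= 2.0)
    return spec[band].max() / spec[band].mean()   # peak prominence in the AM band

labels = rng.integers(0, 2, size=200)             # stand-in for expert AM labels
scores = np.array([am_score(segment(bool(lab))) for lab in labels])

fpr, tpr, _ = roc_curve(labels, scores)
print(f"AUC of the time-domain detector: {auc(fpr, tpr):.3f}")
```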
Procedia PDF Downloads 145
5150 5G Future Hyper-Dense Networks: An Empirical Study and Standardization Challenges
Authors: W. Hashim, H. Burok, N. Ghazaly, H. Ahmad Nasir, N. Mohamad Anas, A. F. Ismail, K. L. Yau
Abstract:
Future communication networks require devices that are able to work on a single platform but support heterogeneous operations, leading to service diversity and functional flexibility. This paper proposes two cognitive mechanisms, termed cognitive hybrid functions, which are applied in multiple broadband user terminals in order to maintain reliable connectivity and prevent unnecessary interference. By employing such mechanisms, especially for future hyper-dense networks, we can observe their performance in terms of optimized speed and power-saving efficiency. Results were obtained from several empirical laboratory studies. It was found that selecting a reliable network yielded an improvement in optimized speed of up to 37% compared with operation without such a function. In terms of power adjustment, the mechanism can reduce transmit power by 5 dB while maintaining the same level of throughput achieved at the higher power. We also discuss the issues impacting future telecommunication standards whenever such devices are put in place.
Keywords: dense network, intelligent network selection, multiple networks, transmit power adjustment
Procedia PDF Downloads 376
5149 Enhancing ERP Implementation Processes in South African Retail SMEs: A Study on Operational Efficiency and Customer-Centric Approaches
Authors: Tshepo Mabotja
Abstract:
Purpose: The purpose of this study is to identify and analyse the factors influencing ERP implementation in South African SMEs in the textile & apparel retail sector, with the goal of providing insights that improve decision-making, enhance operational efficiency, and meet customer expectations. Design/Methodology/Approach: A quantitative research methodology was employed, utilising a probability (random) sampling technique to ensure equal opportunity for sample selection. The researcher conducted an extensive review of current literature to identify knowledge gaps and applied data analysis methods, including descriptive statistics, reliability tests, exploratory factor analysis, and normality testing. Findings/Results: The study revealed that South African SMEs in the textile & apparel retail industry must evaluate critical factors before implementing an ERP model. These factors include assessing client requirements, examining the experiences of existing ERP system users, understanding system maintenance needs, and forecasting expected performance outcomes. Practical Implications: The findings provide actionable recommendations for textile and apparel retail SMEs aiming to adopt ERP systems. By focusing on the identified critical factors, businesses can enhance their ERP adoption processes, reduce operational inefficiencies, and better align with customer and sustainability demands. Originality/Value: This study contributes to the limited body of knowledge on ERP implementation challenges in South African textile and apparel retail SMEs. It provides a unique perspective on how strategic ERP adoption can drive operational improvements and support sustainable development practices within the industry.
Keywords: retail SMEs, enterprise resource planning, operational efficiency, customer centricity
Procedia PDF Downloads 2
5148 Optimization of Agricultural Water Demand Using a Hybrid Model of Dynamic Programming and Neural Networks: A Case Study of Algeria
Authors: M. Boudjerda, B. Touaibia, M. K. Mihoubi
Abstract:
In Algeria, agricultural irrigation is the primary water-consuming sector, followed by the domestic and industrial sectors. Economic development over the last decade has weighed heavily on water resources, which are relatively limited and gradually decreasing to the detriment of agriculture. The research presented in this paper focuses on the optimization of irrigation water demand. The Dynamic Programming-Neural Network (DPNN) method is applied to investigate reservoir optimization. The optimal operation rule is formulated to minimize the gap between water release and irrigation water demand. As a case study, the reservoir system of the Foum El-Gherza dam in the south of Algeria was selected to examine the proposed optimization model. The application of the DPNN method allowed the satisfaction rate (SR) to be increased from 12.32% to 55%. In addition, the operation rule generated showed more reliable and resilient operation for the examined case study.
Keywords: water management, agricultural demand, dam and reservoir operation, Foum el-Gherza dam, dynamic programming, artificial neural network
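The dynamic-programming stage of such a method can be illustrated as below. This is a sketch under assumed data, not the authors' model: a single reservoir is discretized, the capacity and the monthly inflows and irrigation demands are invented figures, and backward recursion minimizes the squared gap between release and demand; in the DPNN method, a neural network would then be trained to generalize the resulting operating rule.

```python
import numpy as np

# Discretized single-reservoir DP: minimize sum of (release - demand)^2.
S_MAX = 100.0                                   # storage capacity (hm^3), assumed
storages = np.linspace(0.0, S_MAX, 51)
releases = np.linspace(0.0, 30.0, 31)
inflow = np.array([12, 10, 8, 5, 3, 2, 2, 4, 7, 10, 12, 13], float)   # monthly
demand = np.array([5, 6, 9, 14, 18, 22, 24, 20, 12, 7, 5, 4], float)  # irrigation

T = len(inflow)
cost = np.zeros((T + 1, storages.size))         # cost-to-go table, zero at horizon
policy = np.zeros((T, storages.size))

for step in range(T - 1, -1, -1):               # backward recursion
    for i, s in enumerate(storages):
        # Next storage for every candidate release (spill above capacity).
        s_next = np.clip(s + inflow[step] - releases, 0.0, S_MAX)
        feasible = releases <= s + inflow[step]  # cannot release more than available
        stage = (releases - demand[step]) ** 2
        future = np.interp(s_next, storages, cost[step + 1])
        total = np.where(feasible, stage + future, np.inf)
        k = int(np.argmin(total))
        cost[step, i] = total[k]
        policy[step, i] = releases[k]

# Simulate the optimal rule starting from a half-full reservoir.
s = S_MAX / 2
for step in range(T):
    r = np.interp(s, storages, policy[step])
    print(f"month {step + 1:2d}: release {r:5.1f} vs demand {demand[step]:5.1f}")
    s = min(max(s + inflow[step] - r, 0.0), S_MAX)
```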
Procedia PDF Downloads 115
5147 A Tool for Facilitating an Institutional Risk Profile Definition
Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan
Abstract:
This paper presents an approach for the easy creation of an institutional risk profile for the endangerment analysis of file formats. The main contribution of this work is the employment of data mining techniques to support the setup of risk factors with just the values that are most important for a particular organisation. Subsequently, the risk profile employs fuzzy models and associated configurations for the file format metadata aggregator to support digital preservation experts with a semi-automatic estimation of the endangerment level of file formats. Our goal is to make use of a domain expert knowledge base, aggregated from a digital preservation survey, in order to detect preservation risks for a particular institution. Another contribution is support for the visualisation and analysis of risk factors for a required dimension. The proposed methods improve the visibility of risk factor information and the quality of the digital preservation process. The presented approach is meant to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and file format metadata automatically aggregated from linked open data sources. To facilitate decision making, the aggregated information about the risk factors is presented as a multidimensional vector. The goal is to visualise particular dimensions of this vector for analysis by an expert. A sample risk profile calculation and the visualisation of some risk factor dimensions are presented in the evaluation section.
Keywords: digital information management, file format, endangerment analysis, fuzzy models
Procedia PDF Downloads 404
5146 Exploring the Role of Data Mining in Crime Classification: A Systematic Literature Review
Authors: Faisal Muhibuddin, Ani Dijah Rahajoe
Abstract:
This in-depth exploration, through a systematic literature review, scrutinizes the nuanced role of data mining in the classification of criminal activities. The research focuses on investigating various methodological aspects and recent developments in leveraging data mining techniques to enhance the effectiveness and precision of crime categorization. Commencing with an exposition of the foundational concepts of crime classification and its evolutionary dynamics, this study details the paradigm shift from conventional methods towards approaches supported by data mining, addressing the challenges and complexities inherent in the modern crime landscape. Specifically, the research delves into various data mining techniques, including K-means clustering, Naïve Bayes, K-nearest neighbour, and other clustering methods. A comprehensive review of the strengths and limitations of each technique provides insights into their respective contributions to improving crime classification models. The integration of diverse data sources takes centre stage in this research. A detailed analysis explores how the amalgamation of structured data (such as criminal records) and unstructured data (such as social media) can offer a holistic understanding of crime, enriching classification models with more profound insights. Furthermore, the study explores the temporal implications in crime classification, emphasizing the significance of considering temporal factors to comprehend long-term trends and seasonality. The availability of real-time data is also elucidated as a crucial element in enhancing responsiveness and accuracy in crime classification.
Keywords: data mining, classification algorithm, naïve Bayes, k-means clustering, k-nearest neighbour, crime, data analysis, systematic literature review
Procedia PDF Downloads 66
5145 Parameter Selection for Computationally Efficient Use of the BFVrns Fully Homomorphic Encryption Scheme
Authors: Cavidan Yakupoglu, Kurt Rohloff
Abstract:
In this study, we aim to provide a novel parameter selection model for the BFVrns scheme, which is one of the prominent FHE schemes. Parameter selection in lattice-based FHE schemes is a practical challenge for experts and non-experts alike. Towards a solution to this problem, we introduce a hybrid principles-based approach that combines theoretical and experimental analyses. To begin, we use regression analysis to examine the effect of the parameters on performance and security. The fact that the FHE parameters induce different behaviors in performance, security and the Ciphertext Expansion Factor (CEF) makes the process of parameter selection more challenging. To address this issue, we use a multi-objective optimization algorithm to select the optimum parameter set for performance, CEF and security at the same time. As a result of this optimization, we get an improved parameter set with better performance at a given security level, ensuring correctness and security against lattice attacks by providing at least 128-bit security. Our result enables, on average, a ~5x smaller CEF and mostly better performance in comparison to the parameter sets given in [1]. This approach can be considered semi-automated parameter selection. These studies are conducted using the PALISADE homomorphic encryption library, which is a well-known HE library.
Keywords: lattice cryptography, fully homomorphic encryption, parameter selection, LWE, RLWE
Procedia PDF Downloads 157
5144 StockTwits Sentiment Analysis on Stock Price Prediction
Authors: Min Chen, Rubi Gupta
Abstract:
Understanding and predicting stock market movements is a challenging problem. It is believed that stock markets are partially driven by public sentiment, which has led to numerous research efforts to predict stock market trends using public sentiment expressed on social media such as Twitter, but with limited success. Recently, the microblogging website StockTwits has become increasingly popular for users to share their discussions and sentiments about stocks and the financial market. In this project, we analyze the text content of StockTwits tweets and extract financial sentiment using text featurization and machine learning algorithms. StockTwits tweets are first pre-processed using techniques including stopword removal, special character removal, and case normalization to remove noise. Features are extracted from these preprocessed tweets through a text featurization process using bags of words, N-gram models, TF-IDF (term frequency-inverse document frequency), and latent semantic analysis. Machine learning models are then trained to classify each tweet's sentiment as positive (bullish) or negative (bearish). The correlation between the aggregated daily sentiment and the daily stock price movement is then investigated using Pearson's correlation coefficient. Finally, the sentiment information is applied together with time series stock data to predict stock price movement. Experiments on five companies (Apple, Amazon, General Electric, Microsoft, and Target) over a period of nine months demonstrate the effectiveness of our study in improving the prediction accuracy.
Keywords: machine learning, sentiment analysis, stock price prediction, tweet processing
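The two halves of this pipeline, tweet classification and sentiment-price correlation, can be sketched as follows. The tweets, labels and daily series below are toy stand-ins rather than StockTwits data, and logistic regression over TF-IDF n-grams is one plausible choice among the classifiers the study could use.

```python
import re
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from scipy.stats import pearsonr

# Toy labelled tweets: 1 = bullish, 0 = bearish (stand-ins for StockTwits data).
tweets = ["$AAPL breaking out, loading up more shares",
          "selling everything, this market looks weak",
          "strong earnings beat, price target raised",
          "bearish divergence, expecting a pullback",
          "new highs coming, momentum is great",
          "guidance cut again, getting out now"] * 50
labels = np.array([1, 0, 1, 0, 1, 0] * 50)

def clean(text: str) -> str:
    """Pre-processing: lower-case and drop special characters."""
    return re.sub(r"[^a-z$ ]", " ", text.lower())

vec = TfidfVectorizer(preprocessor=clean, ngram_range=(1, 2), stop_words="english")
X = vec.fit_transform(tweets)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("tweet sentiment accuracy:", accuracy_score(y_te, clf.predict(X_te)))

# Correlate an aggregated daily sentiment score with daily returns (synthetic).
rng = np.random.default_rng(0)
daily_sentiment = rng.normal(size=180)                       # mean bullishness per day
daily_return = 0.6 * daily_sentiment + rng.normal(size=180)  # returns partly driven by it
r, p = pearsonr(daily_sentiment, daily_return)
print(f"Pearson r = {r:.2f} (p = {p:.1e})")
```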
Procedia PDF Downloads 156
5143 Attribution Theory and Perceived Reliability of Cellphones for Teaching and Learning
Authors: Mayowa A. Sofowora, Seraphin D. Eyono Obono
Abstract:
The use of information and communication technologies such as computers, mobile phones and the internet is becoming prevalent in today's world, and it is facilitating access to a vast amount of data, services, and applications for the improvement of people's lives. However, this prevalence of ICTs is hampered by the problem of low income levels in developing countries, to the point where people cannot timeously replace or repair their ICT devices when they are damaged or lost; this problem serves as the motivation for this study, whose aim is to examine the perceptions of teachers on the reliability of cellphones when used for teaching and learning purposes. The research objectives unfolding this aim are of two types: objectives on the selection and design of theories and models, and objectives on the empirical testing of these theories and models. The first type of objective is achieved using content analysis in an extensive literature survey, and the second type is achieved through a survey of high school teachers from the iLembe and uMgungundlovu districts in the KwaZulu-Natal province of South Africa. Data collected from this questionnaire-based survey are analysed in SPSS using descriptive statistics and Pearson correlations, after checking the reliability and validity of the questionnaire. The main hypothesis driving this study is that there is a relationship between the demographics and the attribution identity of teachers on the one hand, and their perceptions of the reliability of cellphones on the other hand, as suggested by existing literature; except that attribution identities are considered in this study under three angles: intention, knowledge and ability, and action. The results of this study confirm that teachers' perceptions of the reliability of cellphones for teaching and learning are affected by their school location and by their perceptions of learners' cellphone usage intentions and actual use.
Keywords: attribution, cellphones, e-learning, reliability
Procedia PDF Downloads 402
5142 Review of Downscaling Methods in Climate Change and Their Role in Hydrological Studies
Authors: Nishi Bhuvandas, P. V. Timbadiya, P. L. Patel, P. D. Porey
Abstract:
Recent perceived climate variability raises concerns about unprecedented hydrological phenomena and extremes. The distribution and circulation of the Earth's waters are becoming increasingly difficult to determine because of the additional uncertainty related to anthropogenic emissions. According to the sixth Intergovernmental Panel on Climate Change (IPCC) Technical Paper, on Climate Change and Water, changes in the large-scale hydrological cycle have been related to the increase in temperature observed over several decades. Although much previous research on the effect of climate change on hydrology provides a general picture of possible hydrological global change, new tools and frameworks for modelling hydrological series with nonstationary characteristics at finer scales are required for assessing climate change impacts. Among downscaling techniques, dynamic downscaling is usually based on the use of Regional Climate Models (RCMs), which generate finer-resolution output based on atmospheric physics over a region, using General Circulation Model (GCM) fields as boundary conditions. However, RCMs are not expected to capture the observed spatial precipitation extremes at a fine cell scale or at a basin scale. Statistical downscaling derives a statistical or empirical relationship between the variables simulated by the GCMs, called predictors, and station-scale hydrologic variables, called predictands. The main focus of the paper is on the need for statistical downscaling techniques for the projection of local hydrometeorological variables under climate change scenarios. The projections can then serve as input to various hydrologic models to obtain streamflow, evapotranspiration, soil moisture and other hydrological variables of interest.
Keywords: climate change, downscaling, GCM, RCM
Procedia PDF Downloads 406
5141 Reverse Logistics End of Life Products Acquisition and Sorting
Authors: Badli Shah Mohd Yusoff, Khairur Rijal Jamaludin, Rozetta Dollah
Abstract:
The emergence of reverse logistics and product recovery management is an important concept in reconciling economic and environmental objectives by recapturing the value of end-of-life product returns. End-of-life products contain valuable modules, parts, residues and materials that can create value if recovered efficiently. The main objective of this study is to explore and develop a model that recovers as much of the economic value as reasonably possible, finding the optimal return acquisition and sorting policy to meet demand and maximize profits over time. A benefit for the remanufacturer explored in this study is the ability to forecast future demand for used products under uncertainty in the quantity and quality of returns. Formulated on the basis of a generic disassembly tree, the proposed model focuses on three reverse logistics activities, namely refurbishing, remanufacturing and disposal, incorporating all plausible quality levels of the returns. A stricter sorting policy reduces the quantity of products to be refurbished or remanufactured and increases the share of discarded products. Numerical experiments were carried out to investigate the characteristics and behaviour of the proposed model, using a mathematical programming model implemented in Lingo 16.0 for the medium-term planning of return acquisition, disassembly (refurbishing or remanufacturing) and disposal activities. Moreover, the model supports the analysis of a number of decisions relating to a trade-off management system to maximize revenue from the collection of used products through the refurbish and remanufacture recovery options. The results showed that full utilization of the sorting process leads the system to acquire a smaller quantity with minimal overall cost. Further, a sensitivity analysis provides a range of possible scenarios to consider in optimizing the overall cost of refurbished and remanufactured products.
Keywords: core acquisition, end of life, reverse logistics, quality uncertainty
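A much-simplified, single-period version of such a recovery-planning model can be written as a linear program. The sketch below uses scipy rather than Lingo, with hypothetical per-unit revenues, costs, supplies and demand caps; it only illustrates the routing of acquired returns of three quality grades to refurbishing, remanufacturing or disposal.

```python
import numpy as np
from scipy.optimize import linprog

# Quality grades of acquired cores and per-unit economics (hypothetical values).
grades = ["high", "medium", "low"]
acq_cost   = np.array([12.0, 8.0, 4.0])    # acquisition price per core
refurb_net = np.array([30.0, 22.0, 0.0])   # net revenue if refurbished
reman_net  = np.array([20.0, 16.0, 10.0])  # net revenue if remanufactured
disp_cost  = np.array([1.0, 1.0, 1.0])     # disposal cost per core
supply     = np.array([50.0, 80.0, 120.0]) # cores available per grade

# Decision variables per grade: [refurbish(3), remanufacture(3), dispose(3)].
# Objective: maximize profit -> minimize its negative.
c = np.concatenate([-(refurb_net - acq_cost),
                    -(reman_net - acq_cost),
                    disp_cost + acq_cost])

A_ub, b_ub = [], []
for g in range(3):  # routed quantity per grade cannot exceed acquired supply
    row = np.zeros(9)
    row[g] = row[3 + g] = row[6 + g] = 1.0
    A_ub.append(row)
    b_ub.append(supply[g])
# Demand caps on recovered products (refurbished, then remanufactured).
A_ub.append(np.r_[np.ones(3), np.zeros(6)]); b_ub.append(90.0)
A_ub.append(np.r_[np.zeros(3), np.ones(3), np.zeros(3)]); b_ub.append(70.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=[(0, None)] * 9)
plan = res.x.reshape(3, 3)  # rows: refurbish / remanufacture / dispose
for i, g in enumerate(grades):
    print(f"{g:6s} refurbish={plan[0, i]:6.1f} "
          f"reman={plan[1, i]:6.1f} dispose={plan[2, i]:6.1f}")
print("max profit:", -res.fun)
```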
Procedia PDF Downloads 303
5140 CFD Analysis of the Blood Flow in Left Coronary Bifurcation with Variable Angulation
Authors: Midiya Khademi, Ali Nikoo, Shabnam Rahimnezhad Baghche Jooghi
Abstract:
Cardiovascular diseases (CVDs) are the main cause of death globally. Most CVDs can be prevented by avoiding habitual risk factors. Apart from these habitual risk factors, there are some inherent factors in each individual that can increase the risk potential of CVDs. Vessel shape and geometry are influential factors, having a great impact on the blood flow and the hemodynamic behavior of the vessels. In the present study, the influence of the bifurcation angle on blood flow characteristics is studied. To approach this topic, the details of the bifurcation were simplified and three models with angles of 30°, 45°, and 60° were created; using CFD analysis, the response of these models to steady and pulsatile flow was then studied. In order to eliminate the influence of other geometrical factors, only the angle of the bifurcation was changed in the simulations, and the other parameters remained constant during the research. Simulations were conducted under dynamic and steady conditions. In the steady flow simulation, a constant velocity of 0.17 m/s was maintained at the inlet plug, and in the dynamic simulations, a typical LAD flow waveform was implemented. The results show that the bifurcation angle has an influence on the maximum speed of the flow. Under the steady flow condition, increasing the angle led to a decrease in the maximum flow velocity. In the dynamic flow simulations, increasing the bifurcation angle led to an increase in the maximum velocity. Since blood flow has pulsatile characteristics, using a uniform velocity during the simulations can lead to a discrepancy between the actual results and the calculated results.
Keywords: coronary artery, cardiovascular disease, bifurcation, atherosclerosis, CFD, artery wall shear stress
Procedia PDF Downloads 164
5139 Design of a Graphical User Interface for Data Preprocessing and Image Segmentation Process in 2D MRI Images
Authors: Enver Kucukkulahli, Pakize Erdogmus, Kemal Polat
Abstract:
2D image segmentation is a significant process for finding regions of interest in medical images such as MRI, PET, CT, etc. In this study, we have focused on 2D MRI images for the image segmentation process. We have designed a GUI (graphical user interface) written in MATLAB™ for 2D MRI images. In this program, there are two different interfaces, covering data pre-processing and image clustering or segmentation. In the data pre-processing section, there are a median filter, an average filter, an unsharp mask filter, a Wiener filter, and a custom filter (a filter that is designed by the user in MATLAB). As for the image clustering, there are seven different image segmentation algorithms for 2D MR images. These image segmentation algorithms are as follows: PSO (particle swarm optimization), GA (genetic algorithm), Lloyd's algorithm, k-means, the combination of Lloyd's algorithm and k-means, mean shift clustering, and finally BBO (biogeography-based optimization). To find a suitable cluster number in 2D MRI, we have designed a histogram-based cluster estimation method and then supplied these numbers to the image segmentation algorithms to cluster an image automatically. Also, thanks to this GUI software, we have selected the best hybrid method for each 2D MR image.
Keywords: image segmentation, clustering, GUI, 2D MRI
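The histogram-based cluster estimation feeding a segmentation algorithm can be sketched outside the GUI as follows. The image is synthetic, the smoothing bandwidth and peak-prominence threshold are assumed values, and k-means stands in for the seven algorithms offered by the tool.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic "MRI slice": three intensity classes plus noise.
img = np.zeros((128, 128))
img[20:70, 20:70] = 80
img[60:110, 60:110] = 160
img += rng.normal(0, 8, img.shape)

# Histogram-based cluster-number estimate: count smoothed histogram peaks.
hist, _ = np.histogram(img, bins=256)
smooth = gaussian_filter1d(hist.astype(float), sigma=5)
peaks, _ = find_peaks(smooth, prominence=smooth.max() * 0.05)
k = max(2, len(peaks))
print("estimated number of clusters:", k)

# Feed the estimate to k-means on pixel intensities to segment the slice.
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
    img.reshape(-1, 1))
segmented = labels.reshape(img.shape)
print("segment sizes:", np.bincount(labels))
print("centre pixel label:", segmented[64, 64])
```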
Procedia PDF Downloads 377
5138 Solving Nonconvex Economic Load Dispatch Problem Using Particle Swarm Optimization with Time Varying Acceleration Coefficients
Authors: Alireza Alizadeh, Hossein Ghadimi, Oveis Abedinia, Noradin Ghadimi
Abstract:
A Particle Swarm Optimization with Time Varying Acceleration Coefficients (PSO-TVAC) is proposed in this paper to solve the optimal economic load dispatch (ELD) problem. The proposed methodology easily takes care of solving non-convex economic load dispatch problems along with different constraints such as transmission losses, dynamic operation constraints and prohibited operating zones. The proposed approach has been implemented on the 3-machine 6-bus, IEEE 5-machine 14-bus, and IEEE 6-machine 30-bus systems and on a 13-thermal-unit power system. The proposed technique is compared with a hybrid approach for solving the ELD problem with the valve-point effect. The comparison results prove the capability of the proposed method, giving significant improvements in the generation cost for the economic load dispatch problem.
Keywords: PSO-TVAC, economic load dispatch, non-convex cost function, prohibited operating zone, transmission losses
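A minimal PSO-TVAC loop for a small ELD instance is sketched below. The three-unit cost coefficients, the valve-point terms and the penalty handling of the demand constraint are illustrative assumptions rather than the paper's test systems; the defining feature is the pair of time-varying acceleration coefficients (cognitive decreasing, social increasing over the run).

```python
import numpy as np

rng = np.random.default_rng(1)
# Three-unit ELD with valve-point effect (illustrative coefficients).
a = np.array([0.008, 0.009, 0.007]); b = np.array([7.0, 6.3, 6.8])
c = np.array([200.0, 180.0, 140.0]); e = np.array([100.0, 90.0, 80.0])
f = np.array([0.042, 0.040, 0.038])
p_min = np.array([100.0, 50.0, 80.0])
p_max = np.array([500.0, 400.0, 300.0])
demand = 850.0

def cost(P):
    valve = np.abs(e * np.sin(f * (p_min - P)))          # valve-point ripple
    fuel = a * P**2 + b * P + c + valve
    return fuel.sum() + 1e4 * abs(P.sum() - demand)      # demand as a penalty

n, iters, w0, w1 = 40, 300, 0.9, 0.4
pos = rng.uniform(p_min, p_max, (n, 3))
vel = np.zeros((n, 3))
pbest = pos.copy()
pbest_f = np.array([cost(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for it in range(iters):
    t = it / iters
    w = w0 - (w0 - w1) * t
    c1 = 2.5 - 2.0 * t      # cognitive coefficient decreases over time (TVAC)
    c2 = 0.5 + 2.0 * t      # social coefficient increases over time (TVAC)
    r1, r2 = rng.random((n, 3)), rng.random((n, 3))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, p_min, p_max)
    fvals = np.array([cost(p) for p in pos])
    better = fvals < pbest_f
    pbest[better], pbest_f[better] = pos[better], fvals[better]
    gbest = pbest[pbest_f.argmin()].copy()

print("dispatch (MW):", gbest.round(1), " total:", round(gbest.sum(), 1))
print("generation cost:", round(cost(gbest), 1))
```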
Procedia PDF Downloads 387
5137 Semi-Supervised Learning for Spanish Speech Recognition Using Deep Neural Networks
Authors: B. R. Campomanes-Alvarez, P. Quiros, B. Fernandez
Abstract:
Automatic Speech Recognition (ASR) is a machine-based process of decoding and transcribing oral speech. A typical ASR system receives acoustic input from a speaker or an audio file, analyzes it using algorithms, and produces an output in the form of text. Some speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian Mixture Models (GMMs) to determine how well each state of each HMM fits a short window of frames of coefficients that represents the acoustic input. Another way to evaluate the fit is to use a feed-forward neural network that takes several frames of coefficients as input and produces posterior probabilities over HMM states as output. Deep neural networks (DNNs) that have many hidden layers and are trained using new methods have been shown to outperform GMMs on a variety of speech recognition systems. Acoustic models for state-of-the-art ASR systems are usually trained on massive amounts of data. However, audio files with their corresponding transcriptions can be difficult to obtain, especially in the Spanish language. Hence, in these low-resource scenarios, building an ASR model is considered a complex task due to the lack of labeled data, resulting in an under-trained system. Semi-supervised learning approaches arise as necessary given the high cost of transcribing audio data. The main goal of this proposal is to develop a procedure based on acoustic semi-supervised learning for Spanish ASR systems using DNNs. This semi-supervised learning approach consists of: (a) Training a seed ASR model with a DNN using a set of audio files and their respective transcriptions. The DNN was initialized as a one-hidden-layer network, with the number of hidden layers increased during training to five. A refinement of the weight matrices and bias terms, together with Stochastic Gradient Descent (SGD) training, was also performed. The objective function was the cross-entropy criterion. (b) Decoding/testing a set of unlabeled data with the obtained seed model. (c) Selecting a suitable subset of the validated data to retrain the seed model, thereby improving its performance on the target test set. To choose the most precise transcriptions, three confidence scores or metrics based on the lattice concept (the graph cost, the acoustic cost, and a combination of both) were evaluated as the selection technique. The performance of the ASR system is measured by means of the Word Error Rate (WER). The test dataset was renewed in order to exclude the new transcriptions added to the training dataset. Some experiments were carried out in order to select the best ASR results. A comparison between a GMM-based model without retraining and the proposed DNN system was also made under the same conditions. Results showed that the semi-supervised ASR model based on DNNs outperformed the GMM model, in terms of WER, in all tested cases. The best result obtained an improvement of 6% in relative WER. Hence, these promising results suggest that the proposed technique could be suitable for building ASR models in low-resource environments.
Keywords: automatic speech recognition, deep neural networks, machine learning, semi-supervised learning
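The self-training loop at the heart of this approach can be sketched generically. The code below is not an ASR system: synthetic feature vectors stand in for acoustic data, a small MLP classifier stands in for the DNN acoustic model, the confidence threshold is an assumed value, and the maximum posterior probability stands in for the lattice-based confidence scores used to select reliably decoded utterances.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for acoustic data: small labelled seed set, large unlabelled pool.
X, y = make_classification(n_samples=3000, n_features=40, n_informative=20,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_seed, y_seed = X[:200], y[:200]
X_pool = X[200:2500]
X_test, y_test = X[2500:], y[2500:]

labelled_X, labelled_y = X_seed.copy(), y_seed.copy()
for round_ in range(3):                       # self-training rounds
    model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                          random_state=0).fit(labelled_X, labelled_y)
    print(f"round {round_}: test accuracy = {model.score(X_test, y_test):.3f}")
    if X_pool.shape[0] == 0:
        break
    proba = model.predict_proba(X_pool)
    conf = proba.max(axis=1)                  # confidence score per "utterance"
    keep = conf >= 0.95                       # select only reliably decoded data
    labelled_X = np.vstack([labelled_X, X_pool[keep]])
    labelled_y = np.concatenate([labelled_y, proba[keep].argmax(axis=1)])
    X_pool = X_pool[~keep]                    # renew the pool, as in the paper
```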
Procedia PDF Downloads 339
5136 Modelling of Pipe Jacked Twin Tunnels in a Very Soft Clay
Authors: Hojjat Mohammadi, Randall Divito, Gary J. E. Kramer
Abstract:
Tunnelling and pipe jacking in very soft soils (fat clays), even with an Earth Pressure Balance tunnel boring machine (EPBM), can cause large ground displacements. In this study, the short-term and long-term ground and tunnel response is predicted for twin, pipe-jacked, 3-meter-diameter EPBM tunnels with a narrow pillar width. Initial modelling indicated complete closure of the annulus gap at the tail shield onto the centrifugally cast, glass-fiber-reinforced, polymer mortar jacking pipe (FRP). Numerical modelling was employed to simulate the excavation and support installation sequence, examine the ground response during excavation, confirm the adequacy of the pillar width and check the structural adequacy of the installed pipe. In the numerical models, the Mohr-Coulomb constitutive model with the effect of unloading was adopted for the fat clays, while for the bedrock layer the generalized Hoek-Brown model was employed. The numerical models considered explicit excavation sequences and different levels of ground convergence prior to support installation. The well-studied excavation sequences made the analysis possible for this study on a very soft clay; otherwise, obtaining convergence in the numerical analysis would have been impossible. The predicted results indicate that the ground displacements around the tunnel and their effect on the pipe would be acceptable, despite predictions of large zones of plastic behaviour around the tunnels and within the entire pillar between them due to excavation-induced ground movements.
Keywords: finite element modeling (FEM), pipe-jacked tunneling, very soft clay, EPBM
Procedia PDF Downloads 82
5135 Interactions between Residential Mobility, Car Ownership and Commute Mode: The Case for Melbourne
Authors: Solmaz Jahed Shiran, John Hearne, Tayebeh Saghapour
Abstract:
Daily travel behavior is strongly influenced by the locations of residence, education, and employment. Hence, a change in those locations due to a move or a change of occupation leads to a change in travel behavior. Given the interactions between housing mobility and travel behavior, the hypothesis is that a mobile housing market allows households to move in response to changes in their life course, allowing them to be closer to central services, public transport facilities and workplaces, and hence reducing the time individuals spend on daily travel. Conversely, household immobility may lead to longer commutes, for example, after a change of job or a new need for services such as schools for children who have reached school age. This paper aims to investigate the association between residential mobility and travel behavior. The Victorian Integrated Survey of Travel and Activity (VISTA) data are used for the empirical analysis. Car ownership and the journey-to-work time and distance of employed people are used as indicators of travel behavior. A change of usual residence within the last five years is used to identify movers and non-movers. Statistical analyses, including regression models, are used to compare the travel behavior of movers and non-movers. The results show that travel time and distance do not differ between movers and non-movers. However, this is not the case when taking into account the residence tenure type. In addition, the car ownership rate and the number of cars owned were found to be significantly higher for non-movers. It is hoped that the results from this study will contribute to a better understanding of factors other than common socioeconomic and built environment features influencing travel behavior.
Keywords: journey to work, regression models, residential mobility, commute mode, car ownership
Procedia PDF Downloads 134
5134 Architectural Visualization: From Ancient Civilizations to the Roman Empire
Authors: Matthias Stange
Abstract:
Architectural visualization has been practiced for as long as there have been buildings. Visualization (from Latin visibilis, "visible") generally refers to bringing abstract data and relationships into a graphically, visually comprehensible form. In particular, visualization refers to the process of translating relationships that are difficult to formulate linguistically or logically into visual media (e.g., drawings or models) to make them comprehensible. Building owners have always been interested in knowing how their building will look before it is built. In the empirical part of this study, the roots of architectural visualization are examined, from the ancient civilizations to the end of the Roman Empire. Extensive literature research on architectural theory and architectural history forms the basis for this analysis. The focus of the analysis is basic research, from the emergence of the first two-dimensional drawings in the Neolithic period to the triggers of significant further developments in architectural representation, as well as their importance for subsequent methods and the transmission of knowledge over the following epochs. The analysis focuses on the development of analog methods of representation, from the first Neolithic house floor plans to the detailed Greek stone models and the paper drawings of the Roman Empire. In particular, socio-cultural, socio-political, and economic changes are analyzed as possible triggers for the development of representational media and methods. The study shows that the development of visual building representation has been driven by scientific, technological, and social developments since the emergence of the first civilizations more than 6000 years ago, beginning with the change in humans' subsistence strategy from food appropriation by hunting and gathering to food production by agriculture and livestock, and the sedentary lifestyle this required.
Keywords: ancient Greece, ancient orient, Roman Empire, architectural visualization
Procedia PDF Downloads 116
5133 The Challenges of Scaling Agile to Large-Scale Distributed Development: An Overview of the Agile Factory Model
Authors: Bernard Doherty, Andrew Jelfs, Aveek Dasgupta, Patrick Holden
Abstract:
Many companies have moved to agile and hybrid agile methodologies where portions of the Software Development Life-cycle (SDLC) and Software Test Life-cycle (STLC) can be time-boxed in order to enhance delivery speed and quality and to increase flexibility to changes in software requirements. Despite the widespread proliferation of agile practices, implementation often fails due to lack of adequate project management support, decreased motivation or fear of increased interaction. Consequently, few organizations adopt agile processes effectively, and tailoring is often required to integrate the agile methodology in large-scale environments. This paper provides an overview of the challenges in implementing an innovative large-scale tailored realization of the agile methodology termed the Agile Factory Model (AFM), with the aim of comparing and contrasting issues of specific importance to organizations undertaking large-scale agile development. The conclusions demonstrate that agile practices can be effectively translated to a globally distributed development environment.
Keywords: agile, agile factory model, globally distributed development, large-scale agile
Procedia PDF Downloads 294
5132 Lessons of Passive Environmental Design in the Sarabhai and Shodan Houses by Le Corbusier
Authors: Juan Sebastián Rivera Soriano, Rosa Urbano Gutiérrez
Abstract:
The Shodan House and the Sarabhai House (Ahmedabad, India, 1954 and 1955, respectively) are considered among the most important works Le Corbusier produced in the last stage of his career. Some academic publications study the compositional and formal aspects of their architectural design, but there has been no in-depth investigation into how the climatic conditions of this region were a determining factor in the design decisions implemented in these projects. This paper argues that Le Corbusier developed a specific architectural design strategy for these buildings based on scientific research on climate in the Indian context. This new language was informed by a pioneering study and interpretation of climatic data as a design methodology that would even involve the development of new design tools. This study investigated whether their use of climatic data meets the values and levels of accuracy obtained with contemporary instruments and tools, such as EnergyPlus weather data files and Climate Consultant. It also intended to find out whether the intentions and decisions of Le Corbusier's office were indeed appropriate and efficient for those climatic conditions by assessing these projects using BIM models and energy performance simulations from DesignBuilder. Accurate models were built using original historical data obtained through archival research. The outcome is to provide a new understanding of the environment of these houses through the combination of modern building science and architectural history. The results confirm that a model of low energy consumption was achieved in these houses. This paper contributes new evidence not only on exemplary modern architecture concerned with environmental performance but also on how it developed progressive thinking in this direction.
Keywords: bioclimatic architecture, Le Corbusier, Shodan, Sarabhai Houses
Procedia PDF Downloads 65
5131 Optimization of Reinforced Concrete Buildings According to the Algerian Seismic Code
Authors: Nesreddine Djafar Henni, Nassim Djedoui, Rachid Chebili
Abstract:
Recent decades have witnessed significant efforts to optimize different types of structures and components. The concept of cost optimization in reinforced concrete structures, which aims at minimizing financial resources while ensuring maximum building safety, involves multiple materials; the objective function for optimal design is derived from the construction cost of the steel and the concrete, which together contribute significantly to the overall weight of reinforced concrete (RC) structures. To achieve this objective, this work is devoted to optimizing the structural design of 3D RC frame buildings, integrating, for the first time, the Algerian regulations. Three different test examples were investigated to assess the efficiency of our work in optimizing RC frame buildings. The hybrid GWOPSO algorithm is used, running for 30000 generations, with the cost of the building reduced at each iteration. The building cost comprises the concrete and the reinforcement bars. As a result, the cost of the reinforced concrete structure is reduced by 30% compared with the initial design. This result means that the 3D cost-design optimization of the framed structure is successfully achieved.
Keywords: optimization, automation, API, Matlab, RC structures
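One way to hybridize GWO and PSO is sketched below; the abstract does not specify the exact blend, so this is an assumed variant in which a PSO-style velocity pulls each agent toward the grey-wolf proposal and the alpha wolf. The objective is a stand-in test function in place of the RC frame construction cost, and the population and iteration sizes are chosen so that roughly 30000 evaluations are made.

```python
import numpy as np

rng = np.random.default_rng(0)

def building_cost(x):
    """Stand-in objective; the real one would price the concrete and steel of the RC frame."""
    return np.sum((x - 1.0) ** 2) + 0.1 * np.sum(np.abs(x))

dim, n = 10, 30
iters = 30000 // n                           # ~30000 cost evaluations overall
lo, hi = -5.0, 5.0
pos = rng.uniform(lo, hi, (n, dim))
vel = np.zeros((n, dim))
fit = np.apply_along_axis(building_cost, 1, pos)

for it in range(iters):
    order = np.argsort(fit)
    alpha, beta, delta = pos[order[:3]]      # three best wolves lead the pack
    a = 2.0 * (1 - it / iters)               # GWO coefficient decays to zero
    # GWO move: average of attractions toward alpha, beta and delta.
    gwo = np.zeros_like(pos)
    for leader in (alpha, beta, delta):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        A, C = 2 * a * r1 - a, 2 * r2
        gwo += leader - A * np.abs(C * leader - pos)
    gwo /= 3.0
    # PSO-style blend: velocity pulls each agent toward the GWO proposal and alpha.
    r3, r4 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.5 * vel + 1.5 * r3 * (gwo - pos) + 1.5 * r4 * (alpha - pos)
    pos = np.clip(pos + vel, lo, hi)
    fit = np.apply_along_axis(building_cost, 1, pos)

print("best cost:", round(fit.min(), 4))
```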
Procedia PDF Downloads 49
5130 The Effectiveness of a Hybrid Diffie-Hellman-RSA-Advanced Encryption Standard Model
Authors: Abdellahi Cheikh
Abstract:
With the emergence of quantum computers with very powerful capabilities, the security of the exchange of shared keys between two interlocutors poses a big problem in light of the rapid development of technologies such as computing power and computing speed. Therefore, the Diffie-Hellman (DH) algorithm is more vulnerable than ever: no mechanism guarantees the security of the key exchange, so if an intermediary manages to intercept it, the exchange is easily compromised. In this regard, several studies have been conducted to improve the security of key exchange between two interlocutors, which has led to interesting results. The modification made to our Diffie-Hellman-RSA-AES (DRA) model, which encrypts the information exchanged between two users using the three encryption algorithms DH, RSA and AES, consists of using steganographic images to hide the contents of the p, g and ClesAES values that are otherwise sent unencrypted in the DRA model in order to calculate each user's public key. This work includes a comparative study between the DRA model and existing solutions, as well as the modification made to this model, with an emphasis on reliability in terms of security. This study presents a simulation to demonstrate the effectiveness of the modification made to the DRA model. The obtained results show that our model has a security advantage over the existing solutions, as these changes were made to reinforce the security of the DRA model.
Keywords: Diffie-Hellman, DRA, RSA, advanced encryption standard
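The three-layer idea (DH for key agreement, RSA for authenticity, AES for the payload) can be sketched with the Python cryptography package; the steganographic hiding of p, g and ClesAES is omitted. This is an illustrative composition under assumed choices (PSS signatures, HKDF key derivation, AES-GCM, an invented info label), not the DRA implementation.

```python
import os
from cryptography.hazmat.primitives.asymmetric import dh, rsa, padding
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# --- Diffie-Hellman: both parties derive the same shared secret ---
params = dh.generate_parameters(generator=2, key_size=2048)  # slow: fresh p, g
alice_dh, bob_dh = params.generate_private_key(), params.generate_private_key()
shared = alice_dh.exchange(bob_dh.public_key())

# --- RSA: Alice signs her DH public key so Bob can detect an interceptor ---
alice_rsa = rsa.generate_private_key(public_exponent=65537, key_size=2048)
dh_pub_bytes = alice_dh.public_key().public_bytes(
    serialization.Encoding.DER, serialization.PublicFormat.SubjectPublicKeyInfo)
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = alice_rsa.sign(dh_pub_bytes, pss, hashes.SHA256())
# Bob's check; raises InvalidSignature if the DH key was tampered with.
alice_rsa.public_key().verify(signature, dh_pub_bytes, pss, hashes.SHA256())

# --- AES: the DH secret is stretched into an AES-256-GCM session key ---
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"dra-session").derive(shared)
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"message between the two interlocutors", None)
print(AESGCM(key).decrypt(nonce, ciphertext, None))
```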
Procedia PDF Downloads 93
5129 Research on Evaluation of Renewable Energy Technology Innovation Strategy Based on PMC Index Model
Abstract:
Renewable energy technology innovation is an important way to realize the energy transition. Our government has issued a series of policies to guide and support the development of renewable energy, and the implementation of these policies will affect the further development, utilization and technological innovation of renewable energy. In this context, it is of great significance to systematically review and evaluate renewable energy technology innovation policy in order to improve the existing policy system. Taking the 190 renewable energy technology innovation policies issued during 2005-2021 as a sample, this study analyzes the current state of the policies from the perspectives of the issuing departments and the policy keywords, using text mining and content analysis methods, and conducts a semantic network analysis to identify the core issuing departments and core policy topic words. A PMC (Policy Modeling Consistency) index model is built to quantitatively evaluate the selected policies: the PMC index reflects the overall pros and cons of a policy, and the values of the model's secondary indices reflect the performance of the policies issued by the core departments on each dimension related to the core topic words. The research results show that renewable energy technology innovation policies emphasize synergy between multiple departments, while the distribution of issuers over the promulgation period is uneven; policies related to different topics have their own emphases in terms of policy types, fields, functions, and support measures, but still need improvement, for example, the lack of policy forecasting and supervision functions, the lack of attention to product promotion, and relatively undiversified support measures. Finally, this research puts forward policy optimization suggestions in terms of promoting joint policy release, strengthening policy coherence and timeliness, enhancing the comprehensiveness of policy functions, and enriching incentive measures for renewable energy technology innovation.
Keywords: renewable energy technology innovation, content analysis, policy evaluation, PMC index model
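The PMC computation itself is simple enough to sketch. In the usual formulation, each secondary variable is scored 0 or 1 from the policy text, each first-level variable takes the mean of its secondary scores, and the PMC index is the sum of the first-level scores; the variable names and scores below are invented for illustration.

```python
import numpy as np

# PMC index sketch: keys = first-level variables, values = binary scores of
# their secondary variables (1 = criterion satisfied by the policy text).
secondary_scores = {
    "policy nature":      [1, 1, 0, 1],
    "policy timeliness":  [1, 0, 1],
    "policy function":    [1, 1, 1, 0, 0],   # e.g. forecasting/supervision missing
    "policy field":       [1, 1, 1, 1],
    "incentive measures": [1, 0, 0, 1],
}

# Each first-level score X_t is the mean of its secondary scores; the PMC
# index is the sum of the first-level scores (here out of a maximum of 5).
first_level = {k: np.mean(v) for k, v in secondary_scores.items()}
pmc_index = sum(first_level.values())

for name, val in first_level.items():
    print(f"{name:20s} {val:.2f}")
print(f"PMC index = {pmc_index:.2f} / {len(first_level)}")
```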
Procedia PDF Downloads 65
5128 Dissolution Kinetics of Chevreul’s Salt in Ammonium Chloride Solutions
Authors: Mustafa Sertçelik, Turan Çalban, Hacali Necefoğlu, Sabri Çolak
Abstract:
In this study, the solubility of Chevreul's salt and its dissolution kinetics in ammonium chloride solutions were investigated. The Chevreul's salt used in the studies was obtained under the optimum conditions determined by T. Çalban et al. (ammonium sulphide concentration: 0.4 M; copper sulphate concentration: 0.25 M; temperature: 60°C; stirring speed: 600 rev/min; pH: 4; and reaction time: 15 min). The solubility of Chevreul's salt in ammonium chloride solutions and the kinetics of dissolution were investigated. The parameters selected as affecting the solubility were reaction temperature, ammonium chloride concentration, stirring speed, and solid/liquid ratio. Correlation of the experimental results was achieved using linear regression implemented in the statistical package Statistica. The effect of the parameters on the solubility of Chevreul's salt was examined, and the integrated rate expression of the dissolution rate was found using the kinetic models for solid-liquid heterogeneous reactions. The results revealed, consistent with the fitted rate expression below, that the dissolution rate of Chevreul's salt increases with increasing temperature, ammonium chloride concentration and stirring speed, and decreases with increasing solid/liquid ratio. Based on the application of the experimental results to the kinetic models, we can deduce that the dissolution rate of Chevreul's salt is controlled by diffusion through the ash (or product) layer. The activation energy of the dissolution reaction was found to be 74.83 kJ/mol. The integrated rate expression, incorporating the effects of the parameters on the solubility of Chevreul's salt, was found to be as follows: 1 − 3(1−X)^(2/3) + 2(1−X) = [2.96×10¹³ · (C_A)^3.08 · (S/L)^(−0.38) · (W)^1.23 · e^(−9001.2/T)] · t
Keywords: Chevreul's salt, copper, ammonium chloride, ammonium sulphide, dissolution kinetics
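The reported model can be evaluated directly: given the operating parameters, the rate constant follows from the bracketed term (its exponential corresponds to Ea = 9001.2 × R ≈ 74.83 kJ/mol), and the conversion X at time t is obtained by inverting the shrinking-core expression numerically. The input values in the demonstration are illustrative only; the units of C_A, S/L, W and T are assumed to follow the paper's fitted correlation.

```python
import numpy as np
from scipy.optimize import brentq

def rate_constant(C_A, S_L, W, T):
    """k in g(X) = k*t from the fitted model; T in kelvin."""
    return 2.96e13 * C_A**3.08 * S_L**-0.38 * W**1.23 * np.exp(-9001.2 / T)

def g(X):
    """Integrated shrinking-core expression for product-layer diffusion control."""
    return 1 - 3 * (1 - X)**(2 / 3) + 2 * (1 - X)

def conversion(t_min, C_A, S_L, W, T):
    """Solve g(X) = k*t for the dissolved fraction X at time t (minutes)."""
    kt = rate_constant(C_A, S_L, W, T) * t_min
    if kt >= g(1.0):                 # model predicts complete dissolution
        return 1.0
    return brentq(lambda X: g(X) - kt, 0.0, 1.0)

# Illustrative parameter set (units per the paper's correlation, assumed).
for t in (15, 30, 60, 120):
    X = conversion(t, C_A=0.1, S_L=0.5, W=5.0, T=290.0)
    print(f"t = {t:4d} min -> X = {X:.3f}")
```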
Procedia PDF Downloads 308