Search results for: two dimensional model
3734 Contribution to the Study and Optimal Exploitation of a Solar Power System for a Semi-Arid Zone (Case Study: Ferkene, Algeria)
Authors: D. Dib, W. Guebabi, M. B. Guesmi
Abstract:
The objective of this paper is to contribute to the study of a solar power supply system for Ferkène, a commune in the semi-arid zone north of the Algerian desert. The optimal exploitation of the system proceeds through the essential stages of study and design: the choice of the photovoltaic panel model, the study of its behavior with all the parameters involved in the simulation, and the determination of the maximum power point tracking (MPPT) strategy. Together these form the essential platform for designing the stand-alone solar system set up to supply the town of Ferkène without relying on the grid. The characterization of the commune of Ferkène through the collection of geographical, meteorological, demographic and electrical data provides a uniform and important database. The results constitute a representative model for any attempt to study and design a solar system supplying an arid or semi-arid zone with electrical energy from photovoltaic panels.
Keywords: Solar power, photovoltaic panel, Boost converter, supply, design, electric power, Ferkène, Algeria.
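The MPPT stage mentioned above is commonly realized with a perturb-and-observe controller driving the boost converter duty cycle. The sketch below illustrates that logic on a toy P-V curve; the curve, step size and iteration count are illustrative assumptions, not the panel model or algorithm selected in the paper.

```python
# A minimal sketch of perturb-and-observe (P&O) maximum power point tracking.
# The PV model below is a crude illustrative curve, not the study's panel model.
def pv_power(v: float) -> float:
    """Toy P-V curve with a single maximum near 16 V (illustrative only)."""
    i = max(0.0, 5.0 * (1 - (v / 21.0) ** 8))     # crude current roll-off near open circuit
    return v * i

def perturb_and_observe(v: float = 12.0, step: float = 0.1, iters: int = 500) -> float:
    p_prev, direction = pv_power(v), +1
    for _ in range(iters):
        v += direction * step                      # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:                             # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

print(f"operating voltage near the MPP: {perturb_and_observe():.2f} V")
```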
3733 Parametric Characterization of Load Capacity of Infinitely Wide Parabolic Slider Bearing with Couple Stress Fluids
Authors: Oladeinde Mobolaji Humphrey, Akpobi John
Abstract:
A mathematical model for the hydrodynamic lubrication of parabolic slider bearings with couple stress lubricants is presented. A numerical solution of the mathematical model is obtained with a finite element scheme using three-node isoparametric quadratic elements. The stiffness integrals obtained from the weak form of the governing equations were evaluated by Gauss quadrature to obtain a finite number of stiffness matrices. The global system of equations was assembled for the bearing and solved using the Gauss-Seidel iterative scheme. The converged pressure solution was used to obtain the load capacity of the bearing. Parametric studies were carried out, and it was shown that the effect of couple stresses and of the profile parameter is to increase the load-carrying capacity of the parabolic slider bearing. Numerical experiments reveal that the magnitude of the profile parameter at which the maximum load is obtained increases as the couple stress parameter decreases. The results are presented in graphical form.
Keywords: Finite element, numerical, parabolic slider.
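The final solution step described above, iterating the assembled global system to a converged pressure field, can be illustrated with a plain Gauss-Seidel loop. The sketch below is a minimal stand-in; the 3×3 matrix is a placeholder, not a bearing stiffness matrix from the paper.

```python
# A minimal sketch of the Gauss-Seidel iteration for a global system K p = f.
import numpy as np

def gauss_seidel(K: np.ndarray, f: np.ndarray, tol: float = 1e-10, max_iter: int = 10_000):
    p = np.zeros_like(f, dtype=float)
    for _ in range(max_iter):
        p_old = p.copy()
        for i in range(len(f)):
            sigma = K[i, :i] @ p[:i] + K[i, i + 1:] @ p_old[i + 1:]
            p[i] = (f[i] - sigma) / K[i, i]
        if np.linalg.norm(p - p_old, np.inf) < tol:   # converged pressure solution
            break
    return p

K = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])  # placeholder matrix
f = np.array([1.0, 2.0, 1.0])
print(gauss_seidel(K, f))        # agrees with np.linalg.solve(K, f) to the tolerance
```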
3732 Research on Simulation Model of Collision Force between Floating Ice and Pier
Authors: Tianlai Yu, Zhengguo Yuan, Sidi Shan
Abstract:
Adopting the measured stress-strain constitutive relationship of river ice, a finite element model of the collision force between river ice and a bridge pier is established with the explicit dynamic analysis software package LS-DYNA. The effects of element types, the contact method and algorithm between ice and pier, the coupling modes between different elements, the mesh density of the pier, and the ice sheet in the contact area on the collision force are studied. The following measures are proposed for the collision force analysis of river ice and pier: the bridge girder can adopt the 3-node BEAM161 element; the pier below the line 1.30 m above the ice surface, and the ice sheet, use the 8-node SOLID164 element; in order to connect the different element types, a rigid body 0.01-0.05 m thick is defined between SOLID164 and BEAM161; the contact between ice and pier adopts AUTOMATIC_SURFACE_TO_SURFACE with the symmetric penalty function algorithm; and the mesh size of the pier below the line 1.30 m above the ice surface should be no smaller than 0.25 m × 0.25 m × 0.5 m. A comparison between measured and computed data shows that the simulation results have high precision. The research results can serve as a reference for collision force studies between river ice and piers.
Keywords: River ice, collision force, simulation analysis, ANSYS/LS-DYNA.
3731 Study on Cross-flow Heat Transfer in Fixed Bed
Authors: Hong-fang Ma, Hai-tao Zhang, Wei-yong Ying, Ding-ye Fang
Abstract:
A radial flow reactor for large-scale methanol synthesis, in which the heat transfer type is cross-flow, was the focus of this work. The effects of the operating conditions, including the reactor inlet air temperature, the heating pipe temperature and the air flow rate, on the cross-flow heat transfer were investigated. The results showed that the temperature profile of the area in front of the heating pipe was only slightly affected by all the operating conditions; the main area whose temperature profile was influenced was the area behind the heating pipe, and the heat transfer direction follows the air flow direction. In order to provide a basis for radial flow reactor design calculations, the dimensionless number group method was used to fit the bed effective thermal conductivity and the wall heat transfer coefficient, calculated by the mathematical model, against the product of the Reynolds number and the Prandtl number. The comparison of experimental data and calculated values showed that the calculated values fit the experimental data very well, and the resulting formulas can be used for reactor design calculations.
Keywords: Cross-flow, heat transfer, fixed bed, mathematical model.
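The dimensionless-group fitting described above, expressing a transport parameter as a function of the product Re·Pr, can be sketched as a simple least-squares fit. The linear correlation form and the data below are illustrative assumptions; the paper does not state the exact correlation form it adopts.

```python
# A minimal sketch of correlating an effective thermal conductivity with Re*Pr
# by least squares. The linear form and the data are illustrative only.
import numpy as np

re_pr = np.array([200.0, 400.0, 800.0, 1200.0, 1600.0, 2000.0])   # Re * Pr
k_eff = np.array([0.9, 1.3, 2.1, 2.8, 3.6, 4.3])                  # W/(m*K), synthetic

A = np.vstack([np.ones_like(re_pr), re_pr]).T        # model: k_eff = a + b*(Re*Pr)
(a, b), *_ = np.linalg.lstsq(A, k_eff, rcond=None)
print(f"k_eff ~ {a:.3f} + {b:.5f} * Re*Pr")
```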
3730 Rheological Characteristics of Ice Slurries Based on Propylene- and Ethylene-Glycol at High Ice Fractions
Authors: Senda Trabelsi, Sébastien Poncet, Michel Poirier
Abstract:
Ice slurries are considered a promising phase-changing secondary fluid for air-conditioning, packaging or cooling industrial processes. An experimental study has been carried out here to measure the rheological characteristics of ice slurries. Ice slurries consist of a solid phase (flake ice crystals) and a liquid phase. The latter is composed of a mixture of liquid water and an additive, here either (1) propylene glycol (PG) or (2) ethylene glycol (EG), used to lower the freezing point of water. Concentrations of 5%, 14% and 24% of both additives are investigated, with ice mass fractions ranging from 5% to 85%. The rheological measurements are carried out using a Discovery HR-2 rheometer with a vane-concentric cylinder geometry with four full-length blades. The experimental results show that the behavior of ice slurries is generally non-Newtonian, with shear-thinning or shear-thickening behavior depending on the experimental conditions. In order to determine the consistency and the flow index, the Herschel-Bulkley model is used to describe the behavior of the ice slurries. The present results are finally validated against an experimental database found in the literature and the predictions of an artificial neural network model.
Keywords: Ice slurry, propylene-glycol, ethylene-glycol, rheology, artificial neural network.
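For reference, the Herschel-Bulkley law invoked above relates the shear stress to the shear rate through a yield stress, a consistency and a flow index (n < 1 shear-thinning, n > 1 shear-thickening):

$$ \tau = \tau_0 + K\,\dot{\gamma}^{\,n} $$

where $\tau_0$ is the yield stress, $K$ the consistency and $n$ the flow index fitted to the measured flow curves.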
3729 Improving Decision Support for Organ Transplant
Authors: I. McCulloh, A. Placona, D. Stewart, D. Gause, K. Kiernan, M. Stuart, C. Zinner, L. Cartwright
Abstract:
We find in our data that an alarming number of viable deceased-donor kidneys are discarded every year in the US, while waitlisted candidates are dying every day. We observe that as many as 85% of transplanted organs were refused at least once for a patient who scored higher on the match list. There are hundreds of clinical variables involved in making a clinical transplant decision, and there is rarely an ideal match. Decision makers exhibit an optimism bias whereby they may refuse an organ offer assuming a better match is imminent. We propose a semi-parametric Cox proportional hazards model, augmented by an accelerated failure time model based on patient-specific suitable organ supply and demand, to estimate the time to next offer. Performance is assessed with Cox-Snell residuals and decision curve analysis, demonstrating improved decision support for up to a 5-year outlook. Providing clinical decision-makers with quantitative evidence of likely patient outcomes (e.g., time to next offer and the mortality associated with waiting) may improve decisions and reduce optimism bias, thus reducing discarded organs and matching more patients on the waitlist.
Keywords: Decision science, KDPI, optimism bias, organ transplant.
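A minimal sketch of the survival-modelling step, fitting a Cox proportional hazards model to time-to-next-offer data, is shown below using the open-source lifelines package; the column names and toy data are hypothetical, and the paper's augmentation with an accelerated failure time model is not reproduced.

```python
# Fitting a Cox proportional hazards model to a hypothetical offer-history table.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time_to_offer": [30, 12, 45, 7, 60, 21, 90, 14],   # days until next offer (or censoring)
    "offer_received": [1, 1, 0, 1, 0, 1, 0, 1],          # event indicator
    "blood_type_O": [1, 0, 1, 0, 1, 1, 0, 0],            # illustrative covariates
    "cpra": [0.10, 0.85, 0.40, 0.05, 0.95, 0.30, 0.70, 0.20],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_offer", event_col="offer_received")
cph.print_summary()                   # hazard ratios for each covariate
print(cph.predict_median(df))         # median predicted time to next offer per candidate
```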
3728 Design and Operation of a Multicarrier Energy System Based On Multi Objective Optimization Approach
Authors: Azadeh Maroufmashat, Sourena Sattari Khavas, Halle Bakhteeyar
Abstract:
Multi-energy systems can enhance system reliability and power quality. This paper presents an integrated approach for the design and operation of distributed energy resource (DER) systems based on energy hub modeling. A multi-objective optimization model is developed, considering an integrated view of the electricity and natural gas networks, to analyze the optimal design and operating conditions of DER systems under two conflicting objectives, namely the minimization of total cost and the minimization of environmental impact, the latter assessed in terms of CO2 emissions. The mathematical model considers the energy demands of the site, local climate data and the utility tariff structure, as well as the technical and financial characteristics of the candidate DER technologies. To meet the energy demands, energy systems including photovoltaic and co-generation systems, a boiler and the central power grid are considered. As an illustrative example, a hotel in Iran demonstrates potential applications of the proposed method. The results show that increasing the satisfaction degree of the environmental objective leads to an increased total cost.
Keywords: Multi objective optimization, DER systems, Energy hub, Cost, CO2 emission.
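One standard way to trace the cost-versus-CO2 trade-off described above is to minimize cost subject to an emissions cap and sweep that cap (the epsilon-constraint method). The sketch below uses the open-source PuLP modeller; all technologies, costs, emission factors, capacities and demands are illustrative assumptions, not the case-study data.

```python
# A minimal epsilon-constraint dispatch sketch: minimize cost subject to a CO2 cap.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpStatus, value

techs = ["grid", "pv", "chp", "boiler"]
cost = {"grid": 0.12, "pv": 0.04, "chp": 0.09, "boiler": 0.07}   # $/kWh (assumed)
co2 = {"grid": 0.60, "pv": 0.00, "chp": 0.35, "boiler": 0.25}    # kg/kWh (assumed)
cap = {"grid": 1000, "pv": 150, "chp": 300, "boiler": 400}       # kWh (assumed)
demand, co2_cap = 500, 200                                        # kWh, kg (epsilon)

prob = LpProblem("energy_hub_dispatch", LpMinimize)
x = {t: LpVariable(f"x_{t}", lowBound=0, upBound=cap[t]) for t in techs}

prob += lpSum(cost[t] * x[t] for t in techs)              # objective: total cost
prob += lpSum(x[t] for t in techs) == demand              # meet the energy demand
prob += lpSum(co2[t] * x[t] for t in techs) <= co2_cap    # emission bound (epsilon)

prob.solve()
print(LpStatus[prob.status], {t: x[t].value() for t in techs}, value(prob.objective))
```

Sweeping co2_cap over a range traces a Pareto front, which is consistent with the abstract's finding that tightening the environmental objective raises the total cost.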
3727 Design and Analysis of Fault-Tolerant Feature of n-Phase Induction Motor Drive
Authors: G. Renuka Devi
Abstract:
This paper presents the design and analysis of the fault-tolerant feature of an n-phase induction motor drive. The n-phase induction motor (more than 3 phases) has a number of advantages over the conventional 3-phase induction motor: it has low torque pulsation with increased torque density, a stronger fault-tolerant capability, and low current ripple with increased efficiency. Increasing the number of phases reduces the current per phase without increasing the per-phase voltage, resulting in an increase in the total power rating of n-phase motors for the same machine volume. In this paper, the theory of operation of a multi-phase induction motor is discussed, and the d-q modeling of n-phase induction motors is studied in detail. The d-q model of n-phase (5, 6, 7, 9 and 12) induction motors is developed in a MATLAB/Simulink environment. The steady-state and dynamic performance of the multi-phase induction motor is studied under varying load conditions, and a comparison for the 5-phase induction motor is presented under normal and fault conditions.
Keywords: d-q model, dynamic response, fault-tolerant feature, MATLAB/Simulink, multi-phase induction motor, transient response.
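The first step of the d-q modelling referred to above is a decoupling (generalized Clarke-type) transformation of the n phase variables. The sketch below builds one common power-invariant form of that matrix for an odd phase number; the exact convention varies between authors and is an assumption here, not necessarily the matrix used in the paper.

```python
# A minimal sketch of a generalized decoupling transformation for an odd-phase machine.
import numpy as np

def decoupling_matrix(n: int) -> np.ndarray:
    """Orthonormal matrix mapping n phase variables to (alpha, beta) pairs per
    harmonic subspace plus one zero-sequence component (odd n only)."""
    assert n % 2 == 1, "sketch limited to odd phase numbers (5, 7, 9, ...)"
    a = 2 * np.pi / n
    rows = []
    for k in range(1, (n - 1) // 2 + 1):
        rows.append([np.cos(k * i * a) for i in range(n)])
        rows.append([np.sin(k * i * a) for i in range(n)])
    rows.append([1 / np.sqrt(2)] * n)                      # zero-sequence row
    return np.sqrt(2 / n) * np.array(rows)

T = decoupling_matrix(5)
i_phase = np.cos(2 * np.pi / 5 * np.arange(5))             # balanced 5-phase currents
print(np.round(T @ i_phase, 3))                            # energy maps into the first alpha-beta pair only
```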
3726 Information Transmission between Large and Small Stocks in the Korean Stock Market
Authors: Sang Hoon Kang, Seong-Min Yoon
Abstract:
Little attention has been paid to information transmission between the portfolios of large stocks and small stocks in the Korean stock market. This study investigates the return and volatility transmission mechanisms between large and small stocks on the Korea Exchange (KRX). It also explores whether bad news in the large-stock market leads to volatility in the small-stock market that is larger than the volatility produced by good news in the large-stock market. By employing the Granger causality test, we found unidirectional return transmission from the large stocks to the medium and small stocks. This evidence indicates that past information about the large stocks has a better ability to predict the returns of the medium and small stocks in the Korean stock market. Moreover, by using the asymmetric GARCH-BEKK model, we observed a unidirectional relationship of asymmetric volatility transmission from the large stocks to the medium and small stocks. This finding suggests that volatility in the medium and small stocks following a negative shock in the large stocks is larger than that following a positive shock in the large stocks.
Keywords: Asymmetric GARCH-BEKK model, asymmetric volatility transmission, causality, Korean stock market, spillover effect.
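A minimal sketch of the Granger-causality step is shown below using statsmodels on synthetic return series; the real inputs are KRX size-sorted portfolio returns, and the asymmetric GARCH-BEKK estimation is not reproduced here.

```python
# Testing whether "large" Granger-causes "small" on synthetic return series.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
large = rng.normal(0, 1, n)
# Small-cap returns lag large-cap returns by construction in this toy example.
small = 0.4 * np.roll(large, 1) + rng.normal(0, 1, n)
df = pd.DataFrame({"small": small, "large": large}).iloc[1:]

# Null hypothesis: the second column ("large") does NOT Granger-cause the first ("small").
res = grangercausalitytests(df[["small", "large"]], maxlag=5)
```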
3725 The Relationship between Interpersonal Relationship and the Subjective Well-Being of Chinese Primary and Secondary Teachers: A Mediated Moderation Model
Authors: Xuling Zhang, Yong Wang, Xingyun Liu, Shuangxue Xu
Abstract:
Based on positive psychology, this study presents a mediated moderation model in which character strengths moderate the relationship between interpersonal relationship, job satisfaction and subjective well-being, with job satisfaction taking the mediating role among them. A total of 912 teachers participated and completed four instruments: the Oxford Happiness Questionnaire, the Values in Action Inventory of Strengths, a job satisfaction questionnaire, and an interpersonal relationship questionnaire. The results indicated that: (1) taking interpersonal relationship as a typical work environment variable, it is significantly correlated with subjective well-being; (2) the character strengths of “kindness” and “authenticity” moderated the effect of the teachers’ interpersonal relationships on subjective well-being; (3) the teachers’ job satisfaction mediated the above-mentioned moderation effects. In general, this study shows that the teachers’ interpersonal relationships affect their subjective well-being, with their job satisfaction as mediator and the character strengths of “kindness” and “authenticity” as moderators. The managerial implications are also discussed.
Keywords: Character strength, subjective well-being, job satisfaction, interpersonal relationship.
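The moderation and mediation structure described above is typically tested with interaction-term regressions. The sketch below shows that idea on simulated data with statsmodels; the variable names, effect sizes and the simple two-step check are illustrative assumptions, not the authors' analysis.

```python
# A minimal sketch of a moderation test (interaction term) plus a mediation check.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 912
d = pd.DataFrame({
    "relationship": rng.normal(0, 1, n),   # interpersonal relationship score
    "kindness": rng.normal(0, 1, n),       # character strength (moderator)
})
d["job_sat"] = 0.5 * d.relationship + rng.normal(0, 1, n)          # mediator
d["swb"] = (0.3 * d.relationship + 0.2 * d.kindness
            + 0.15 * d.relationship * d.kindness
            + 0.4 * d.job_sat + rng.normal(0, 1, n))               # outcome

# Moderation: significant relationship x kindness interaction on well-being.
print(smf.ols("swb ~ relationship * kindness", data=d).fit().summary())
# Mediation check: does adding job satisfaction absorb part of the effect?
print(smf.ols("swb ~ relationship * kindness + job_sat", data=d).fit().params)
```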
3724 Organizational Management Model based on Knowledge Management, Talent Management and Technology Management Framework “Gomak”
Authors: Nieto Bernal W., Luna Amaya C.
Abstract:
This paper presents a framework for organizational knowledge management which seeks to deploy a standardized structure for the integrated management of knowledge, with a common language based on domains, processes and global indicators inspired by the COBIT 5 framework (ISACA, 2012), and which supports the integration of three technologies: enterprise information architecture (EIA), business process modeling (BPM) and service-oriented architecture (SOA). The Gomak framework is a management platform that seeks to integrate the information technology infrastructure, the application structure, the information infrastructure, and the business logic and business model in order to support a sound strategy of organizational knowledge management, following a process-based approach and concurrent engineering. Concurrent engineering (CE) is a systematic approach to integrated product development that responds to customer expectations by involving all perspectives in parallel, from the beginning of the product life cycle (European Space Agency, 2000).
Keywords: Business process modeling, enterprise information architecture, government and knowledge management, service-oriented architecture, process management.
3723 A Unified Approach for Naval Telecommunication Architectures
Authors: Y. Lacroix, J.-F. Malbranque
Abstract:
We present a chronological evolution of naval telecommunication networks. We distinguish several periods: without and with multiplexers, with switch systems, with federative systems, with medium switching, and with medium switching plus wireless networks. This highlights the introduction of new layers and technologies in the architecture. These architectures are presented using layered transmission models, in a unified way, which enables us to integrate pre-existing models. A ship of a naval fleet has internal communications (i.e. the applications' networks on board) and external communications (i.e. the use of the means of transmission between ships and shore). We propose architectures, deduced from the layered model, which form the point of convergence between the on-board networks and the HF, UHF radio and satellite resources. This modelling allows us to consider end-to-end naval communications in a more global way, that is from the user on board towards the user on shore, including the transmission and the networks on the shore side. The new architectures need to take care of the quality of service of end-to-end communications, as remote control is developing considerably and will continue to do so in the future. Naval telecommunications will become more and more complex and will use more and more advanced technologies, so it will be necessary to establish clear global communication schemes to guarantee the consistency of the architectures. Our latest model has been implemented in a military naval situation, and serves as the basic architecture for the RIFAN2 network.
Keywords: Equilibrium beach profile, eastern tombolo of Giens, potential function, erosion.
3722 A New Approach to Face Recognition Using Dual Dimension Reduction
Authors: M. Almas Anjum, M. Younus Javed, A. Basit
Abstract:
In this paper a new approach to face recognition is presented that achieves a double dimension reduction, making the system computationally efficient, with better recognition results, and outperforming the common DCT technique of face recognition. In pattern recognition techniques, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results change with the face image resolution and are optimal at a certain resolution level. In the proposed face recognition model, an image decimation algorithm is first applied to the face image for dimension reduction down to the resolution level which provides the best recognition results. Due to its computational speed and feature extraction potential, the Discrete Cosine Transform (DCT) is then applied to the face image, and a subset of DCT coefficients from low to mid frequencies that represents the face adequately and provides the best recognition results is retained. A trade-off between the decimation factor, the number of DCT coefficients retained and the recognition rate with minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. This new model has been tested on different databases, which include the ORL, Yale and EME color databases.
Keywords: Biometrics, DCT, face recognition, illumination, computation, feature extraction.
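A minimal sketch of the decimate-then-DCT feature extraction described above is given below with SciPy; the decimation factor, the 16×16 coefficient block and the random stand-in image are illustrative assumptions.

```python
# Downsample a face image, take a 2-D DCT, keep a block of low/mid-frequency coefficients.
import numpy as np
from scipy.fft import dctn

def dct_features(face: np.ndarray, decimation: int = 2, keep: int = 16) -> np.ndarray:
    """Return keep x keep low-frequency DCT coefficients of a decimated image."""
    small = face[::decimation, ::decimation]            # simple image decimation
    coeffs = dctn(small, norm="ortho")                  # 2-D DCT-II
    return coeffs[:keep, :keep].ravel()                 # low-to-mid frequency subset

face = np.random.rand(112, 92)        # stand-in for an ORL-sized face image
print(dct_features(face).shape)       # (256,) feature vector
```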
3721 Using Blockchain Technology to Extend the Vendor Managed Inventory for Sustainability
Authors: Elham Ahmadi, Roshaali Khaturia, Pardis Sahraei, Mohammad Niyayesh, Omid Fatahi Valilai
Abstract:
Nowadays, information technology (IT) is changing the way traditional enterprise management concepts work. One of the most prominent IT achievements is blockchain technology. This technology enables the distributed collaboration of stakeholders in their interactions while fulfilling the security and consensus rules among them. This paper focuses on the application of blockchain technology to enhance one of the traditional inventory management models. Vendor Managed Inventory (VMI) has been considered one of the most efficient mechanisms for vendor inventory planning by suppliers. While VMI has brought competitive advantages to many industries, its centralized mechanism limits the simultaneous collaboration of a pool of suppliers and vendors. This paper studies the recent research on VMI applications in industry and investigates the applications of blockchain technology for the decentralized collaboration of stakeholders. Focusing on sustainability for the total supply chain, consisting of suppliers and vendors, it proposes a blockchain-based VMI conceptual model. The different capabilities of this model for enabling the collaboration of stakeholders, while maintaining the competitive advantages and addressing the sustainability issues, are discussed.
Keywords: Vendor Managed Inventory, blockchain technology, supply chain planning, sustainability.
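To make the decentralization idea concrete, the toy hash chain below records replenishment events in an append-only, tamper-evident way; it illustrates only the underlying data structure and omits the consensus, networking and smart-contract layers a real blockchain VMI platform would need.

```python
# A toy hash-chained ledger of VMI replenishment events (illustrative only).
import hashlib, json, time

def make_block(prev_hash: str, payload: dict) -> dict:
    body = {"prev": prev_hash, "time": time.time(), "payload": payload}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

chain = [make_block("0" * 64, {"event": "genesis"})]
chain.append(make_block(chain[-1]["hash"],
                        {"event": "replenishment", "sku": "A-100",
                         "qty": 250, "vendor": "V1", "supplier": "S3"}))

def verify(chain) -> bool:
    """Recompute each block hash and check the linkage; any edit breaks it."""
    for prev, blk in zip(chain, chain[1:]):
        body = {k: blk[k] for k in ("prev", "time", "payload")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if blk["prev"] != prev["hash"] or blk["hash"] != expected:
            return False
    return True

print(verify(chain))   # True for the untampered chain
```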
3720 An Investigation into Ozone Concentration at Urban and Rural Monitoring Stations in Malaysia
Authors: Negar Banan, Mohd Talib Latif
Abstract:
This study investigated the relationship between urban and rural ozone concentrations and quantified the extent to which ambient rural conditions and the concentrations of other pollutants can be used to predict urban ozone concentrations. The study describes the variations of ozone on weekdays and weekends, as well as the daily maxima recorded at selected monitoring stations. The results showed that the Putrajaya station had the highest concentrations of O3 at weekends, owing to the titration of ozone by NO during weekdays. Jerantut had the lowest average concentration, with high readings recorded on Wednesdays. The comparison of the average and maximum ozone concentrations for the three stations showed that the strongest significant correlation was recorded at the Jerantut station, with R² = 0.769. Ozone concentrations originating from a neighbouring urban site form a better predictor of urban ozone concentrations than widespread rural ozone at some levels of temporal averaging. It was found that in urban and rural areas of Peninsular Malaysia, the ozone concentration depends on the NOx concentration and on seasonal meteorological factors. The HYSPLIT model (for the northeast monsoon) showed that the wind direction can also influence the ozone concentration in the atmosphere of the studied areas.
Keywords: Ozone, HYSPLIT model, weekend effect, daily average and daily maximum, Malaysia.
3719 Experiment and Simulation of Laser Effect on Thermal Field of Porcine Liver
Authors: K.Ting, K. T. Chen, Y. L. Su, C. J. Chang
Abstract:
In medical therapy, lasers have been widely used for cosmetic, tumor and other treatments. During laser irradiation, thermal damage may be caused by excessive laser exposure. Thus, the establishment of a complete thermal analysis model is clinically helpful to physicians as reference data. In this study, porcine liver, in place of human tissue, was subjected to laser irradiation to establish experimental data on the surface thermal field and the thermal damage region under different conditions of power, laser irradiation time, and distance between the laser and the porcine liver. During the experiments, the surface temperature distribution of the porcine liver was measured by an infrared thermal imager. In the simulation part, the Pennes bioheat transfer equation was solved with the software SYSWELD, which is normally applied to welding processes. A double-ellipsoid function as the laser source term is considered for the first time in the prediction of the surface thermal field and the internal tissue damage. The simulation results are compared with the experimental data to validate the mathematical model established herein.
Keywords: laser infrared thermal imager, bio-heat transfer, double ellipsoid function.
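For reference, the Pennes bioheat equation solved in the simulations balances conduction, blood perfusion, metabolic heating and the external (here laser) source; in the usual notation,

$$ \rho c \,\frac{\partial T}{\partial t} \;=\; \nabla\!\cdot\!\left(k\,\nabla T\right) \;+\; \omega_b \rho_b c_b \left(T_a - T\right) \;+\; Q_m \;+\; Q_{ext}, $$

where $\rho$, $c$ and $k$ are the tissue density, specific heat and thermal conductivity, $\omega_b$ the blood perfusion rate, $T_a$ the arterial blood temperature, $Q_m$ the metabolic heat generation and $Q_{ext}$ the external source term, represented in this work by the double-ellipsoid laser function. This is the standard form; the exact symbols used by the authors may differ.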
3718 Influence of a High-Resolution Land Cover Classification on Air Quality Modelling
Authors: C. Silveira, A. Ascenso, J. Ferreira, A. I. Miranda, P. Tuccella, G. Curci
Abstract:
Poor air quality is one of the main environmental causes of premature deaths worldwide, mainly in cities, where the majority of the population lives. It is a consequence of successive land cover (LC) and use changes, as a result of the intensification of human activities. Knowing these landscape modifications in a comprehensive spatiotemporal dimension is, therefore, essential for understanding variations in air pollutant concentrations. In this sense, the use of air quality models is very useful to simulate the physical and chemical processes that affect the dispersion and reaction of chemical species in the atmosphere. However, the modelling performance should always be evaluated, since the resolution of the input datasets largely dictates the reliability of the air quality outcomes. Among these data, the updated LC is an important parameter to be considered in atmospheric models, since it takes into account the Earth's surface changes due to natural and anthropic actions, and regulates the exchanges of fluxes (emissions, heat, moisture, etc.) between the soil and the air. This work aims to evaluate the performance of the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) when different LC classifications are used as input. The influence of two LC classifications was tested: i) the 24-class USGS (United States Geological Survey) LC database included by default in the model, and ii) the CLC (Corine Land Cover) and specific high-resolution LC data for Portugal, reclassified according to the new USGS nomenclature (33 classes). Two distinct WRF-Chem simulations were carried out to assess the influence of the LC on air quality over Europe and Portugal, as a case study, for the year 2015, using the nesting technique over three simulation domains (25 km², 5 km² and 1 km² horizontal resolution). Based on the 33-class LC approach, particular emphasis was placed on Portugal, given the detail and higher LC spatial resolution (100 m × 100 m) compared with the CLC data (5000 m × 5000 m). As regards air quality, only the LC impacts on tropospheric ozone concentrations were evaluated, because ozone pollution episodes typically occur in Portugal, in particular during spring/summer, and there are few research works relating this pollutant to LC changes. The WRF-Chem results were validated by season and station typology using background measurements from the Portuguese air quality monitoring network. As expected, a better model performance was achieved at rural stations: moderate correlation (0.4-0.7), bias (10-21 µg·m⁻³) and RMSE (20-30 µg·m⁻³), and higher average ozone concentrations were estimated there. Comparing both simulations, small differences grounded in the leaf area index and air temperature values were found, although the high-resolution LC approach shows a slight enhancement in the model evaluation. This highlights the role of the LC in the exchange of atmospheric fluxes, and stresses the need to consider a high-resolution LC characterization combined with other detailed model inputs, such as the emission inventory, to improve air quality assessment.
Keywords: Land cover, tropospheric ozone, WRF-Chem, air quality assessment.
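The evaluation statistics quoted above (correlation, bias, RMSE) reduce to a few lines of array arithmetic; the sketch below uses placeholder arrays in place of WRF-Chem output and monitoring data.

```python
# Station-wise evaluation metrics for modelled vs. observed ozone (placeholder data).
import numpy as np

obs = np.array([62.0, 71.0, 80.0, 95.0, 88.0, 76.0])     # observed O3, ug/m3
mod = np.array([70.0, 78.0, 92.0, 104.0, 99.0, 85.0])    # modelled O3, ug/m3

r = np.corrcoef(obs, mod)[0, 1]
bias = np.mean(mod - obs)
rmse = np.sqrt(np.mean((mod - obs) ** 2))
print(f"r = {r:.2f}, bias = {bias:.1f} ug/m3, RMSE = {rmse:.1f} ug/m3")
```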
3717 Educational Data Mining: The Case of Department of Mathematics and Computing in the Period 2009-2018
Authors: M. Sitoe, O. Zacarias
Abstract:
University education is influenced by several factors that range from the adoption of strategies to strengthen the whole process to the academic performance improvement of the students themselves. This work uses data mining techniques to develop a predictive model to identify students with a tendency towards evasion and retention. To this end, a database of real student data from the Department of University Admission (DAU) and the Department of Mathematics and Informatics (DMI) was used. The data comprised 388 undergraduate students admitted in the years 2009 to 2014. The Weka tool was used for model building, using three different techniques, namely K-nearest neighbors, random forest, and logistic regression. To allow for training on multiple train-test splits, a cross-validation approach was employed with a varying number of folds. To reduce bias and variance and improve the performance of the models, the ensemble methods of bagging and stacking were used. After comparing the results obtained by the three classifiers, logistic regression using bagging with seven folds obtained the best performance, showing results above 90% in all evaluated metrics: accuracy, true positive rate, and precision. Retention is the most common tendency.
Keywords: Evasion and retention, cross validation, bagging, stacking.
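The study's modelling was done in Weka; an equivalent sketch of bagged logistic regression with 7-fold cross-validation in scikit-learn (version 1.2 or later is assumed) is shown below, on a synthetic stand-in for the 388-record dataset.

```python
# Bagged logistic regression evaluated with 7-fold cross-validation (synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=388, n_features=12, random_state=0)

model = BaggingClassifier(estimator=LogisticRegression(max_iter=1000),
                          n_estimators=25, random_state=0)
scores = cross_val_score(model, X, y, cv=7, scoring="accuracy")
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```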
3716 Fast Adjustable Threshold for Uniform Neural Network Quantization
Authors: Alexander Goncharenko, Andrey Denisov, Sergey Alyamkin, Evgeny Terentev
Abstract:
Neural network quantization is a highly desirable procedure to perform before running neural networks on mobile devices. Quantization without fine-tuning leads to an accuracy drop of the model, whereas the commonly used training with quantization is done on the full set of labeled data and is therefore both time- and resource-consuming. Real-life applications require a simplified and accelerated quantization procedure that maintains the accuracy of the full-precision neural network, especially for modern mobile architectures such as MobileNet-v1, MobileNet-v2 and MNAS. Here we present a method to significantly optimize the training-with-quantization procedure by introducing trained scale factors for the discretization thresholds that are separate for each filter. Using the proposed technique, we quantize the modern mobile neural network architectures with a training set of only about 10% of the total ImageNet 2012 sample. Such a reduction of the training dataset size and the small number of trainable parameters allow the network to be fine-tuned within several hours while maintaining the high accuracy of the quantized model (the accuracy drop was less than 0.5%). Ready-for-use models and code are available in the GitHub repository.
Keywords: Distillation, machine learning, neural networks, quantization.
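A minimal numpy sketch of the core idea, uniform quantization with one adjustable threshold (scale) per filter, is given below; the training of those thresholds (straight-through gradients, distillation) described in the paper is omitted, and all shapes and values are illustrative.

```python
# Per-filter symmetric uniform quantization with an adjustable clipping threshold.
import numpy as np

def quantize_per_filter(w: np.ndarray, thresholds: np.ndarray, bits: int = 8):
    """Quantize a conv weight tensor (out_ch, ...) with one threshold per output filter."""
    qmax = 2 ** (bits - 1) - 1
    t = thresholds.reshape(-1, *([1] * (w.ndim - 1)))        # broadcast per filter
    scale = t / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)        # integer codes
    return q * scale                                          # dequantized weights

w = np.random.randn(16, 3, 3, 3) * 0.1                        # toy conv kernel
thr = np.abs(w).reshape(16, -1).max(axis=1)                   # initial per-filter thresholds
w_q = quantize_per_filter(w, thr)
print(np.abs(w - w_q).max())                                  # quantization error
```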
3715 Flow Analysis of Viscous Nanofluid Due to Rotating Rigid Disk with Navier’s Slip: A Numerical Study
Authors: Khalil Ur Rehman, M. Y. Malik, Usman Ali
Abstract:
In this paper, the problem proposed by von Kármán is treated in the presence of additional flow-field effects, with the liquid occupying the space above the rotating rigid disk. To be more specific, a purely viscous fluid flow produced by a rotating rigid disk with Navier's slip condition is considered in both magnetohydrodynamic and hydrodynamic frames. The rotating flow regime includes a heat source/sink and chemically reactive species. Moreover, the features of thermophoresis and Brownian motion are reported by considering a nanofluid model. The flow-field formulation is obtained mathematically in terms of high-order differential equations. The reduced system of equations is solved numerically through a self-coded computational algorithm. The pertinent outcomes are discussed systematically and provided through graphs and tables. The dual framework of the study and the validation of the results against existing work confirm the execution of the self-coded algorithm for the fluid flow regime over a rotating rigid disk.
Keywords: Nanoparticles, Newtonian fluid model, chemical reaction, heat source/sink.
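For orientation, in the simplest limit of the problem above (no slip, no magnetic field, no nanoparticle effects) the von Kármán similarity reduction gives the classical system for the radial, azimuthal and axial velocity functions F, G and H of the similarity variable; this is the textbook form, not the authors' full high-order system:

$$ 2F + H' = 0, \qquad F'' - H F' - F^2 + G^2 = 0, \qquad G'' - H G' - 2FG = 0, $$

with $F(0)=0$, $G(0)=1$, $H(0)=0$ and $F \to 0$, $G \to 0$ far from the disk.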
3714 Effect of Bentonite on the Rheological Behavior of Cement Grout in Presence of Superplasticizer
Authors: K. Benyounes, A. Benmounah
Abstract:
Cement-based grouts have been used successfully to repair cracks in many concrete structures such as bridges, tunnels and buildings, and to consolidate soils or rock foundations. The present study addresses the rheological characterization of a cement grout whose water/binder ratio (W/B) is fixed at 0.5. The effect of the replacement of cement by bentonite (2 to 10 wt%) in the presence of a superplasticizer (0.5 wt%) was investigated. Several rheological tests were carried out using a controlled-stress rheometer equipped with a vane geometry at a temperature of 20 °C. To highlight the influence of bentonite and superplasticizer on the rheological behavior of the cement grout, various flow tests over a shear rate range from 0 to 200 s⁻¹ were performed. The cement grout showed non-Newtonian behavior at all bentonite concentrations. The three-parameter Herschel-Bulkley model was chosen for fitting the experimental data. Based on the correlation coefficients of the estimated parameters, the Herschel-Bulkley model described the rheological behavior of the grouts well. The test results showed that the bentonite dosage increases the viscosity and yield stress of the system and introduces more thixotropy, while the addition of both bentonite and superplasticizer to the cement grout significantly improves the fluidity and reduces the yield stress, owing to the dispersing action of the superplasticizer.
Keywords: Cement grout, bentonite, superplasticizer, viscosity, yield stress.
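A minimal sketch of fitting the Herschel-Bulkley parameters (yield stress, consistency, flow index) to a flow curve with SciPy is shown below; the synthetic data points stand in for the rheometer measurements over the 0-200 s⁻¹ range.

```python
# Fitting tau = tau0 + K * gdot**n to a synthetic flow curve.
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gdot, tau0, K, n):
    return tau0 + K * gdot ** n

gdot = np.linspace(1, 200, 30)                                        # shear rate, 1/s
tau = 12 + 3.5 * gdot ** 0.6 + np.random.normal(0, 0.5, gdot.size)    # Pa (synthetic)

(p, _) = curve_fit(herschel_bulkley, gdot, tau, p0=[1.0, 1.0, 1.0])
tau0, K, n = p
print(f"yield stress = {tau0:.1f} Pa, consistency K = {K:.2f}, flow index n = {n:.2f}")
```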
3713 Evaluating the Effect of Domestic Price on Rice Production in an African Setting: A Typical Evidence of the Sierra Leone Case
Authors: Alhaji M. H. Conteh, Xiangbin Yan, Alfred V Gborie
Abstract:
Rice, which is the staple food in Sierra Leone, is consumed on a daily basis. It is the most important food crop, extensively grown by farmers across all ecologies in the country. Although much attention is now given to rice grain production through the Smallholder Commercialization Programme (SHCP), no attention has been given to investigating the limitations faced by rice producers. This paper contributes to attempts to overcome the development challenges caused by food insecurity. The objective of this paper is thus to analyze the relationship between rice production and the domestic retail price of rice. The study employed a log-linear model in which the quantity of rice produced is the dependent variable and the quantity of rice imported, the price of imported rice and the price of domestic rice are the explanatory variables. Findings showed that locally produced rice is more expensive than imported rice per ton, and almost all the inhabitants in the capital city, which hosts about 65% of the entire population of the country, favor imported rice, as it is free from stones and other impurities. To control the price and simultaneously increase rice production, the government should purchase the rice from the farmers and then sell it to private retailers.
Keywords: Domestic price of rice, Econometric model, Rice production, Sierra Leone.
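The log-linear specification described above can be estimated by ordinary least squares, with the coefficients read as elasticities. The sketch below uses statsmodels on synthetic series; the variable names mirror the abstract, but the data and effect sizes are illustrative assumptions.

```python
# Log-linear OLS: log(production) on log(imports), log(import price), log(domestic price).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 40
d = pd.DataFrame({
    "q_imported": rng.uniform(50, 200, n),     # imported rice quantity (synthetic)
    "p_imported": rng.uniform(300, 600, n),    # imported rice price (synthetic)
    "p_domestic": rng.uniform(350, 700, n),    # domestic rice price (synthetic)
})
d["q_produced"] = np.exp(4 + 0.3 * np.log(d.p_domestic)
                         - 0.2 * np.log(d.p_imported)
                         - 0.1 * np.log(d.q_imported)
                         + rng.normal(0, 0.05, n))

model = smf.ols("np.log(q_produced) ~ np.log(q_imported) + np.log(p_imported)"
                " + np.log(p_domestic)", data=d).fit()
print(model.params)      # elasticities of production with respect to each regressor
```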
3712 Game-Tree Simplification by Pattern Matching and Its Acceleration Approach using an FPGA
Authors: Suguru Ochiai, Toru Yabuki, Yoshiki Yamaguchi, Yuetsu Kodama
Abstract:
In this paper, we propose a Connect6 solver which adopts a hybrid approach based on a tree-search algorithm and image processing techniques. The solver must deal with complicated computations and provide high performance in order to make real-time decisions. The proposed approach enables the solver to be implemented on a single Spartan-6 XC6SLX45 FPGA produced by Xilinx without using any external devices. The compact implementation is achieved through image processing techniques that optimize the tree-search algorithm of the Connect6 game. Tree search is widely used in computer games, and an optimal search yields the best move in every turn of a computer game; thus, many tree-search algorithms, such as the Minimax algorithm and artificial intelligence approaches, have been proposed in this field. However, there is one fundamental problem in this area: the computation time increases rapidly with the growth of the game tree, which means that the larger the game tree is, the bigger the circuit becomes, because of the highly parallel computation characteristics. This paper therefore aims to reduce the size of the Connect6 game tree using image processing techniques and the positional symmetry of the board. The proposed solver is composed of four computational modules: a two-dimensional checkmate strategy checker, a template matching module, a skilful-line predictor, and a next-move selector. These modules work together in selecting the next move from a set of candidates, and the total amount of circuitry they require is small. The details of the hardware design for the FPGA implementation are described, and the performance of this design is also shown in this paper.
Keywords: Connect6, pattern matching, game-tree reduction, hardware direct computation.
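For contrast with the hardware modules above, the plain minimax recursion whose tree the design tries to shrink looks as follows in software; the move generator, state update and evaluation below are trivial placeholders, not the Connect6 pattern-matching logic.

```python
# A minimal minimax game-tree search with placeholder move/evaluation functions.
from typing import Callable, Iterable

def minimax(state, depth: int, maximizing: bool,
            moves: Callable, apply: Callable, evaluate: Callable):
    """Plain minimax; pruning and symmetry reduction would cut this tree further."""
    legal = list(moves(state))
    if depth == 0 or not legal:
        return evaluate(state)
    values = (minimax(apply(state, m), depth - 1, not maximizing, moves, apply, evaluate)
              for m in legal)
    return max(values) if maximizing else min(values)

# Toy usage on a trivial "state is a number" game: moves add or subtract 1.
print(minimax(0, depth=3, maximizing=True,
              moves=lambda s: (+1, -1),
              apply=lambda s, m: s + m,
              evaluate=lambda s: s))
```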
3711 Comparison between the Efficiency of Heterojunction Thin Film InGaP/GaAs/Ge and InGaP/GaAs Solar Cell
Authors: F. Djaafar, B. Hadri, G. Bachir
Abstract:
This paper presents the design parameters for a thin film triple-junction InGaP/GaAs/Ge solar cell with a simulated maximum efficiency of 32.11%, obtained using Tcad Silvaco. The design parameters include the doping concentration, the molar fraction, the layer thicknesses and the tunnel junction characteristics. An initial dual-junction InGaP/GaAs model of a previously published heterojunction cell was simulated in Tcad Silvaco to accurately predict the solar cell performance. To improve the solar cell's performance, we fixed the meshing, material properties, models and numerical methods, while the thickness and layer doping concentration were taken as variables. We first simulated the InGaP/GaAs dual-junction cell by changing the doping concentrations and thicknesses, which showed an increase in efficiency. Next, a triple-junction InGaP/GaAs/Ge cell was modeled by adding a Ge layer to the previous dual-junction InGaP/GaAs model with an InGaP/GaAs tunnel junction.
Keywords: Heterojunction, modeling, simulation, thin film, Tcad Silvaco.
3710 A New Framework and a Model for Product Development with an Application in the Telecommunications Services Sector
Authors: Ghada A. El Khayat
Abstract:
This paper argues that a product development exercise involves, in addition to the conventional stages, several decisions regarding other aspects. These aspects should be addressed simultaneously in order to develop a product that responds to customer needs and helps realize the objectives of the stakeholders in terms of profitability, market share and the like. We present a framework that encompasses these different development dimensions. The framework shows that a product development methodology such as Quality Function Deployment (QFD) is the basic tool which allows the definition of the target specifications of a new product. Creativity is the first dimension that enables the development exercise to proceed and end successfully, and a number of group processes need to be followed by the development team in order to ensure enough creativity and innovation. Secondly, packaging is considered to be an important extension of the product. Branding strategies, quality and standardization requirements, identification technologies, design technologies, production technologies, and costing and pricing are also integral parts of the development exercise. These dimensions constitute the proposed framework. The paper also presents a mathematical model used to calculate the design targets based on the target costing principle. The framework is used to study a case of new product development in the telecommunications services sector.
Keywords: Product development framework, Quality Function Deployment, mathematical models, telecommunications.
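The target costing principle mentioned above works backwards from the market. A hedged illustration of the usual relations (not necessarily the authors' exact model) is

$$ C_{target} = P_{market} - \Pi_{required}, \qquad C_i = w_i\, C_{target}, \quad \sum_i w_i = 1, $$

where the allowable cost of the whole product is the anticipated market price minus the required profit, and that cost is then allocated to functions or components $i$ in proportion to importance weights $w_i$, which in a QFD setting are typically derived from the house-of-quality relationship matrix.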
3709 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring
Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti
Abstract:
Autonomous structural health monitoring (SHM) of many structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (intelligent multiplexer), which tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., entropy, variance, kurtosis), and feature extraction (auto-associative neural network, ANN), which combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classification (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with normal and anomalous ones. In particular, principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN) are presented and their performances are compared. It is also shown that, by evaluating the correct features, the anomaly can be detected with an accuracy and an F1 score greater than 95%.
Keywords: Anomaly detection, dimensionality reduction, frequencies selection, modal analysis, neural network, structural health monitoring, vibration measurement.
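A minimal sketch of the PCA-based one-class step above, fitting on healthy-condition frequency features and flagging large reconstruction errors, is given below with scikit-learn; the synthetic frequencies, the two retained components and the 99th-percentile threshold are illustrative choices.

```python
# PCA reconstruction-error anomaly detection on synthetic natural-frequency features.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
healthy = rng.normal([3.9, 5.0, 9.8, 10.3], 0.02, size=(500, 4))          # four tracked frequencies, Hz
test = np.vstack([healthy[:50], healthy[:50] - [0.15, 0.1, 0.2, 0.2]])     # last 50 rows shifted ("damaged")

pca = PCA(n_components=2).fit(healthy)

def recon_error(X):
    return np.linalg.norm(X - pca.inverse_transform(pca.transform(X)), axis=1)

threshold = np.percentile(recon_error(healthy), 99)    # one-class decision boundary
flags = recon_error(test) > threshold
print(f"{flags[:50].sum()} false alarms, {flags[50:].sum()} detections out of 50")
```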
3708 Finite Element Prediction on the Machining Stability of Milling Machine with Experimental Verification
Authors: Jui P. Hung, Yuan L. Lai, Hui T. You
Abstract:
Chatter vibration has been a troublesome problem for machine tools moving toward high-precision and high-speed machining. Essentially, the machining performance is determined by the dynamic characteristics of the machine tool structure and the dynamics of the cutting process, which can further be identified in terms of the stability lobe diagram. Therefore, understanding the machine tool's dynamic behavior can help to enhance the cutting stability. To assess the dynamic characteristics and machining stability of a vertical milling system under the influence of a linear guide, this study developed a finite element model that integrates the modeling of the linear components with the implementation of the contact stiffness at the rolling interface. Both the finite element simulations and the experimental measurements reveal that linear guides with different preloads greatly affect the vibration behavior and milling stability of the vertical column-spindle head system, and the predictions of the machining stability agree well with the cutting tests. It is believed that the proposed model can be successfully applied to evaluate the dynamic performance of machine tool systems of various configurations.
Keywords: Machining stability, vertical milling machine, linear guide, contact stiffness.
3707 Determinants of Brand Equity: Offering a Model to Chocolate Industry
Authors: Emari Hossien
Abstract:
This study examined the underlying dimensions of brand equity in the chocolate industry. For this purpose, the researchers developed a model to identify which factors are influential in building brand equity. The second purpose was to assess the mediating effect of brand loyalty and brand image between brand attitude, brand personality and brand association on the one hand and brand equity on the other. The study employed structural equation modeling to investigate the causal relationships between the dimensions of brand equity and brand equity itself. It specifically measured the way in which consumers' perceptions of the dimensions of brand equity affected the overall brand equity evaluations. Data were collected from a sample of consumers in the chocolate industry in Iran. The results of this empirical study indicate that brand loyalty and brand image are important components of brand equity in this industry; moreover, the role of brand loyalty and brand image as mediating factors in the formation of brand equity is supported. The principal contribution of the present research is that it provides empirical evidence of the multidimensionality of consumer-based brand equity, supporting Aaker's and Keller's conceptualizations of brand equity. The present research also enriches brand equity building by incorporating brand personality and brand image, as recommended by previous researchers. Moreover, creating a brand equity index for the chocolate industry of Iran in particular is novel.
Keywords: Brand equity, brand personality, structural equation modeling, Iran.
3706 Development of Coronal Field and Solar Wind Components for MHD Interplanetary Simulations
Authors: Ljubomir Nikolic, Larisa Trichtchenko
Abstract:
The connection between solar activity and adverse phenomena in the Earth's environment that can affect space- and ground-based technologies has spurred interest in space weather (SW) research. A great effort has been put into the development of suitable models that can provide advanced forecasts of SW events. With the progress in computational technology, it is becoming possible to develop operational large-scale physics-based models which can incorporate the most important physical processes and domains of the Sun-Earth system. In order to enhance our SW prediction capabilities, we are developing advanced numerical tools. With operational requirements in mind, our goal is to develop a modular simulation framework for the propagation of disturbances from the Sun through interplanetary space to the Earth. Here, we report and discuss the development of the coronal field and solar wind components for a large-scale MHD code. The model for these components is based on a potential field source surface model and an empirical Wang-Sheeley-Arge solar wind relation.
Keywords: Space weather, numerical modeling, coronal field, solar wind.
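For orientation, the empirical Wang-Sheeley-Arge relation mentioned above assigns a solar wind speed to each open field line from the flux-tube expansion factor $f_s$ and, in later versions, the angular distance $\theta_b$ to the nearest coronal hole boundary. A schematic form is

$$ v_{sw}(f_s,\theta_b) \;=\; v_0 \;+\; \frac{v_1}{(1+f_s)^{\alpha}}\left[1 - c\,e^{-(\theta_b/w)^{\beta}}\right]^{\delta}, $$

where $v_0$ is a slow-wind floor and the remaining coefficients are tuned empirically; the exact functional form and coefficient values vary between implementations, and those used by the authors are not given in the abstract.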
3705 FEM Simulation of HE Blast-Fragmentation Warhead and the Calculation of Lethal Range
Authors: G. Tanapornraweekit, W. Kulsirikasem
Abstract:
This paper presents the simulation of a fragmentation warhead using the hydrocode Autodyn. The goal of this research is to determine the lethal range of such a warhead. This study investigates the lethal range of warheads with and without steel balls as preformed fragments. The results from the FE simulation, i.e. the initial velocities and ejection spray angles of the fragments, are further processed using an analytical approach so as to determine the fragment hit density and the probability of kill of the modelled warhead. Modelling a large number of preformed fragments inside a warhead requires expensive computational resources; therefore, this study models the problem with an alternative approach by considering a mass of preformed fragments equivalent to the mass of the warhead casing. This approach yields approximately 7% and 20% differences in fragment velocities from the analytical results for one and two layers of preformed fragments, respectively. The lethal ranges of the simulated warheads are 42.6 m and 56.5 m for warheads with one and two layers of preformed fragments, respectively, compared with 13.85 m for a warhead without preformed fragments. These lethal ranges are based on the requirement of fragment hit density. The lethal ranges based on the probability of kill are 27.5 m, 61 m and 70 m for warheads with no preformed fragments, one layer and two layers of preformed fragments, respectively.
Keywords: Lethal range, natural fragment, preformed fragment, warhead.
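The analytical fragment velocities against which the FE results are compared are commonly obtained from the Gurney relation for a cylindrical charge; whether the authors used this exact expression is not stated, so it is given here only as the standard reference form,

$$ v_0 \;=\; \sqrt{2E}\,\left(\frac{M}{C} + \frac{1}{2}\right)^{-1/2}, $$

where $\sqrt{2E}$ is the Gurney constant of the explosive, $M$ the mass of the casing (or fragment layer) and $C$ the explosive charge mass.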