Search results for: mathematical optimization
1125 Design and Control of a Knee Rehabilitation Device Using an MR-Fluid Brake
Authors: Mina Beheshti, Vida Shams, Mojtaba Esfandiari, Farzaneh Abdollahi, Abdolreza Ohadi
Abstract:
Most of the people who survive a stroke need rehabilitation tools to regain their mobility. The core function of these devices is a brake actuator. The goal of this study is to design and control a magnetorheological brake which can be used as a rehabilitation tool. The fluid used in this brake is a magnetorheological (MR) fluid, whose properties can be changed by varying the magnetic field. This feature of the fluid allows the braking properties to be adjusted and controlled. In this research, different MR brake designs are first introduced, and in each design the dimensions of the brake are determined based on the torque required for foot movement. To calculate the brake dimensions, it is assumed that the shear stress distribution in the fluid is uniform and that the fluid is in its saturated state. After designing the rehabilitation brake, the mathematical model of the knee movement of a healthy person is extracted. Due to the nonlinear nature of the system and its variability, various adaptive, neural network, and robust controllers have been implemented to estimate the parameters and control the system. After calculating the torque and control current, the best controller in terms of error and control current has been selected. Finally, this controller is applied to the experimental data of the patient's movements, and the control current required to achieve the desired torque and motion is calculated.
Keywords: rehabilitation, magnetorheological fluid, knee, brake, adaptive control, robust control, neural network control, torque control
Procedia PDF Downloads 151
1124 Drying Kinetics of Soybean Seeds
Authors: Amanda Rithieli Pereira Dos Santos, Rute Quelvia De Faria, Álvaro De Oliveira Cardoso, Anderson Rodrigo Da Silva, Érica Leão Fernandes Araújo
Abstract:
The study of drying kinetics is of great importance for mathematical modeling, since it allows the heat and mass transfer processes between the product and the drying air to be understood and dryers to be designed and adapted to new technologies for these processes. The present work had the objective of studying the drying kinetics of soybean seeds and fitting different statistical models to the experimental data, varying cultivar and temperature. Soybean seeds were pre-dried in a natural environment in order to reduce and homogenize the water content to the level of 14% (dry basis). Then, drying was carried out in a forced-air circulation oven at controlled temperatures of 38, 43, 48, 53 and 58 ± 1 °C, using two soybean cultivars, BRS 8780 and Sambaíba, until hygroscopic equilibrium was reached. The experimental design was completely randomized in a 5 x 2 factorial (temperature x cultivar) with 3 replicates. Eleven statistical models used to describe the drying of agricultural products were fitted to the experimental data. Regression analysis was performed using the least-squares Gauss-Newton algorithm to estimate the parameters. The goodness of fit was evaluated from the coefficient of determination (R²), the adjusted coefficient of determination (adjusted R²), and the standard error (SE). The models that best represent the drying kinetics of soybean seeds are the Midilli and logarithmic models.
Keywords: curve of drying seeds, Glycine max L., moisture ratio, statistical models
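As a hedged illustration of the kind of model fitting described above, the sketch below fits the Midilli equation MR = a·exp(-k·tⁿ) + b·t to hypothetical moisture-ratio data by nonlinear least squares (SciPy's fitting routine stands in for the Gauss-Newton algorithm used in the paper); the data values and starting guesses are assumptions, not the authors' measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def midilli(t, a, k, n, b):
    """Midilli drying model: moisture ratio as a function of time (hours)."""
    return a * np.exp(-k * t**n) + b * t

# Hypothetical drying data (time in hours, dimensionless moisture ratio).
t = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
mr = np.array([1.00, 0.78, 0.62, 0.40, 0.27, 0.19, 0.10, 0.06])

popt, _ = curve_fit(midilli, t, mr, p0=[1.0, 0.4, 1.0, 0.0])
residuals = mr - midilli(t, *popt)
ss_res = np.sum(residuals**2)
r2 = 1.0 - ss_res / np.sum((mr - mr.mean())**2)
se = np.sqrt(ss_res / (len(t) - len(popt)))  # standard error of the estimate

print("a, k, n, b =", popt)
print("R^2 =", round(r2, 4), " SE =", round(se, 4))
```

The same goodness-of-fit quantities (R², adjusted R², SE) can then be tabulated for each of the eleven candidate models to reproduce the comparison described in the abstract.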
Procedia PDF Downloads 627
1123 A Proposal of Advanced Key Performance Indicators for Assessing Six Performances of Construction Projects
Authors: Wi Sung Yoo, Seung Woo Lee, Youn Kyoung Hur, Sung Hwan Kim
Abstract:
Large-scale construction projects are continuously increasing, and the need for tools to monitor and evaluate project success is emphasized. At the construction industry level, there are limitations in deriving performance evaluation factors that reflect the diversity of construction sites and in building systems that can objectively evaluate and manage performance. Additionally, there are difficulties in integrating the structured and unstructured data generated at construction sites and deriving improvements. In this study, we propose Key Performance Indicators (KPIs) that enable performance evaluation reflecting the increased diversity of construction sites and the unstructured data generated, and we present a model for measuring performance with the derived indicators. The comprehensive performance of a unit construction site is assessed based on 6 areas (Time, Cost, Quality, Safety, Environment, Productivity) and 26 indicators. We collect performance indicator information from 30 construction sites that meet legal standards and have been successfully completed, and we apply data augmentation and optimization techniques to establish measurement standards for each indicator. In other words, the KPIs for construction site performance evaluation presented in this study provide standards for evaluating performance in six areas using institutional requirement data and document data. This can be expanded to establish a performance evaluation system considering the scale and type of construction project. The indicators are also expected to be used as a comprehensive index of the construction industry and as basic data for tracking competitiveness at the national level and establishing policies.
Keywords: key performance indicator, performance measurement, structured and unstructured data, data augmentation
Procedia PDF Downloads 42
1122 Comparison of Two Theories for the Critical Laser Radius in Thermal Quantum Plasma
Authors: Somaye Zare
Abstract:
The critical beam radius is a significant factor that predicts the behavior of a laser beam in plasma: if the laser beam radius is sufficiently greater than it, the beam experiences stable focusing in the plasma; otherwise, the beam diverges after entering the plasma. In this work, considering the paraxial approximation and moment theories, the localization of a relativistic laser beam in thermal quantum plasma is investigated. Using the dielectric function obtained in the quantum hydrodynamic model, the mathematical equation for the laser beam width parameter is obtained and solved numerically by the fourth-order Runge-Kutta method. The results demonstrate that a stronger focusing effect occurs in the moment theory than in the paraxial approximation. Besides, in both theories, with increasing Fermi temperature, plasma density, and laser intensity, the oscillation rate of the beam width parameter grows and the focusing length decreases, which indicates an improved focusing effect. Furthermore, it is found that the behavior of the critical laser radius differs between the two theories: in the paraxial approximation, the critical radius passes through a minimum and then increases with laser intensity, whereas in the moment theory, the critical radius decreases with increasing laser intensity until it becomes independent of the laser intensity.
Keywords: laser localization, quantum plasma, paraxial approximation, moment theory, quantum hydrodynamic model
Procedia PDF Downloads 72
1121 Conditions of the Anaerobic Digestion of Biomass
Authors: N. Boontian
Abstract:
Biological conversion of biomass to methane has received increasing attention in recent years. Grasses have been explored for their potential anaerobic digestion to methane. In this review, extensive literature data have been tabulated and classified, and the influences of several parameters on the potential of these feedstocks to produce methane are presented. Lignocellulosic biomass represents a mostly unused source for biogas and ethanol production. Many factors, including lignin content, crystallinity of cellulose, and particle size, limit the digestibility of the hemicellulose and cellulose present in lignocellulosic biomass. Pretreatments have been used to improve the digestibility of lignocellulosic biomass. Each pretreatment has its own effects on cellulose, hemicellulose and lignin, the three main components of lignocellulosic biomass. Solid-state anaerobic digestion (SS-AD) generally occurs at solid concentrations higher than 15%. In contrast, liquid anaerobic digestion (AD) handles feedstocks with solid concentrations between 0.5% and 15%. Animal manure, sewage sludge, and food waste are generally treated by liquid AD, while organic fractions of municipal solid waste (OFMSW) and lignocellulosic biomass such as crop residues and energy crops can be processed through SS-AD. An increase in operating temperature can improve both the biogas yield and the production efficiency; other practices, such as using AD digestate or leachate as an inoculant or decreasing the solid content, may increase biogas yield but have a negative impact on production efficiency. Focus is placed on substrate pretreatment in anaerobic digestion (AD) as a means of increasing biogas yields using today’s diversified substrate sources.
Keywords: anaerobic digestion, lignocellulosic biomass, methane production, optimization, pretreatment
Procedia PDF Downloads 379
1120 Commercial Automobile Insurance: A Practical Approach of the Generalized Additive Model
Authors: Nicolas Plamondon, Stuart Atkinson, Shuzi Zhou
Abstract:
The insurance industry is usually not the first topic one has in mind when thinking about applications of data science. However, the use of data science in the finance and insurance industry is growing quickly for several reasons, including an abundance of reliable customer data and ferocious competition requiring more accurate pricing. Among the top use cases of data science are pricing optimization, customer segmentation, customer risk assessment, fraud detection, marketing, and triage analytics. The objective of this paper is to present an application of the generalized additive model (GAM) on a commercial automobile insurance product: an individually rated commercial automobile. These are vehicles used for commercial purposes, but for which there is not enough volume to apply pricing to several vehicles at the same time. The GAM was selected as an improvement over the GLM for its ease of use and its wide range of applications. The model was trained using the largest split of the data to determine model parameters. The remaining part of the data was used as testing data to verify the quality of the modeling activity. We used the Gini coefficient to evaluate the performance of the model. For long-term monitoring, commonly used metrics such as RMSE and MAE will be used. Another topic of interest in the insurance industry is the process of producing the model. We discuss at a high level the interactions between the different teams within an insurance company that need to work together to produce a model and then monitor its performance over time. Moreover, we discuss the regulations in place in the insurance industry. Finally, we discuss the maintenance of the model and the fact that new data does not arrive constantly and that some metrics can take a long time to become meaningful.
Keywords: insurance, data science, modeling, monitoring, regulation, processes
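As a hedged sketch of the evaluation step mentioned above, the snippet below computes one common variant of the Gini coefficient from predicted versus observed losses: policies are ordered by predicted risk and the cumulative share of actual losses is compared against the line of equality. The toy arrays are assumptions for illustration, not the authors' data, and normalization conventions vary across insurers.

```python
import numpy as np

def gini(actual, predicted):
    """Gini coefficient: rank policies by predicted risk and compare the
    cumulative share of actual losses against the line of equality."""
    actual = np.asarray(actual, dtype=float)
    order = np.argsort(-np.asarray(predicted, dtype=float))   # highest predicted risk first
    cum_losses = np.cumsum(actual[order]) / actual.sum()
    cum_exposure = np.arange(1, len(actual) + 1) / len(actual)
    return 2.0 * float(np.mean(cum_losses - cum_exposure))

# Toy example: observed claim amounts and model predictions for six policies.
observed = [0, 120, 0, 540, 60, 0]
predicted = [0.10, 0.40, 0.20, 0.90, 0.30, 0.05]
print("Gini =", round(gini(observed, predicted), 3))
```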
Procedia PDF Downloads 76
1119 A Comparative Evaluation of the SIR and SEIZ Epidemiological Models to Describe the Diffusion Characteristics of COVID-19 Polarizing Viewpoints on Online
Authors: Maryam Maleki, Esther Mead, Mohammad Arani, Nitin Agarwal
Abstract:
This study examines how opposing viewpoints related to COVID-19 were diffused on Twitter. To accomplish this, six datasets were analyzed using two epidemiological models, SIR (Susceptible, Infected, Recovered) and SEIZ (Susceptible, Exposed, Infected, Skeptics). The six datasets were chosen because they represent opposing viewpoints on the COVID-19 pandemic: three of the datasets contain anti-subject hashtags, while the other three contain pro-subject hashtags. The time frame for all datasets is three years, from January 2020 to December 2022. The findings revealed that while both models were effective in evaluating the propagation trends of these polarizing viewpoints, the SEIZ model was more accurate, with a relatively lower error rate (6.7%) compared to the SIR model (17.3%). Additionally, the relative error for both models was lower for anti-subject hashtags than for pro-subject hashtags. By leveraging epidemiological models, insights into the propagation trends of polarizing viewpoints on Twitter were gained. This study paves the way for the development of methods to prevent the spread of ideas that lack scientific evidence while promoting the dissemination of scientifically backed ideas.
Keywords: mathematical modeling, epidemiological model, seiz model, sir model, covid-19, twitter, social network analysis, social contagion
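For readers unfamiliar with the compartmental approach, the sketch below integrates the classic SIR equations with SciPy to produce an "infected" (hashtag-adopting) curve that could then be compared against an observed adoption time series; the population size and rate constants are assumptions for illustration, and the SEIZ variant adds Exposed and Skeptic compartments in the same fashion.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma, n):
    """Classic SIR model: S -> I at rate beta*S*I/N, I -> R at rate gamma*I."""
    s, i, r = y
    ds = -beta * s * i / n
    di = beta * s * i / n - gamma * i
    dr = gamma * i
    return [ds, di, dr]

n = 100_000                        # assumed size of the susceptible Twitter audience
beta, gamma = 0.45, 0.12           # assumed adoption and recovery rates (per day)
y0 = [n - 10, 10, 0]               # ten initial adopters of the hashtag
t_eval = np.linspace(0, 120, 121)  # four-month window, daily resolution

sol = solve_ivp(sir, (0, 120), y0, args=(beta, gamma, n), t_eval=t_eval)
infected = sol.y[1]                # daily count of active hashtag users
print("peak adoption on day", int(t_eval[np.argmax(infected)]))
```

In practice the model parameters are fitted to the observed hashtag counts, and the relative error between the fitted curve and the data gives the accuracy figures quoted in the abstract.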
Procedia PDF Downloads 62
1118 Simulation of Improving the Efficiency of a Fire-Tube Steam Boiler
Authors: Roudane Mohamed
Abstract:
In this study, we are interested in improving the efficiency of a 4.5 t/h steam boiler and minimizing the flue gas discharge temperature by adding a counter-flow heat exchanger at the boiler outlet of the energy system. The mathematical approach to the problem is based on the heat transfer equations for convection and conduction. These equations have been chosen because of their extensive use in a wide range of applications. A software tool was developed for solving the equations governing these phenomena and for estimating the thermal characteristics of the boiler through the study of the thermal characteristics of the heat exchanger by both the LMTD and NTU methods. Subsequently, an analysis of the thermal performance of the steam boiler was carried out by studying the influence of different operating parameters on heat flux densities, temperatures, exchanged power and performance. The study showed that the behavior of the boiler is largely influenced by these parameters. In the first regime (P = 3.5 bar), the boiler efficiency improved significantly, from 93.03 to 99.43, i.e., at rates of 6.47% and 4.5%. For the maximum speed, the change is less important, of the order of 1.06%. The results obtained in this study are of great interest to industrial utilities equipped with fire-tube boilers: the air preheating temperature makes it possible to calculate the actual gas temperature, so the heat exchanged is increased and the flue gas discharge temperature is minimized. On the other hand, this work could be used as a computational model in the design process.
Keywords: numerical simulation, efficiency, fire tube, heat exchanger, convection and conduction
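As a minimal sketch of the NTU side of such a calculation, the snippet below evaluates the effectiveness of a counter-flow exchanger from the standard ε-NTU relation and converts it into the recovered heat duty and the lowered flue gas temperature; the flow rates, specific heats, conductance, and inlet temperatures are placeholder assumptions, not the boiler's actual operating data.

```python
import math

def effectiveness_counterflow(ntu, cr):
    """Standard epsilon-NTU relation for a counter-flow heat exchanger."""
    if abs(cr - 1.0) < 1e-9:
        return ntu / (1.0 + ntu)
    return (1.0 - math.exp(-ntu * (1.0 - cr))) / (1.0 - cr * math.exp(-ntu * (1.0 - cr)))

# Assumed data: flue gas preheating combustion air in a counter-flow recuperator.
c_gas = 1.9 * 1.10    # kW/K (mass flow * cp, flue gas side)
c_air = 1.8 * 1.01    # kW/K (mass flow * cp, air side)
ua = 2.5              # kW/K (overall conductance of the exchanger)

c_min, c_max = min(c_gas, c_air), max(c_gas, c_air)
eps = effectiveness_counterflow(ua / c_min, c_min / c_max)

t_gas_in, t_air_in = 230.0, 25.0                 # degC, assumed inlet temperatures
q = eps * c_min * (t_gas_in - t_air_in)          # recovered heat duty, kW
t_gas_out = t_gas_in - q / c_gas                 # reduced flue gas discharge temperature
print(f"effectiveness={eps:.3f}, Q={q:.1f} kW, flue gas out={t_gas_out:.1f} degC")
```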
Procedia PDF Downloads 218
1117 Increasing the Forecasting Fidelity of Current Collection System Operating Capability by Means of Contact Pressure Simulation Modelling
Authors: Anton Golubkov, Gleb Ermachkov, Aleksandr Smerdin, Oleg Sidorov, Victor Philippov
Abstract:
Current collection quality is one of the limiting factors when increasing train movement speed in the rail sector. As the movement speed grows, the impact forces on the current collector from the rolling stock and the aerodynamic influence increase, which leads to a spread in the contact pressure values, separation of the current collector head from the contact wire, contact arcing and excessive wear of the contact elements. The upcoming trend in resolving this issue is the use of automatic control systems providing stabilization of the contact pressure value. The present paper considers the features of contemporary automatic control systems for the current collector's pressure and states their major disadvantages. A scheme of automatic current collector pressure control has been proposed, distinguished by a proactive influence on undesirable effects. A mathematical model of contact strip wear has been presented, obtained in accordance with the provisions of a central composite rotatable design plan. An analysis of the obtained dependencies has been carried out. The procedures for determining the optimal current collector pressure on the contact wire and the pressure control principle in the pneumatic drive have been described.
Keywords: contact strip, current collector, high-speed running, program control, wear
Procedia PDF Downloads 145
1116 A Quantum Leap: Developing Quantum Semi-Structured Complex Numbers to Solve the “Division by Zero” Problem
Authors: Peter Jean-Paul, Shanaz Wahid
Abstract:
The problem of division by zero can be stated as: “what is the value of 0 x 1/0?” This expression has been considered undefined by mathematicians because it can have two equally valid solutions, either 0 or 1. Recently, the semi-structured complex number set was invented to solve “division by zero”. However, whilst the number set had some merits, it was considered to have a poor theoretical foundation and did not provide a quality solution to “division by zero”. Moreover, the set lacked consistency in simple algebraic calculations, producing contradictory results when dividing by zero. To overcome these issues, this research starts by treating the expression "0 x 1/0" as a quantum mechanical system that produces two entangled results, 0 and 1. Dirac notation (a tool from quantum mechanics) was then used to redefine the unstructured unit p in semi-structured complex numbers so that p represents the superposition of the two results (0 and 1) and collapses into a single value when used in algebraic expressions. In the process, this paper describes a new number set called Quantum Semi-structured Complex Numbers that provides a valid solution to the problem of “division by zero”. This research shows that this new set (1) forms a “Field”, (2) can produce consistent results when solving division by zero problems, and (3) can be used to accurately describe systems whose mathematical descriptions involve division by zero. This research serves to provide a firm foundation for Quantum Semi-structured Complex Numbers and to support their practical use.
Keywords: division by zero, semi-structured complex numbers, quantum mechanics, Hilbert space, Euclidean space
Procedia PDF Downloads 157
1115 Integrated Free Space Optical Communication and Optical Sensor Network System with Artificial Intelligence Techniques
Authors: Yibeltal Chanie Manie, Zebider Asire Munyelet
Abstract:
5G and 6G technologies offer enhanced quality of service with high data transmission rates, which necessitates the implementation of the Internet of Things (IoT) in the 5G/6G architecture. In this paper, we propose the integration of free space optical communication (FSO) with fiber sensor networks for IoT applications. Recently, free-space optical communication (FSO) has been gaining popularity as an effective alternative technology to the limited availability of radio frequency (RF) spectrum, owing to its flexibility, high achievable optical bandwidth, and low power consumption in several communication applications, such as disaster recovery, last-mile connectivity, drones, surveillance, backhaul, and satellite communications. Hence, high-speed FSO is an optimal choice for wireless networks to realize the full potential of 5G/6G technology, offering speeds of 100 Gbit/s or more in IoT applications. Moreover, machine learning must be integrated into the design, planning, and optimization of future optical wireless communication networks in order to actualize this vision of intelligent processing and operation. In addition, fiber sensors are important to achieve real-time, accurate, and smart monitoring in IoT applications. Therefore, we propose deep learning techniques to estimate the strain changes and peak wavelengths of multiple fiber Bragg grating (FBG) sensors using only the spectrum of the FBGs obtained from a real experiment.
Keywords: optical sensor, artificial Intelligence, Internet of Things, free-space optics
Procedia PDF Downloads 63
1114 Comparative Study of the Effects of Process Parameters on the Yield of Oil from Melon Seed (Cococynthis citrullus) and Coconut Fruit (Cocos nucifera)
Authors: Ndidi F. Amulu, Patrick E. Amulu, Gordian O. Mbah, Callistus N. Ude
Abstract:
A comparative analysis of the properties of melon seed, coconut fruit and their oil yields was carried out in this work using standard AOAC analytical techniques. The results of the analysis revealed that the moisture contents of the samples studied are 11.15% (melon) and 7.59% (coconut), and the crude lipid contents are 46.10% (melon) and 55.15% (coconut). The treatment combinations used (leaching time, leaching temperature and solute:solvent ratio) showed a significant difference (p < 0.05) in yield between the samples, with melon seed flour having a higher percentage range of oil yield (41.30 – 52.90%) than coconut (36.25 – 49.83%). The physical characterization of the extracted oils was also carried out. The values obtained for refractive index are 1.487 (melon seed oil) and 1.361 (coconut oil), and the viscosities are 0.008 (melon seed oil) and 0.002 (coconut oil). The chemical analysis of the extracted oils shows acid values of 1.00 mg NaOH/g oil (melon oil) and 10.050 mg NaOH/g oil (coconut oil), and saponification values of 187.00 mg/KOH (melon oil) and 183.26 mg/KOH (coconut oil). The iodine value of the melon oil is 75.00 mg I2/g and that of the coconut oil is 81.00 mg I2/g. The standard statistical package Minitab version 16.0 was used in the regression analysis and the analysis of variance (ANOVA), and was also used to optimize the leaching process. Both samples gave high oil yields at the same optimal conditions. The optimal conditions to obtain the highest oil yields, ≥ 52% (melon seed) and ≥ 48% (coconut seed), are a solute–solvent ratio of 40 g/ml, a leaching time of 2 hours and a leaching temperature of 50 °C. Both samples studied have the potential of yielding oil, with melon seed giving the higher yield.
Keywords: Coconut, Melon, Optimization, Processing
Procedia PDF Downloads 442
1113 Cybernetic Modeling of Growth Dynamics of Debaryomyces nepalensis NCYC 3413 and Xylitol Production in Batch Reactor
Authors: J. Sharon Mano Pappu, Sathyanarayana N. Gummadi
Abstract:
Growth of Debaryomyces nepalensis on mixed substrates in batch culture follows a diauxic pattern: glucose is completely utilized during the first exponential growth phase, followed by an intermediate lag phase and a second exponential growth phase consuming xylose. The present study deals with the development of a cybernetic mathematical model for the prediction of xylitol production and yield. Production of xylitol from xylose in batch fermentation is investigated in the presence of glucose as the co-substrate. Different ratios of glucose and xylose concentrations are assessed to study the impact of multiple substrates on the production of xylitol in batch reactors. The parameters in the model equations were estimated from experimental observations using the integral method, and the model equations were solved simultaneously by a numerical technique using MATLAB. The developed cybernetic model of xylose fermentation in the presence of a co-substrate can provide answers about how the ratio of glucose to xylose influences the yield and rate of production of xylitol. This model is expected to accurately predict the growth of the microorganism on mixed substrates, the duration of the intermediate lag phase, substrate consumption, and xylitol production. The model, developed within the cybernetic modelling framework, can be helpful for simulating the dynamic competition between the metabolic pathways.
Keywords: co-substrate, cybernetic model, diauxic growth, xylose, xylitol
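The sketch below shows, under assumed Monod parameters, how a stripped-down cybernetic formulation of diauxic growth on glucose and xylose can be integrated numerically: the cybernetic activity variable v weights the two uptake rates so the faster (glucose) pathway dominates first. It is only an illustrative skeleton of the framework named in the abstract, not the authors' fitted model; the full framework also includes an allocation variable u governing enzyme synthesis as well as xylitol production kinetics, both omitted here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed Monod parameters for growth on glucose (index 0) and xylose (index 1).
MU_MAX = np.array([0.40, 0.20])   # 1/h
KS     = np.array([0.50, 1.00])   # g/L
YXS    = np.array([0.50, 0.40])   # g biomass / g substrate

def cybernetic(t, y):
    x, s1, s2 = y                                  # biomass, glucose, xylose
    s = np.array([s1, s2])
    r = MU_MAX * s / (KS + s)                      # kinetic capability of each pathway
    v = r / max(r.max(), 1e-12)                    # cybernetic activity variable
    rates = r * v * x                              # realized growth on each sugar
    return [rates.sum(), -rates[0] / YXS[0], -rates[1] / YXS[1]]

# Initial biomass 0.05 g/L, glucose 10 g/L, xylose 20 g/L; simulate 60 h.
sol = solve_ivp(cybernetic, (0, 60), [0.05, 10.0, 20.0], max_step=0.1)
print("final biomass %.2f g/L, residual xylose %.2f g/L" % (sol.y[0, -1], sol.y[2, -1]))
```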
Procedia PDF Downloads 328
1112 Interaction Evaluation of Silver Ion and Silver Nanoparticles with Dithizone Complexes Using DFT Calculations and NMR Analysis
Authors: W. Nootcharin, S. Sujittra, K. Mayuso, K. Kornphimol, M. Rawiwan
Abstract:
Silver has distinct antibacterial properties and has been used as a component of commercial products with many applications. The increasing number of such commercial products poses risks of silver effects for humans and the environment, such as the symptoms of argyria and the release of silver to the environment. Therefore, the detection of silver in the aquatic environment is important. Colorimetric chemosensors are designed on the basis of ligand interactions with a metal ion, leading to a change of signal visible to the naked eye, which is a very useful method for this application. The dithizone ligand is considered one of the effective chelating reagents for metal ions due to its high selectivity and sensitivity of the photochromic reaction for silver; in addition, the linear backbone of dithizone affords rotation into various isomeric forms. The present study is focused on the conformation and interaction of silver ions and silver nanoparticles (AgNPs) with dithizone using density functional theory (DFT). The interaction parameters were determined in terms of the binding energies of the complexes, and the geometry optimization, frequency calculations of the structures, and calculation of binding energies were performed using the density functional approach B3LYP with the 6-31G(d,p) basis set. Moreover, the interaction of the silver–dithizone complexes was supported by UV–Vis spectroscopy, an FT-IR spectrum simulated using B3LYP/6-31G(d,p), and 1H NMR spectra calculated using the B3LYP/6-311+G(2d,p) method, compared with the experimental data. The results showed an ion-exchange interaction between the hydrogen of dithizone and the silver atom, with minimized binding energies of the silver–dithizone interaction, whereas the AgNPs interacted with dithizone in the form of complexes. Moreover, the AgNPs–dithizone complexes were confirmed by using transmission electron microscopy (TEM). Therefore, the results can provide useful information for the determination of complex interactions using computer simulations.
Keywords: silver nanoparticles, dithizone, DFT, NMR
Procedia PDF Downloads 207
1111 Event Driven Dynamic Clustering and Data Aggregation in Wireless Sensor Network
Authors: Ashok V. Sutagundar, Sunilkumar S. Manvi
Abstract:
Energy, delay and bandwidth are the prime issues of a wireless sensor network (WSN). Energy usage optimization and efficient bandwidth utilization are important issues in WSNs, and event-triggered data aggregation facilitates such optimization tasks for the event-affected area in a WSN. Reliable delivery of critical information to the sink node is also a major challenge of WSNs. To tackle these issues, we propose an event-driven dynamic clustering and data aggregation scheme for WSNs that enhances the lifetime of the network by minimizing redundant data transmission. The proposed scheme operates as follows: (1) Whenever an event is triggered, the event-triggered node selects the cluster head. (2) The cluster head gathers data from the sensor nodes within the cluster. (3) The cluster head identifies and classifies the events out of the collected data using a Bayesian classifier. (4) Aggregation of the data is done using a statistical method. (5) The cluster head discovers the paths to the sink node using residual energy, path distance and bandwidth. (6) If the aggregated data is critical, the cluster head sends the aggregated data over multiple paths for reliable data communication. (7) Otherwise, the aggregated data is transmitted towards the sink node over the single path that has more bandwidth and residual energy. The performance of the scheme is validated for various WSN scenarios to evaluate the effectiveness of the proposed approach in terms of aggregation time, cluster formation time and energy consumed for aggregation.
Keywords: wireless sensor network, dynamic clustering, data aggregation, wireless communication
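A minimal sketch of steps (5)-(7) above, under assumed weights: each candidate path to the sink is scored from its residual energy, hop distance, and bandwidth, the best single path is used for ordinary data, and the top two paths are used when the aggregate is flagged critical. The path records and weights are illustrative assumptions, not part of the proposed protocol's specification.

```python
from dataclasses import dataclass

@dataclass
class Path:
    name: str
    residual_energy: float   # J, minimum along the path
    distance: float          # number of hops to the sink
    bandwidth: float         # kbps available on the bottleneck link

def score(p: Path, w_energy=0.4, w_dist=0.3, w_bw=0.3) -> float:
    """Higher is better: favour energy and bandwidth, penalize long paths."""
    return w_energy * p.residual_energy + w_bw * p.bandwidth - w_dist * p.distance

paths = [Path("P1", 4.2, 5, 120.0), Path("P2", 3.1, 3, 200.0), Path("P3", 5.0, 7, 80.0)]
ranked = sorted(paths, key=score, reverse=True)

critical = True   # would be set by the Bayesian event classifier at the cluster head
chosen = ranked[:2] if critical else ranked[:1]   # multipath only for critical data
print("forwarding over:", [p.name for p in chosen])
```

In a real deployment the three attributes would first be normalized to comparable scales before applying the weights.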
Procedia PDF Downloads 451
1110 Optimal Opportunistic Maintenance Policy for a Two-Unit System
Authors: Nooshin Salari, Viliam Makis, Jane Doe
Abstract:
This paper presents a maintenance policy for a system consisting of two units. Unit 1 is gradually deteriorating and is subject to soft failure. Unit 2 has a general lifetime distribution and is subject to hard failure. The condition of unit 1 is monitored periodically, and the unit is considered failed when its deterioration level reaches or exceeds a critical level N. At the failure time of unit 2, the system is considered failed, and unit 2 will be correctively replaced by the next inspection epoch. Unit 1 or unit 2 is preventively replaced when the deterioration level of unit 1 or the age of unit 2 exceeds the related preventive maintenance (PM) level. At the time of corrective or preventive replacement of unit 2, there is an opportunity to replace unit 1 if its deterioration level has reached the opportunistic maintenance (OM) level. If unit 2 fails in an inspection interval, the system stops operating even though unit 1 has not failed. A mathematical model is derived to find the preventive and opportunistic replacement levels for unit 1 and the preventive replacement age for unit 2 that minimize the long-run expected average cost per unit time. The problem is formulated and solved in the semi-Markov decision process (SMDP) framework. A numerical example is provided to illustrate the performance of the proposed model, and a comparison of the proposed model with an optimal policy without an opportunistic maintenance level for unit 1 is carried out.
Keywords: condition-based maintenance, opportunistic maintenance, preventive maintenance, two-unit system
Procedia PDF Downloads 200
1109 Application of the Total Least Squares Estimation Method for an Aircraft Aerodynamic Model Identification
Authors: Zaouche Mohamed, Amini Mohamed, Foughali Khaled, Aitkaid Souhila, Bouchiha Nihad Sarah
Abstract:
The aerodynamic coefficients are important in the evaluation of an aircraft's performance and stability-control characteristics. These coefficients can also be used in automatic flight control systems and in the mathematical model of a flight simulator. The study of the aerodynamic aspect of flying systems is a reserved domain, largely inaccessible to developers: performing tests in a wind tunnel to extract aerodynamic forces and moments requires specific and expensive means, and the glaring lack of published documentation in this field makes the determination of aerodynamic coefficients complicated. This work is devoted to the identification of an aerodynamic model by using an aircraft in a virtual simulated environment. We deal with the identification of the system, present an environment framework based on the Software-in-the-Loop (SIL) methodology, and use Microsoft™ Flight Simulator (FS-2004) as the environment for plane simulation. We propose the total least squares estimation (TLSE) technique to identify the aerodynamic parameters, which are unknown, variable, classified and used in the expression of the piloting law. In this paper, we define each aerodynamic coefficient as the mean of its numerical values. All other variations are considered as modeling uncertainties that will be compensated by the robustness of the piloting control.
Keywords: aircraft aerodynamic model, total least squares estimation, piloting the aircraft, robust control, Microsoft Flight Simulator, MQ-1 predator
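As a hedged sketch of the estimation step, the snippet below solves an errors-in-variables regression A·x ≈ b in the total least squares sense via the SVD of the augmented matrix [A | b], which is the textbook closed-form TLS solution; the regressors here are synthetic stand-ins (e.g. angle of attack and elevator deflection explaining a pitching-moment coefficient), not flight-simulator records.

```python
import numpy as np

def total_least_squares(a, b):
    """Classical TLS solution of A x ~= b via SVD of the augmented matrix [A b]."""
    n = a.shape[1]
    z = np.hstack([a, b.reshape(-1, 1)])
    _, _, vt = np.linalg.svd(z)
    v = vt.T
    # Partition the right singular vector associated with the smallest singular value.
    v_ab, v_bb = v[:n, n:], v[n:, n:]
    return -v_ab @ np.linalg.inv(v_bb)

# Synthetic example with noise on both the regressors and the response.
rng = np.random.default_rng(0)
x_true = np.array([0.08, -1.2])
a_clean = rng.uniform(-1, 1, size=(200, 2))
b = a_clean @ x_true + 0.01 * rng.standard_normal(200)
a_noisy = a_clean + 0.01 * rng.standard_normal(a_clean.shape)

print("TLS estimate:", total_least_squares(a_noisy, b).ravel())
```

Unlike ordinary least squares, this formulation accounts for measurement noise on the regressors as well as the response, which is the motivation for using TLSE on simulator-derived flight data.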
Procedia PDF Downloads 287
1108 From Comfort to Safety: Assessing the Influence of Car Seat Design on Driver Reaction and Performance
Authors: Sabariah Mohd Yusoff, Qamaruddin Adzeem Muhamad Murad
Abstract:
This study investigates the impact of car seat design on driver response time, addressing a critical gap in understanding how ergonomic features influence both performance and safety. Controlled driving experiments were conducted with fourteen participants (11 male, 3 female) across three locations chosen for their varying traffic conditions to account for differences in driver alertness. Participants interacted with various seat designs while performing driving tasks, and objective metrics such as braking and steering response times were meticulously recorded. Advanced statistical methods, including regression analysis and t-tests, were employed to identify design factors that significantly affect driver response times. Subjective feedback was gathered through detailed questionnaires (focused on driving experience and knowledge of response time) and in-depth interviews. This qualitative data was analyzed thematically to provide insights into driver comfort and usability preferences. The study aims to identify key seat design features that impact driver response time and to gain a deeper understanding of driver preferences for comfort and usability. The findings are expected to inform evidence-based guidelines for optimizing car seat design, ultimately enhancing driver performance and safety. The research offers valuable implications for automotive manufacturers and designers, contributing to the development of seats that improve driver response time and overall driving safety.
Keywords: car seat design, driver response time, cognitive driving, ergonomics optimization
Procedia PDF Downloads 24
1107 Studying Educational Processes through a Multifocal Viewpoint: Educational and Social Studies
Authors: Noa Shriki, Atara Shriki
Abstract:
Lifelong learning is considered essential for teachers' professional development, which in turn has implications for the improvement of the entire education system. In recent years, many programs designed to support teachers' professional development have been criticized for not achieving their goal, and a variety of reasons have been proposed to explain the causes of the ineffectiveness of such programs. In this study, we put to the test the possibility that teachers do not change as a result of their participation in professional programs due to a gap between the contents and approaches included in them and teachers' beliefs about teaching and learning. Eighteen elementary school mathematics teachers participated in the study. These teachers were involved in collaborating with their students in inquiring into mathematical ideas while implementing action research. Employing educational theories, the results indicated that this experience had a positive effect on the teachers' professional development. In particular, there was an evident change in their beliefs regarding their role as mathematics teachers. However, when employing a different perspective for analyzing the data, the lens of Kurt Lewin's theory of re-education, we realized that this change of beliefs must be questioned. Therefore, it is suggested that the analysis of educational processes should be carried out not only through common educational theories but also on the basis of social and organizational theories. It is assumed that both the field of education and the fields of social studies and organizational consulting will benefit from the multifocal viewpoint.
Keywords: educational theories, professional development, re-education, teachers' beliefs
Procedia PDF Downloads 141
1106 Longitudinal Vibration of a Micro-Beam in a Micro-Scale Fluid Media
Authors: M. Ghanbari, S. Hossainpour, G. Rezazadeh
Abstract:
In this paper, the longitudinal vibration of a micro-beam in a micro-scale fluid medium has been investigated. The proposed mathematical model for this study consists of a micro-beam and a micro-plate at its free end. An AC voltage is applied to the pair of piezoelectric layers on the upper and lower surfaces of the micro-beam in order to actuate it longitudinally. The whole structure is bounded between two fixed plates on its upper and lower surfaces, and the micro-gap between the structure and the fixed plates is filled with fluid. Fluids behave differently at the micro-scale than at the macro-scale, so the fluid field in the gap has been modeled based on micro-polar theory. The coupled governing equations of motion of the micro-beam and the micro-scale fluid field have been derived. Because of the non-homogeneous boundary conditions, the derived equations have been transformed to an enhanced form with homogeneous boundary conditions. Using a Galerkin-based reduced-order model, the enhanced equations have been discretized over the beam and fluid domains and solved simultaneously in order to obtain the forced response of the micro-beam. The effects of the micro-polar parameters of the fluid, namely the characteristic length scale, coupling parameter and surface parameter, on the response of the micro-beam have been studied.
Keywords: micro-polar theory, Galerkin method, MEMS, micro-fluid
Procedia PDF Downloads 184
1105 A Geographic Overview about Offshore Energy Cleantech in Portugal
Authors: Ana Pego
Abstract:
Environmental technologies have been developed for decades, while clean technologies emerged only a few years ago. In this perspective, the use of cleantech has become very important due to the new era of environmental demands, and the market itself has become more competitive and more collaborative towards a better use of clean technologies. This paper shows the importance of clean technologies in the offshore energy sector in the Portuguese market, their localization and their impact on the economy. Clean technologies are directly related to the renewables cluster and concomitant with economic and social resource optimization criteria, geographic aspects, climate change and soil features. Cleantech is related to regional development and socio-technical transitions in organisations. There are economic and social combinations which allow the specialisation of regions in activities, higher employment, reduced energy costs, local knowledge spillovers, and business collaboration and competitiveness. The methodology used will be quantitative (the IO matrix for Portugal 2013) and qualitative (questionnaires to stakeholders). The mix of both methodologies will confirm whether the use of these technologies has a positive impact on the economic and social variables used in this model. A positive impact on the Portuguese economy is expected, both in investment and in employment, taking into account the localization of offshore renewable activities. This means that offshore renewable investment in Portugal has a few points which should be highlighted: the increase of specialised employment, the localization of specific activities in the territory, and the increase of value added in certain regions. The conclusion will allow researchers and organisations to compare the Portuguese model to other European regions in order to make better use of natural and human resources.
Keywords: cleantech, economic impact, localisation, territory dynamics
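A minimal sketch of the quantitative side (the input-output approach) follows: with a technical-coefficients matrix A, the Leontief inverse (I − A)⁻¹ converts an exogenous increase in final demand from offshore renewable investment into total output requirements by sector. The three-sector matrix and demand shock are illustrative assumptions, not figures from the 2013 Portuguese IO table.

```python
import numpy as np

# Assumed 3-sector technical coefficients matrix A (inputs per unit of output),
# sectors: [manufacturing, services, energy].
A = np.array([
    [0.20, 0.10, 0.15],
    [0.15, 0.25, 0.10],
    [0.05, 0.05, 0.20],
])

# Hypothetical final-demand shock (million EUR) from offshore renewable investment.
delta_demand = np.array([50.0, 20.0, 80.0])

leontief_inverse = np.linalg.inv(np.eye(3) - A)
delta_output = leontief_inverse @ delta_demand     # total output required, all sectors

print("output multipliers per sector:", leontief_inverse.sum(axis=0).round(2))
print("total output impact (million EUR):", delta_output.round(1))
```

Employment effects follow in the same way by multiplying the output change by sectoral employment coefficients.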
Procedia PDF Downloads 227
1104 Inventory Management System of Seasonal Raw Materials of Feeds at San Jose Batangas through Integer Linear Programming and VBA
Authors: Glenda Marie D. Balitaan
Abstract:
The branch of business management that deals with inventory planning and control is known as inventory management. It comprises keeping track of supply levels and forecasting demand, as well as scheduling when and how to order. Keeping excess inventory results in a loss of money, takes up physical space, and raises the risk of damage, spoilage, and loss. On the other hand, too little inventory frequently causes operations to be disrupted and raises the possibility of low customer satisfaction, both of which can be detrimental to a company's reputation. The United Victorious Feed Mill Corporation's present inventory management practices were assessed in terms of inventory level, warehouse allocation, ordering frequency, shelf life, and production requirement. To help the company achieve its optimal level of inventory, a mathematical model was created using integer linear programming. Because the raw materials are seasonal, the objective function was to minimize the cost of purchasing US soya and yellow corn, while warehouse space, annual production requirements, and shelf life were all considered as constraints. To ensure that the user only needs one application to record all relevant information, such as production output and delivery, the researcher built a Visual Basic system. Additionally, the system allows management to change the model's parameters.
Keywords: inventory management, integer linear programming, inventory management system, feed mill
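The sketch below illustrates the flavour of such a formulation with the open-source PuLP library: integer purchase quantities of two seasonal raw materials are chosen to minimize cost subject to warehouse-space and production-requirement constraints. All coefficients (prices, space, requirements) are made-up placeholders, and the actual model in the study includes additional restrictions such as shelf life and ordering frequency.

```python
from pulp import LpMinimize, LpProblem, LpStatus, LpVariable, value

# Hypothetical data for two seasonal raw materials (per 50 kg bag).
price = {"us_soya": 1350, "yellow_corn": 980}      # cost per bag
space = {"us_soya": 0.12, "yellow_corn": 0.10}     # warehouse m^2 per bag
need  = {"us_soya": 4200, "yellow_corn": 6100}     # bags required for the season
warehouse_capacity = 1150                          # m^2 available

prob = LpProblem("seasonal_raw_material_purchase", LpMinimize)
buy = {m: LpVariable(f"buy_{m}", lowBound=0, cat="Integer") for m in price}

prob += sum(price[m] * buy[m] for m in price)                       # minimize purchase cost
for m in price:
    prob += buy[m] >= need[m], f"production_requirement_{m}"        # meet production needs
prob += sum(space[m] * buy[m] for m in price) <= warehouse_capacity, "warehouse_space"

prob.solve()
print(LpStatus[prob.status], {m: value(buy[m]) for m in price})
```

The same model structure can be mirrored in a spreadsheet front end (as done with VBA in the study) so that planners only edit the parameter tables.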
Procedia PDF Downloads 83
1103 Study on the Spatial Vitality of Waterfront Rail Transit Station Area: A Case Study of Main Urban Area in Chongqing
Authors: Lianxue Shi
Abstract:
Urban waterfront rail transit stations exert a dual impact on both the waterfront and the transit station, resulting in a concentration of development elements in the surrounding space. In order to develop the space around the station more effectively, this study adopts the perspective of the integration of station, city, and people. Taking Chongqing as an example and based on the ArcGIS platform, it explores the vitality of the station areas from the three dimensions of crowd activity heat, spatial facility heat, and spatial accessibility, and conducts a comprehensive evaluation and interpretation of the vitality surrounding the waterfront rail transit station areas in Chongqing. The study found that (1) the spatial vitality in the vicinity of waterfront rail transit stations is correlated with the waterfront's functional zoning and the intensity of development: stations situated in waterfront residential and public spaces are more likely to experience a convergence of people, whereas those located in waterfront industrial areas exhibit lower levels of vitality. (2) Effective transportation accessibility plays a pivotal role in maintaining a steady flow of passengers and facilitating their movement. However, the three-dimensionality of urban space in mountainous regions is a notable challenge, leading to some stations experiencing limited accessibility. This underscores the importance of enhancing the optimization of walking space, particularly the access routes from the station to the waterfront area. (3) The density of spatial facilities around waterfront stations in old urban areas lags behind the population's needs, indicating a need to strengthen the allocation of relevant land and resources in these areas.
Keywords: rail transit station, waterfront, influence area, spatial vitality, urban vitality
Procedia PDF Downloads 31
1102 Optimal Sequential Scheduling of Imperfect Maintenance Last Policy for a System Subject to Shocks
Authors: Yen-Luan Chen
Abstract:
Maintenance has a great impact on production capacity and on the quality of the products, and therefore it deserves continuous improvement. A maintenance procedure performed before a failure is called preventive maintenance (PM). Sequential PM, which specifies that a system should be maintained at a sequence of intervals of unequal lengths, is one of the commonly used PM policies. This article proposes a generalized sequential PM policy for a system subject to shocks, with imperfect maintenance and random working times. The shocks arrive according to a non-homogeneous Poisson process (NHPP) with a varied intensity function in each maintenance interval. When a shock occurs, the system suffers one of two types of failure with number-dependent probabilities: a type-I (minor) failure, which is rectified by a minimal repair, or a type-II (catastrophic) failure, which is removed by corrective maintenance (CM). Imperfect maintenance is carried out to improve the system failure characteristics due to the altered shock process. The sequential preventive maintenance-last (PML) policy is defined such that the system is maintained, before any CM occurs, at a planned time Ti or at the completion of a working time in the i-th maintenance interval, whichever occurs last. At the N-th maintenance, the system is replaced rather than maintained. This article is the first to take up the sequential PML policy with random working times and imperfect maintenance in reliability engineering. The optimal preventive maintenance schedule that minimizes the mean cost rate of a replacement cycle is derived analytically and determined in terms of its existence and uniqueness. The proposed models provide a general framework for analyzing maintenance policies in reliability theory.
Keywords: optimization, preventive maintenance, random working time, minimal repair, replacement, reliability
Procedia PDF Downloads 275
1101 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain
Authors: Bita Payami-Shabestari, Dariush Eslami
Abstract:
The purpose of this paper is to develop a multi-product economic production quantity model under a vendor-managed inventory policy with restrictions including limited warehouse space, budget, number of orders, average shortage time and maximum permissible shortage. Since the costs cannot be predicted with certainty, it is assumed that the data behave under an uncertain environment. The problem is first formulated in the framework of a bi-objective multi-product economic production quantity model. Then, the problem is solved with three multi-objective decision-making (MODM) methods, and the three methods are compared, in terms of the optimal values of the two objective functions and the central processing unit (CPU) time, using statistical analysis and multi-attribute decision-making (MADM). The results demonstrate that the augmented-constraint method performs better than global criteria and goal programming in terms of the optimal values of the two objective functions and the CPU time. A sensitivity analysis is done to illustrate the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product economic production quantity model under a vendor-managed inventory policy with several constraints.
Keywords: economic production quantity, random cost, supply chain management, vendor-managed inventory
Procedia PDF Downloads 129
1100 Development of an Interactive Display-Control Layout Design System for Trains Based on Train Drivers’ Mental Models
Authors: Hyeonkyeong Yang, Minseok Son, Taekbeom Yoo, Woojin Park
Abstract:
Human error is the most salient contributing factor to railway accidents. To reduce the frequency of human errors, many researchers and train designers have adopted ergonomic design principles for designing the display-control layout in the rail cab, and a number of approaches exist for designing the display-control layout based on optimization methods. However, the ergonomically optimized layout design may not be the best design for train drivers, since the drivers have their own mental models based on their experiences. Consequently, the drivers may prefer the existing display-control layout design over the optimal design, and may even show better driving performance using the existing design than using the optimal design. Thus, in addition to ergonomic design principles, train drivers' mental models also need to be considered when designing the display-control layout in the rail cab. This paper developed an ergonomic assessment system for display-control layout design and an interactive layout design system that can generate design alternatives and calculate the ergonomic assessment score in real time. The design alternatives generated by the interactive layout design system may not include the optimal design from the ergonomics point of view. However, the system's strength is that it considers train drivers' mental models, which can help generate alternatives that are more friendly and easier to use for train drivers. Also, with the developed system, non-experts in ergonomics, such as train drivers, can refine the design alternatives and improve the ergonomic assessment score in real time.
Keywords: display-control layout design, interactive layout design system, mental model, train drivers
Procedia PDF Downloads 306
1099 Energy Efficiency and Sustainability Analytics for Reducing Carbon Emissions in Oil Refineries
Authors: Gaurav Kumar Sinha
Abstract:
The oil refining industry, significant in its energy consumption and carbon emissions, faces increasing pressure to reduce its environmental footprint. This article explores the application of energy efficiency and sustainability analytics as crucial tools for reducing carbon emissions in oil refineries. Through a comprehensive review of current practices and technologies, this study highlights innovative analytical approaches that can significantly enhance energy efficiency. We focus on the integration of advanced data analytics, including machine learning and predictive modeling, to optimize process controls and energy use. These technologies are examined for their potential to not only lower energy consumption but also reduce greenhouse gas emissions. Additionally, the article discusses the implementation of sustainability analytics to monitor and improve environmental performance across various operational facets of oil refineries. We explore case studies where predictive analytics have successfully identified opportunities for reducing energy use and emissions, providing a template for industry-wide application. The challenges associated with deploying these analytics, such as data integration and the need for skilled personnel, are also addressed. The paper concludes with strategic recommendations for oil refineries aiming to enhance their sustainability practices through the adoption of targeted analytics. By implementing these measures, refineries can achieve significant reductions in carbon emissions, aligning with global environmental goals and regulatory requirements.
Keywords: energy efficiency, sustainability analytics, carbon emissions, oil refineries, data analytics, machine learning, predictive modeling, process optimization, greenhouse gas reduction, environmental performance
Procedia PDF Downloads 31
1098 A Study on Prediction Model for Thermally Grown Oxide Layer in Thermal Barrier Coating
Authors: Yongseok Kim, Jeong-Min Lee, Hyunwoo Song, Junghan Yun, Jungin Byun, Jae-Mean Koo, Chang-Sung Seok
Abstract:
Thermal barrier coatings (TBCs) are applied to gas turbine components to protect them from extremely high-temperature conditions. Since the metallic substrate cannot endure such severe conditions, delamination of the TBC can cause failure of the system. Thus, the delamination life of the TBC is one of the most important issues for designing components operating at high temperature. The thermal stress caused by the thermally grown oxide (TGO) layer is known as one of the major failure mechanisms of TBCs. This thermal stress mainly occurs at the interface between the TGO layer and the ceramic top coat layer, and it is strongly influenced by the thickness and shape of the TGO layer. In this study, isothermal oxidation is conducted on coin-type TBC specimens prepared by the air plasma spray (APS) method. After isothermal oxidation at various temperature and time conditions, the thickness and shape (rumpling shape) of the TGO are investigated, and the test data are processed by numerical analysis. Finally, the test data are arranged into a mathematical prediction model with two variables (temperature and exposure time) which can predict the thickness and rumpling shape of the TGO.
Keywords: thermal barrier coating, thermally grown oxide, thermal stress, isothermal oxidation, numerical analysis
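A hedged sketch of the final step: TGO growth is commonly described by a parabolic-type law with an Arrhenius temperature dependence, h(T, t) = A·exp(−Q/RT)·tⁿ, and the snippet below fits such a two-variable surface to synthetic thickness data with SciPy, reparametrized around a reference condition to keep the fit well-conditioned. The functional form, constants, and data are assumptions for illustration; the paper's regression model may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314                       # J/(mol K)
T_REF, t_REF = 1373.0, 100.0    # reference temperature (K) and time (h)

def tgo_thickness(xt, h_ref, q, n):
    """h = h_ref * exp(-(Q/R)(1/T - 1/T_ref)) * (t/t_ref)^n  (equivalent to A exp(-Q/RT) t^n)."""
    temp, time = xt
    return h_ref * np.exp(-(q / R) * (1.0 / temp - 1.0 / T_REF)) * (time / t_REF) ** n

# Synthetic oxidation data: temperature (K), exposure time (h), TGO thickness (um).
temp = np.array([1273, 1273, 1273, 1373, 1373, 1373, 1423, 1423, 1423], dtype=float)
time = np.array([50, 200, 500, 50, 200, 500, 50, 200, 500], dtype=float)
h = np.array([1.1, 2.3, 3.6, 2.0, 4.1, 6.5, 2.8, 5.6, 8.9])

popt, _ = curve_fit(tgo_thickness, (temp, time), h, p0=[3.0, 1.0e5, 0.5])
h_ref, q, n = popt
print(f"h_ref={h_ref:.2f} um, Q={q/1000:.0f} kJ/mol, n={n:.2f}")
print("predicted thickness at 1373 K, 300 h:",
      round(float(tgo_thickness((1373.0, 300.0), *popt)), 2), "um")
```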
Procedia PDF Downloads 342
1097 Noise Source Identification on Urban Construction Sites Using Signal Time Delay Analysis
Authors: Balgaisha G. Mukanova, Yelbek B. Utepov, Aida G. Nazarova, Alisher Z. Imanov
Abstract:
The problem of identifying local noise sources on a construction site using a sensor system is considered. Mathematical modeling of the signals detected by the sensors was carried out, taking into account signal decay and the signal delay time between the source and detector. Recordings of noise produced by construction tools were used as the time dependence of the noise. Synthetic sensor data were constructed based on these recordings, and a model of the propagation of acoustic waves from a point source in three-dimensional space was applied; all sensors and sources are assumed to be located in the same plane. A source localization method is tested that is based on the signal time delay between two adjacent detectors and on plotting the direction to the source: the noise source's position is determined from the intersection of the two direction lines. The cases of one dominant source and of two sources in the presence of several other sources of lower intensity are considered, with the number of detectors varying from three to eight. The intensity of the noise field in the assessed area is plotted. A signal of two-second duration is considered; the source is located for successive parts of the signal with a duration above 0.04 s, and the final result is obtained by computing the average value.
Keywords: acoustic model, direction of arrival, inverse source problem, sound localization, urban noises
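The sketch below illustrates the geometric core of the method under simplifying assumptions: for each pair of adjacent sensors the measured time delay gives the angle of arrival of a plane wave (a far-field approximation), which defines a direction line through the pair's midpoint, and the source estimate is the intersection of two such lines. Sensor coordinates, delays, the delay sign convention, and the speed of sound are placeholder values, not the study's measurements.

```python
import numpy as np

C = 343.0  # assumed speed of sound in air, m/s

def direction_from_pair(p1, p2, delay):
    """Far-field approximation: the delay between two adjacent sensors gives the
    angle between their baseline and the arrival direction of the wave."""
    baseline = p2 - p1
    d = np.linalg.norm(baseline)
    theta = np.arccos(np.clip(C * delay / d, -1.0, 1.0))
    base_angle = np.arctan2(baseline[1], baseline[0])
    direction = np.array([np.cos(base_angle + theta), np.sin(base_angle + theta)])
    return (p1 + p2) / 2.0, direction           # a point on the line and its direction

def intersect(pa, da, pb, db):
    """Intersection of the two direction lines pa + s*da and pb + t*db."""
    s = np.linalg.solve(np.column_stack([da, -db]), pb - pa)[0]
    return pa + s * da

# Placeholder sensor positions (m) and measured delays (s) for two adjacent pairs.
s1, s2 = np.array([0.0, 0.0]), np.array([2.0, 0.0])
s3, s4 = np.array([10.0, 0.0]), np.array([12.0, 0.0])
m1, d1 = direction_from_pair(s1, s2, 0.0041)
m2, d2 = direction_from_pair(s3, s4, -0.0023)
print("estimated source position:", intersect(m1, d1, m2, d2).round(2))
```

Averaging such estimates over successive signal windows, as described in the abstract, smooths out the effect of transient interference.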
Procedia PDF Downloads 62
1096 The BNCT Project Using the Cf-252 Source: Monte Carlo Simulations
Authors: Marta Błażkiewicz-Mazurek, Adam Konefał
Abstract:
The project can be divided into three main parts: i. modeling the Cf-252 neutron source and conducting an experiment to verify the correctness of the obtained results, ii. design of the BNCT system infrastructure, and iii. analysis of the results from the logical detector. Modeling of the Cf-252 source included designing the shape and size of the source as well as the energy and spatial distribution of the emitted neutrons. Two options were considered: a point source and a cylindrical spatial source. The energy distribution corresponded to various spectra taken from the specialized literature, and directionally isotropic neutron emission was simulated. The simulation results were compared with experimental values determined using the activation detector method with indium foils and cadmium shields, and the relative fluence rates of thermal and resonance neutrons were compared at chosen places in the vicinity of the source. The second part of the project, related to the modeling of the BNCT infrastructure, consisted of developing a simulation program taking into account all the essential components of this system. Materials with moderating, absorbing, and backscattering properties for neutrons were adopted into the project, and a gamma radiation filter was introduced into the beam output system. The analysis of the simulation results obtained using a logical detector located at the beam exit of the BNCT infrastructure included the neutron energies and their spatial distribution. Optimization of the system involved changing the size and materials of the system to obtain a suitably collimated beam of thermal neutrons.
Keywords: BNCT, Monte Carlo, neutrons, simulation, modeling
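A hedged sketch of one ingredient of part (i): sampling neutron energies from a Maxwellian fission spectrum (a temperature parameter of about 1.42 MeV is often quoted for Cf-252 spontaneous fission and is taken here as an assumption) together with isotropic emission directions, which is the kind of source definition passed to a Monte Carlo transport code. The exact spectrum used in the study was taken from the literature and may be a different parameterization (e.g., a Watt spectrum).

```python
import numpy as np

rng = np.random.default_rng(42)
T = 1.42   # MeV, assumed Maxwellian temperature for Cf-252 spontaneous fission

def sample_source(n):
    """Sample n source neutrons: Maxwellian energies and isotropic directions."""
    # Maxwellian fission spectrum f(E) ~ sqrt(E) exp(-E/T), sampled as a Gamma(3/2, T).
    e = -T * (np.log(rng.random(n))
              + np.log(rng.random(n)) * np.cos(0.5 * np.pi * rng.random(n)) ** 2)
    # Isotropic emission: uniform cosine of the polar angle, uniform azimuth.
    mu = 2.0 * rng.random(n) - 1.0
    phi = 2.0 * np.pi * rng.random(n)
    d = np.column_stack([np.sqrt(1.0 - mu**2) * np.cos(phi),
                         np.sqrt(1.0 - mu**2) * np.sin(phi),
                         mu])
    return e, d

e, d = sample_source(100_000)
print(f"mean neutron energy: {e.mean():.2f} MeV (theory: {1.5 * T:.2f} MeV)")
print(f"fraction above 1 MeV: {(e > 1.0).mean():.2f}")
```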
Procedia PDF Downloads 29